A Novel UCP Model Based on Artificial Neural Networks and Orthogonal Arrays
Abstract
1. Introduction
- An approach based on function point analysis, which estimates the size of the functionality of the software being developed. Within this approach, two models were initially distinguished: IFPUG (created by the International Function Point Users Group) [6] and Mk II (Mark II) [7]. Subsequently, within the IFPUG model, the following were developed: NESMA (created by the Netherlands Software Metrics Association) [8], IFPUG 4.1, and COSMIC FP (the COmmon Software Measurement International Consortium function point) [9].
1.1. Use Case Point Analysis
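The core UCP quantities used throughout the paper (UAW, UUCW, UUCP, TCF, ECF, AUCP) follow Karner's standard calculation. The snippet below is a minimal sketch of that calculation; the example weights and the productivity factor of 20 person-hours per UCP are textbook defaults, not necessarily the values used in this study.

```python
# Minimal sketch of the standard (Karner-style) UCP calculation.
# The numeric inputs below are illustrative, not taken from the paper.

def unadjusted_ucp(uaw: float, uucw: float) -> float:
    """UUCP = Unadjusted Actor Weight + Unadjusted Use Case Weight."""
    return uaw + uucw

def adjusted_ucp(uucp: float, tcf: float, ecf: float) -> float:
    """AUCP = UUCP * Technical Complexity Factor * Environmental Complexity Factor."""
    return uucp * tcf * ecf

def effort_person_hours(aucp: float, productivity_factor: float = 20.0) -> float:
    """Effort = AUCP * productivity factor (person-hours per UCP, textbook default 20)."""
    return aucp * productivity_factor

uucp = unadjusted_ucp(uaw=12, uucw=220)        # 232
aucp = adjusted_ucp(uucp, tcf=0.95, ecf=0.92)  # ~202.8
print(effort_person_hours(aucp))               # ~4056 person-hours
```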
1.2. Taguchi Orthogonal Arrays
2. Related Work
- Examination of the influence of two linearly dependent input values (UUCP and AUCP) on the change in the MMRE value;
- Comparative analysis of two different architectures of artificial neural networks and the obtained results;
- Division of the dataset in a 70:30 ratio, i.e., 70 projects from the selected dataset were used for training, while 30 were used for testing (see the sketch after this list);
- Finding the most efficient methods of encoding and decoding input values, such as the fuzzification method;
- Requiring only a minimal number of experiments to be performed;
- Testing and validation on other datasets.
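Two of the items above, the 70:30 split and the MMRE criterion, rely on definitions used throughout the paper. The following is a minimal sketch of both, assuming a NumPy environment and illustrative variable names.

```python
import numpy as np

# MRE and MMRE as used in the paper; the split below mirrors the 70:30
# division described above (illustrative indices, not the actual data).

def mre(actual: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Magnitude of Relative Error per project: |actual - predicted| / actual."""
    return np.abs(actual - predicted) / actual

def mmre(actual: np.ndarray, predicted: np.ndarray) -> float:
    """Mean Magnitude of Relative Error over a set of projects."""
    return float(np.mean(mre(actual, predicted)))

rng = np.random.default_rng(seed=0)
indices = rng.permutation(100)                 # 100 projects split 70:30
train_idx, test_idx = indices[:70], indices[70:]
print(len(train_idx), len(test_idx))           # 70 30
```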
3. New, Improved UCP—Our Approach
- 1. UCP and ANN-L16;
- 2. UCP and ANN-L36prim.
- Training of two different ANN architectures constructed according to the corresponding Taguchi orthogonal vector plans (ANN-L16 and ANN-L36prim);
- Testing of the ANN that gave the best results (the lowest MMRE value) in the first part of the experiment, for both proposed architectures, on the same dataset;
- Validation of the ANN that gave the best results (the lowest MMRE value) in the first part of the experiment, for each selected architecture, but using different datasets.
3.1. Data Sets Used in the UCP Approach
3.2. The Methodology Used within the Improved UCP Model
- Step 1: Input layer
- Step 2:
- Step 3:
$W_{1,L2} = \mathrm{cost}_9 + \mathrm{cost}_{10} + \dots + \mathrm{cost}_{16}$
…
$W_{15,L1} = \mathrm{cost}_1 + \mathrm{cost}_6 + \dots + \mathrm{cost}_{16}$
$W_{15,L2} = \mathrm{cost}_2 + \mathrm{cost}_3 + \dots + \mathrm{cost}_{15}$
where $\mathrm{cost}_i = \sum \mathrm{MRE}(\mathrm{ANN}_i)$ is the total MRE of the $i$-th trained network.
The weight levels are then narrowed toward the better-performing level by interval halving, e.g.:
$W_{1,L2}^{\mathrm{new}} = W_{1,L2}^{\mathrm{old}} + \left(W_{1,L3}^{\mathrm{old}} - W_{1,L2}^{\mathrm{old}}\right)/2$, $W_{1,L3}^{\mathrm{new}} = W_{1,L3}^{\mathrm{old}}$
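A compact way to read Step 3: for every weight factor, the costs of all experiments in which that factor took a given level are summed, and the level values are then moved halfway toward the level with the lower total cost. The sketch below implements that reading; array shapes and helper names are assumptions for illustration, not the authors' exact code.

```python
import numpy as np

def level_cost_sums(oa: np.ndarray, costs: np.ndarray, n_levels: int) -> np.ndarray:
    """oa[i, j] = level (1-based) of weight factor j in experiment i;
    costs[i] = cost_i = sum of MRE over the test projects for ANN_i.
    Returns sums[j, l] = total cost of all experiments with factor j at level l+1."""
    n_experiments, n_factors = oa.shape
    sums = np.zeros((n_factors, n_levels))
    for j in range(n_factors):
        for l in range(n_levels):
            sums[j, l] = costs[oa[:, j] == l + 1].sum()
    return sums

def halve_toward_best(level_values: np.ndarray, sums: np.ndarray) -> np.ndarray:
    """level_values[j, l] = current numeric value of level l+1 for factor j.
    Every non-best level moves halfway toward the lowest-cost level,
    mirroring W1L2_new = W1L2_old + (W1L3_old - W1L2_old)/2 above."""
    new_values = level_values.copy()
    for j in range(level_values.shape[0]):
        best = int(np.argmin(sums[j]))
        for l in range(level_values.shape[1]):
            if l != best:
                new_values[j, l] += (level_values[j, best] - level_values[j, l]) / 2.0
    return new_values
```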
- Step 4:
- Step 5:
- Step 6:
- The influence of the input parameter UUCP is calculated as in Equation (37): $\delta_1 = \mathrm{mean}(MMRE) - \mathrm{mean}(MMRE_1)$, where $MMRE_1$ is obtained with UUCP set to 0;
- The influence of the input parameter AUCP is calculated as in Equation (38): $\delta_2 = \mathrm{mean}(MMRE) - \mathrm{mean}(MMRE_2)$, where $MMRE_2$ is obtained with AUCP set to 0 (a minimal sketch follows these items).
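The influence calculation above can be expressed as a small routine: zero out one input column, re-estimate, and compare the two MMRE values. The sketch below assumes a generic `predict` callable standing in for the trained network; the names and data are illustrative only.

```python
import numpy as np

def influence_of_input(predict, X: np.ndarray, actual: np.ndarray, column: int) -> float:
    """delta = MMRE(all inputs) - MMRE(inputs with X[:, column] set to 0),
    in the spirit of Equations (37) and (38)."""
    def _mmre(predicted):
        return float(np.mean(np.abs(actual - predicted) / actual))

    baseline = _mmre(predict(X))
    X_zeroed = X.copy()
    X_zeroed[:, column] = 0.0              # e.g., the UUCP or AUCP column
    return baseline - _mmre(predict(X_zeroed))

# Illustrative usage with a toy linear "model" that uses both inputs:
def dummy_predict(M: np.ndarray) -> np.ndarray:
    return 10.0 * M[:, 0] + 10.0 * M[:, 1]

X = np.array([[232.0, 202.8], [310.0, 270.0]])   # rows of [UUCP, AUCP]
actual = np.array([4100.0, 5400.0])
print(influence_of_input(dummy_predict, X, actual, column=0))
```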
- Step 7: Correlation, Prediction
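Step 7 evaluates the final models with Pearson's and Spearman's correlation between actual and estimated effort, and with the PRED(x) criterion reported in the results tables. A hedged sketch using SciPy and illustrative numbers:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def pred(actual: np.ndarray, predicted: np.ndarray, x: float) -> float:
    """PRED(x): percentage of projects whose MRE does not exceed x%."""
    mre = np.abs(actual - predicted) / actual
    return float(np.mean(mre <= x / 100.0) * 100.0)

actual = np.array([6162.6, 6300.0, 6450.5, 6525.3])      # illustrative efforts
predicted = np.array([6100.0, 6350.0, 6400.0, 6600.0])

r_pearson, _ = pearsonr(actual, predicted)
rho_spearman, _ = spearmanr(actual, predicted)
print(r_pearson, rho_spearman, pred(actual, predicted, 25))
```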
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Fadhil, A.A.; Alsarraj, R.G.H.; Altaie, A.M. Software Cost Estimation Based on Dolphin Algorithm. IEEE Access 2020, 8, 75279–75287.
- Stoica, A.; Blosiu, J. Neural Learning using orthogonal arrays. Adv. Intell. Syst. 1997, 41, 418.
- Khaw, J.F.; Lim, B.; Lim, L.E. Optimal design of neural networks using the Taguchi method. Neurocomputing 1995, 7, 225–245.
- Rankovic, N.; Rankovic, D.; Ivanovic, M.; Lazic, L. A New Approach to Software Effort Estimation Using Different Artificial Neural Network Architectures and Taguchi Orthogonal Arrays. IEEE Access 2021, 9, 26926–26936.
- Langsari, K.; Sarno, R. Optimizing effort and time parameters of COCOMO II estimation using fuzzy multi-objective PSO. In Proceedings of the 2017 4th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI), Yogyakarta, Indonesia, 19–21 September 2017; pp. 1–6.
- Quesada-López, C.; Madrigal-Sánchez, D.; Jenkins, M. An Empirical Analysis of IFPUG FPA and COSMIC FP Measurement Methods. In Proceedings of the International Conference on Information Technology & Systems, Bogota, Colombia, 5–7 February 2020; Springer: Cham, Switzerland, 2020; pp. 265–274.
- Symons, C. Function point analysis: Difficulties and improvements. IEEE Trans. Softw. Eng. 1988, 14, 2–11.
- Lavazza, L.; Liu, G. An Empirical Evaluation of the Accuracy of NESMA Function Points Estimates. ICSEA 2019, 2019, 36.
- Ochodek, M.; Kopczyńska, S.; Staron, M. Deep learning model for end-to-end approximation of COSMIC functional size based on use-case names. Inf. Softw. Technol. 2020, 123, 106310.
- Kläs, M.; Trendowicz, A.; Wickenkamp, A.; Münch, J.; Kikuchi, N.; Ishigai, Y. The Use of Simulation Techniques for Hybrid Software Cost Estimation and Risk Analysis. Adv. Comput. 2008, 74, 115–174.
- Kirmani, M.M.; Wahid, A. Revised use case point (re-ucp) model for software effort estimation. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 65–71.
- Azzeh, M.; Nassif, A.B.; Banitaan, S. Comparative analysis of soft computing techniques for predicting software effort based use case points. IET Softw. 2018, 12, 19–29.
- Grover, M.; Bhatia, P.K.; Mittal, H. Estimating Software Test Effort Based on Revised UCP Model Using Fuzzy Technique. In Proceedings of the International Conference on Information and Communication Technology for Intelligent Systems, Ahmedabad, India, 25–26 March 2017; Springer: Cham, Switzerland, 2017; pp. 490–498.
- Ani, Z.C.; Basri, S.; Sarlan, A. A reusability assessment of UCP-based effort estimation framework using object-oriented approach. J. Telecommun. Electron. Comput. Eng. (JTEC) 2017, 9, 111–114.
- Sharma, P.; Singh, J. Systematic Literature Review on Software Effort Estimation Using Machine Learning Approaches. In Proceedings of the 2017 International Conference on Next Generation Computing and Information Systems (ICNGCIS), Jammu, India, 11–12 December 2017; pp. 43–47.
- Karner, G. Resource Estimation for Objectory Projects; Objective Systems SF AB: Kista, Sweden, 1993; pp. 1–9.
- Nassif, A.B.; Capretz, L.F.; Ho, D. Enhancing Use Case Points Estimation Method using Soft Computing Techniques. J. Glob. Res. Comput. Sci. 2010, 1, 12–21.
- Couellan, N. Probabilistic robustness estimates for feed-forward neural networks. Neural Netw. 2021, 142, 138–147.
- Mukherjee, S.; Malu, R.K. Optimization of project effort estimate using neural network. In Proceedings of the 2014 IEEE International Conference on Advanced Communications, Control and Computing Technologies, Ramanathapuram, India, 8–10 May 2014; pp. 406–410.
- Dar, A.A.; Anuradha, N. Use of orthogonal arrays and design of experiment via Taguchi L9 method in probability of default. Accounting 2018, 4, 113–122.
- Alshibli, M.; El Sayed, A.; Kongar, E.; Sobh, T.; Gupta, S.M. A Robust Robotic Disassembly Sequence Design Using Orthogonal Arrays and Task Allocation. Robotics 2019, 8, 20.
- Carroll, E.R. Estimating software based on use case points. In Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA '05), San Diego, CA, USA, 16–20 October 2005; pp. 257–265.
- Nassif, A.B. Software Size and Effort Estimation from Use Case Diagrams Using Regression and Soft Computing Models. Ph.D. Thesis, Western University, London, ON, Canada, 2012.
- Azzeh, M. Fuzzy Model Tree for Early Effort Estimation. In Proceedings of the 12th International Conference on Machine Learning and Applications, Miami, FL, USA, 4–7 December 2013; pp. 117–121.
- Urbanek, T.; Prokopova, Z.; Silhavy, R.; Sehnalek, S. Using Analytical Programming and UCP Method for Effort Estimation. In Modern Trends and Techniques in Computer Science; Advances in Intelligent Systems and Computing; Springer: Berlin/Heidelberg, Germany, 2014; Volume 285, pp. 571–581.
- Kaur, A.; Kaur, K. Effort Estimation for Mobile Applications Using Use Case Point (UCP). In Smart Innovations in Communication and Computational Sciences; Springer: Singapore, 2019; pp. 163–172.
- Mahmood, Y.; Kama, N.; Azmi, A. A systematic review of studies on use case points and expert-based estimation of software development effort. J. Softw. Evol. Process 2020, 32, e2245.
- Gebretsadik, K.K.; Sewunetie, W.T. Designing Machine Learning Method for Software Project Effort Prediction. Comput. Sci. Eng. 2019, 9, 6–11.
- Alves, R.; Valente, P.; Nunes, N.J. Improving software effort estimation with human-centric models: A comparison of UCP and iUCP accuracy. In Proceedings of the 5th ACM SIGCHI Symposium on Engineering Interactive Computing Systems, London, UK, 24–27 June 2013; pp. 287–296.
- Rankovic, D.; Rankovic, N.; Ivanovic, M.; Lazic, L. Convergence rate of Artificial Neural Networks for estimation in software development projects. Inf. Softw. Technol. 2021, 138, 106627.
- Rankovic, N.; Rankovic, D.; Ivanovic, M.; Lazic, L. Improved Effort and Cost Estimation Model Using Artificial Neural Networks and Taguchi Method with Different Activation Functions. Entropy 2021, 23, 854.
- Available online: https://data.mendeley.com/datasets/2rfkjhx3cn/1 (accessed on 4 February 2020).
- Lam, H. A review on stability analysis of continuous-time fuzzy-model-based control systems: From membership-function-independent to membership-function-dependent analysis. Eng. Appl. Artif. Intell. 2018, 67, 390–408.
- Ghosh, L.; Saha, S.; Konar, A. Decoding emotional changes of android-gamers using a fused Type-2 fuzzy deep neural network. Comput. Hum. Behav. 2021, 116, 106640.
- Ritu, O.P. Software Quality Prediction Method Using Fuzzy Logic. Turk. J. Comput. Math. Educ. 2021, 12, 807–817.
- Boldin, M.V. On the Power of Pearson’s Test under Local Alternatives in Autoregression with Outliers. Math. Methods Stat. 2019, 28, 57–65.
- Blum, D.; Holling, H. Spearman’s law of diminishing returns. A meta-analysis. Intelligence 2017, 65, 60–66.
- Wang, H.; Wang, Z. Deterministic and probabilistic life-cycle cost analysis of pavement overlays with different pre-overlay conditions. Road Mater. Pavement Des. 2019, 20, 58–73.
- Qiao, L. Deep learning based software defect prediction. Neurocomputing 2020, 385, 100–110.
- Shah, M.A. Ensembling Artificial Bee Colony with Analogy-Based Estimation to Improve Software Development Effort Prediction. IEEE Access 2020, 8, 58402–58415.
- Manali, P.; Maity, R.; Ratnam, J.V.; Nonaka, M.; Behera, S.K. Long-lead prediction of ENSO modoki index using machine learning algorithms. Sci. Rep. 2020, 10, 365.
ANN-L16 | W1 | W2 | W3 | W4 | W5 | W6 | W7 | W8 | W9 | W10 | W11 | W12 | W13 | W14 | W15 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ANN1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 |
ANN2 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 |
ANN3 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 |
ANN4 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L1 | L1 | L1 | L1 |
ANN5 | L1 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 | L2 |
ANN6 | L1 | L2 | L2 | L1 | L1 | L2 | L2 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 |
ANN7 | L1 | L2 | L2 | L2 | L2 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L1 | L1 |
ANN8 | L1 | L2 | L2 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L1 | L1 | L2 | L2 |
ANN9 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 |
ANN10 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 |
ANN11 | L2 | L1 | L2 | L2 | L1 | L2 | L1 | L1 | L2 | L1 | L2 | L2 | L1 | L2 | L1 |
ANN12 | L2 | L1 | L2 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L1 | L2 | L1 | L2 |
ANN13 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 | L2 | L1 |
ANN14 | L2 | L2 | L1 | L1 | L2 | L2 | L1 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 |
ANN15 | L2 | L2 | L1 | L2 | L1 | L1 | L2 | L1 | L2 | L2 | L1 | L2 | L1 | L1 | L2 |
ANN16 | L2 | L2 | L1 | L2 | L1 | L1 | L2 | L2 | L1 | L1 | L2 | L1 | L2 | L2 | L1 |
ANN-L36prim | W1 | W2 | W3 | W4 | W5 | W6 | W7 | W8 | W9 | W10 | W11 | W12 | W13 | W14 | W15 | W16 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ANN1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 | L1 |
ANN2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L2 | L1 | L1 | L1 | L1 |
ANN3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L3 | L1 | L1 | L1 | L1 |
ANN4 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L3 | L3 | L3 | L3 | L1 | L2 | L2 | L1 |
ANN5 | L1 | L1 | L1 | L1 | L3 | L3 | L3 | L3 | L2 | L2 | L2 | L2 | L1 | L2 | L2 | L1 |
ANN6 | L3 | L3 | L3 | L3 | L1 | L1 | L1 | L1 | L2 | L2 | L2 | L2 | L1 | L2 | L2 | L1 |
ANN7 | L1 | L1 | L2 | L3 | L1 | L2 | L3 | L3 | L1 | L1 | L1 | L3 | L2 | L1 | L2 | L1 |
ANN8 | L2 | L2 | L3 | L1 | L2 | L3 | L1 | L1 | L2 | L3 | L3 | L1 | L2 | L1 | L2 | L1 |
ANN9 | L3 | L3 | L1 | L2 | L3 | L1 | L2 | L2 | L3 | L1 | L1 | L2 | L2 | L1 | L2 | L1 |
ANN10 | L1 | L1 | L3 | L2 | L1 | L3 | L2 | L3 | L2 | L1 | L3 | L2 | L2 | L2 | L1 | L1 |
ANN11 | L2 | L2 | L1 | L3 | L2 | L1 | L3 | L1 | L3 | L2 | L1 | L3 | L2 | L2 | L1 | L1 |
ANN12 | L3 | L3 | L2 | L1 | L3 | L2 | L1 | L2 | L1 | L3 | L2 | L1 | L2 | L2 | L1 | L1 |
ANN13 | L1 | L2 | L3 | L1 | L3 | L2 | L1 | L3 | L3 | L2 | L1 | L2 | L1 | L1 | L1 | L2 |
ANN14 | L2 | L3 | L1 | L2 | L1 | L3 | L2 | L1 | L1 | L3 | L2 | L3 | L1 | L1 | L1 | L2 |
ANN15 | L3 | L1 | L2 | L3 | L2 | L1 | L3 | L2 | L2 | L1 | L3 | L1 | L1 | L1 | L1 | L2 |
ANN16 | L1 | L2 | L3 | L2 | L1 | L1 | L3 | L2 | L3 | L3 | L2 | L1 | L1 | L2 | L2 | L2 |
ANN17 | L2 | L3 | L1 | L3 | L2 | L2 | L1 | L3 | L1 | L1 | L3 | L2 | L1 | L2 | L2 | L2 |
ANN18 | L3 | L1 | L2 | L1 | L3 | L3 | L2 | L1 | L2 | L2 | L1 | L3 | L1 | L2 | L2 | L2 |
ANN19 | L1 | L2 | L1 | L3 | L3 | L3 | L1 | L2 | L2 | L1 | L2 | L3 | L2 | L1 | L2 | L2 |
ANN20 | L2 | L3 | L2 | L1 | L1 | L1 | L2 | L3 | L3 | L2 | L3 | L1 | L2 | L1 | L2 | L2 |
ANN21 | L3 | L1 | L3 | L2 | L2 | L2 | L3 | L1 | L1 | L3 | L1 | L2 | L2 | L1 | L2 | L2 |
ANN22 | L1 | L2 | L2 | L3 | L3 | L1 | L2 | L1 | L1 | L3 | L3 | L2 | L2 | L2 | L1 | L2 |
ANN23 | L2 | L3 | L3 | L1 | L1 | L2 | L3 | L2 | L2 | L1 | L1 | L3 | L2 | L2 | L1 | L2 |
ANN24 | L3 | L1 | L1 | L2 | L2 | L3 | L1 | L3 | L3 | L2 | L2 | L1 | L2 | L2 | L1 | L2 |
ANN25 | L1 | L3 | L2 | L1 | L2 | L3 | L3 | L1 | L3 | L1 | L2 | L2 | L1 | L1 | L1 | L3 |
ANN26 | L2 | L1 | L3 | L2 | L3 | L1 | L1 | L2 | L1 | L2 | L3 | L3 | L1 | L1 | L1 | L3 |
ANN27 | L3 | L2 | L1 | L3 | L1 | L2 | L2 | L3 | L2 | L3 | L1 | L1 | L1 | L1 | L1 | L3 |
ANN28 | L1 | L3 | L2 | L2 | L2 | L1 | L1 | L3 | L2 | L3 | L1 | L3 | L1 | L2 | L2 | L3 |
ANN29 | L2 | L1 | L3 | L3 | L3 | L2 | L2 | L1 | L3 | L1 | L2 | L1 | L1 | L2 | L2 | L3 |
ANN30 | L3 | L2 | L1 | L1 | L1 | L3 | L3 | L2 | L1 | L2 | L3 | L2 | L1 | L2 | L2 | L3 |
ANN31 | L1 | L3 | L3 | L3 | L2 | L3 | L2 | L2 | L1 | L2 | L1 | L1 | L2 | L1 | L2 | L3 |
ANN32 | L2 | L1 | L1 | L1 | L3 | L1 | L3 | L3 | L3 | L3 | L2 | L2 | L2 | L1 | L2 | L3 |
ANN33 | L3 | L2 | L2 | L2 | L1 | L2 | L1 | L1 | L3 | L1 | L3 | L3 | L2 | L1 | L2 | L3 |
ANN34 | L1 | L3 | L1 | L2 | L3 | L2 | L3 | L1 | L2 | L2 | L3 | L1 | L2 | L2 | L1 | L3 |
ANN35 | L2 | L1 | L2 | L3 | L1 | L3 | L1 | L2 | L3 | L3 | L1 | L2 | L2 | L2 | L1 | L3 |
ANN36 | L3 | L2 | L3 | L1 | L2 | L1 | L2 | L3 | L1 | L1 | L2 | L3 | L2 | L2 | L1 | L3 |
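Each row of the two orthogonal plans above defines one candidate network: every column is a weight factor (W1, W2, ...) and every entry selects which numeric level that weight starts from. A minimal sketch of that mapping is given below; the numeric level values are placeholders, not the ones used in the paper.

```python
import numpy as np

# First four rows of the ANN-L16 plan above, with levels coded 1 and 2.
oa_l16 = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2],
    [1, 1, 1, 2, 2, 2, 2, 1, 1, 1, 1, 2, 2, 2, 2],
    [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 1],
])

level_values = {1: 0.1, 2: 0.9}        # assumed numeric values for levels L1 and L2

def initial_weights(oa_row: np.ndarray) -> np.ndarray:
    """Map one experiment (one array row) to the vector of initial weights W1..W15."""
    return np.array([level_values[level] for level in oa_row])

for i, row in enumerate(oa_l16, start=1):
    print(f"ANN{i}:", initial_weights(row))
```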
Dataset | Source | Number of Projects | Experiment
---|---|---|---
Dataset_1 | UCP Benchmark Dataset | 50 | Training
Dataset_2 | UCP Benchmark Dataset | 21 | Testing
Dataset_3 | Combined | 18 | Validation1
Dataset_4 | Combined industrial projects | 17 | Validation2
Datasets | N | Min (PM) | Max (PM) | Mean (PM) | Std. Deviation (PM) |
---|---|---|---|---|---|
Dataset_1 | 50 | 5775.0 | 7970.0 | 6506.940 | 653.0308 |
Dataset_2 | 21 | 6162.6 | 6525.3 | 6393.993 | 118.1858 |
Dataset_3 | 18 | 2692.1 | 3246.6 | 2988.392 | 233.2270 |
Dataset_4 | 17 | 2176.0 | 3216.0 | 2589.400 | 352.0859 |
ANN-L16 | Iter. 1 MRE | Iter. 1 GA | Iter. 2 MRE | Iter. 2 GA | Iter. 3 MRE | Iter. 3 GA | Iter. 4 MRE | Iter. 4 GA
---|---|---|---|---|---|---|---|---
ANN1 | 0.084 | 0.084 | 0.080 | 0.004 | 0.076 | 0.004 | 0.072 | 0.004 |
ANN2 | 0.196 | 0.196 | 0.112 | 0.084 | 0.082 | 0.030 | 0.073 | 0.009 |
ANN3 | 0.188 | 0.188 | 0.113 | 0.076 | 0.081 | 0.031 | 0.072 | 0.009 |
ANN4 | 0.085 | 0.085 | 0.077 | 0.008 | 0.074 | 0.003 | 0.072 | 0.003 |
ANN5 | 0.161 | 0.161 | 0.105 | 0.056 | 0.080 | 0.025 | 0.073 | 0.008 |
ANN6 | 0.069 | 0.069 | 0.068 | 0.002 | 0.067 | 0.000 | 0.067 | 0.000 |
ANN7 | 0.078 | 0.078 | 0.073 | 0.006 | 0.071 | 0.002 | 0.070 | 0.001 |
ANN8 | 0.151 | 0.151 | 0.105 | 0.046 | 0.081 | 0.024 | 0.073 | 0.008 |
ANN9 | 0.191 | 0.191 | 0.120 | 0.071 | 0.076 | 0.044 | 0.072 | 0.004 |
ANN10 | 0.073 | 0.073 | 0.080 | 0.007 | 0.074 | 0.006 | 0.073 | 0.001 |
ANN11 | 0.078 | 0.078 | 0.080 | 0.001 | 0.075 | 0.005 | 0.072 | 0.003 |
ANN12 | 0.130 | 0.130 | 0.084 | 0.047 | 0.074 | 0.010 | 0.070 | 0.003 |
ANN13 | 0.113 | 0.113 | 0.083 | 0.030 | 0.074 | 0.009 | 0.071 | 0.002 |
ANN14 | 0.094 | 0.094 | 0.080 | 0.014 | 0.072 | 0.008 | 0.071 | 0.002 |
ANN15 | 0.094 | 0.094 | 0.080 | 0.015 | 0.073 | 0.007 | 0.071 | 0.002 |
ANN16 | 0.102 | 0.102 | 0.082 | 0.021 | 0.074 | 0.008 | 0.071 | 0.002 |
GA | | 16 | | 10 | | 5 | | 0
Winner | 6.9% | | 6.8% | | 6.7% | | 6.7% |
MMRE | 11.8% | | 8.9% | | 7.5% | | 7.1% |
ANN-L36prim | Iter. 1 | Iter. 2 | Iter. 3 | Iter. 4 | Iter. 5 | Iter. 6
---|---|---|---|---|---|---
GA | 36 | 35 | 23 | 14 | 3 | 0
Winner | 7.3% | 7.2% | 7.1% | 7.0% | 7.0% | 6.9%
MMRE | 12.1% | 9.4% | 8.4% | 7.5% | 7.2% | 7.0%
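The two convergence tables above can be read as an iterative loop: in each iteration every candidate ANN is retrained with the narrowed weight levels, its MRE is recomputed, and the process stops once no candidate improves further. In the sketch below, GA is interpreted as the per-iteration decrease of MRE (which matches the tabulated numbers, e.g. 0.084 − 0.080 = 0.004 for ANN1 in the first table); that reading, the tolerance, and the `train_and_score` callable are assumptions for illustration.

```python
import numpy as np

def run_until_converged(train_and_score, n_candidates: int,
                        tol: float = 1e-4, max_iter: int = 10):
    """train_and_score(i, iteration) -> MRE of candidate i (placeholder callable).
    Returns the index and MRE of the winning (lowest-MRE) candidate."""
    previous = np.full(n_candidates, np.inf)
    winner, best_mre = 0, np.inf
    for iteration in range(1, max_iter + 1):
        current = np.array([train_and_score(i, iteration) for i in range(n_candidates)])
        ga = np.where(np.isinf(previous), current, previous - current)  # MRE decrease
        winner = int(np.argmin(current))
        best_mre = float(current[winner])
        if iteration > 1 and np.all(ga <= tol):   # no candidate improved any further
            break
        previous = current
    return winner, best_mre
```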
Datasets | ANN-L16 MMRE (%) | ANN-L36prim MMRE (%) | Part of Experiment
---|---|---|---
Dataset_1 | 6.7 | 7.0 | Training
Dataset_2 | 7.1 | 7.1 | Testing
Dataset_3 | 8.0 | 7.5 | Validation1
Dataset_4 | 8.3 | 8.4 | Validation2
AVERAGE (MMRE) | 7.5 | 7.5 |
Correlation | ANN-L16 | ANN-L36prim |
---|---|---|
Pearson’s | 0.875 | 0.983 |
Spearman’s rho | 0.784 | 0.962 |
PRED (%) | ANN-L16 (%) | ANN-L36prim (%)
---|---|---
Training | |
PRED(25) | 100.0 | 100.0
PRED(30) | 100.0 | 100.0
PRED(50) | 100.0 | 100.0
Testing | |
PRED(25) | 100.0 | 100.0
PRED(30) | 100.0 | 100.0
PRED(50) | 100.0 | 100.0
Validation1 | |
PRED(25) | 100.0 | 100.0
PRED(30) | 100.0 | 100.0
PRED(50) | 100.0 | 100.0
Validation2 | |
PRED(25) | 100.0 | 100.0
PRED(30) | 100.0 | 100.0
PRED(50) | 100.0 | 100.0
Dataset | MMRE | UAW | UUCW | UUCP | TCF | ECF | AUCP |
---|---|---|---|---|---|---|---|
Dataset_1 | 6.7% | 7.1% | 6.7% | 7.0% | 6.7% | 6.7% | 7.1% |
Dataset_2 | 7.0% | 7.1% | 7.0% | 7.2% | 6.9% | 7.1% | 7.2% |
Dataset_3 | 8.0% | 7.9% | 8.1% | 7.9% | 8.1% | 8.1% | 7.5% |
Dataset_4 | 8.3% | 7.9% | 8.4% | 7.9% | 8.3% | 8.2% | 8.0% |
Dataset | UUCP | g − UUCP = MMRE − UUCP | AUCP | g − AUCP = MMRE − AUCP |
---|---|---|---|---|
Dataset_1 | 6.9% | −0.1% | 6.8% | −0.3% |
Dataset_2 | 7.1% | −0.1% | 7.1% | −0.1% |
Dataset_3 | 8.0% | 0.1% | 8.0% | 0.5% |
Dataset_4 | 8.2% | 0.2% | 8.2% | 0.2% |
max | | 0.2% | | 0.5%
min | | −0.1% | | −0.3%
Approach | Model | MMRE (%)
---|---|---
COCOMO2000 and ANN | COCOMO2000 | 193.1
COCOMO2000 and ANN | ANN-L9 | 72.0
COCOMO2000 and ANN | ANN-L18 | 59.7
COCOMO2000 and ANN | ANN-L27 | 45.3
COCOMO2000 and ANN | ANN-L36 | 43.3
COSMIC FP and ANN | ANN-L12 | 29.7
COSMIC FP and ANN | ANN-L36prim | 28.8
UCP and ANN | ANN-L16 | 7.5
UCP and ANN | ANN-L36prim | 7.5
Approach | Best Model | MMRE (%)
---|---|---
COCOMO2000 and ANN | ANN-L36 | 43.3
COSMIC FP and ANN | ANN-L36prim | 28.8
UCP and ANN | ANN-L16 | 7.5
UCP and ANN | ANN-L36prim | 7.5