
A Comprehensive MCDM-Based Approach for Object-Oriented Metrics Selection Problems

1 College of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia
2 Higher Institute of Finance and Taxation Sousse, University of Sousse, Sousse 4023, Tunisia
3 Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Information Systems Department, King Saud University, Riyadh 11451, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(6), 3411; https://doi.org/10.3390/app13063411
Submission received: 15 February 2023 / Revised: 1 March 2023 / Accepted: 4 March 2023 / Published: 7 March 2023
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract
Object-oriented programming (OOP) is prone to defects that negatively impact software quality. Detecting defects early in the development process is crucial for ensuring high-quality software, reducing maintenance costs, and increasing customer satisfaction. Several studies use object-oriented metrics to identify design flaws both at the model level and at the code level. Metrics provide a quantitative measure of code quality by analyzing specific aspects of the software, such as complexity, cohesion, coupling, and inheritance. By examining these metrics, developers can identify potential defects in OOP, such as design defects and code smells. Unfortunately, we cannot assess the quality of an object-oriented program with a single metric. Identifying design-defect-metric-based rules in an object-oriented program can be challenging because of the number of metrics: it is difficult to determine which metrics are the most relevant for identifying design defects. Additionally, the multiple thresholds for each metric indicate different levels of quality and increase the difficulty of setting clear and consistent rules. Hence, the problem of object-oriented metrics selection can be cast as a multi-criteria decision-making (MCDM) problem: based on experts' judgement, we can identify the most appropriate metrics for the detection of a specific defect. This paper presents our approach to reducing the number of metrics using one of the MCDM methods. To identify the most important detection rules, we apply the fuzzy decision-making trial and evaluation laboratory (Fuzzy DEMATEL) method and classify the metrics into cause-and-effect groups. The results of our proposed approach, applied to four open-source projects and compared with our previously published results, confirm the efficiency of MCDM, and especially the Fuzzy DEMATEL method, in selecting the best rules to identify design flaws. We increased defect detection accuracy by selecting rules that contain important and interrelated metrics.

1. Introduction

Software quality is a measure of how well a software product meets the needs and expectations of its users. High-quality software is free from defects and errors that could affect its functionality or performance. It is more reliable and less likely to fail or cause errors, and easier to maintain and update, reducing the risk of bugs over time.
One important issue in software engineering is to find code smells in software before its delivery. In fact, defect prediction is a technique to control the schedule and cost of a software system. Detecting and fixing OOP defects early in the development cycle can save time and money by reducing the need for expensive rework or corrective action later in the project [1].
Code smells [2,3,4] refer to any symptom in an object-oriented program that possibly hinders software maintenance and evolution. The textual description of defects or code smells is very subjective and depends on the designer's/programmer's interpretation. As an example, consider the Feature Envy defect (i.e., a method is defined in the wrong class, judging by the class attributes it accesses and the methods it invokes). Depending on a subjective interpretation, each designer could decide differently which methods are candidates for a Feature Envy defect. In fact, the selection of those methods is based on information such as “the number of communications with a given class”. Depending on the context, the same value could be evaluated as high, medium, or even low.
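To make the smell concrete, the following hypothetical Python sketch (the class and attribute names are our own invention, not from the paper) shows a method that "envies" another class because most of its accesses target that class's data:

```python
# Hypothetical illustration of the Feature Envy smell: total_price lives in
# Order but works almost entirely with Customer data.
class Customer:
    def __init__(self, discount_rate, loyalty_points):
        self.discount_rate = discount_rate
        self.loyalty_points = loyalty_points

class Order:
    def __init__(self, amount, customer):
        self.amount = amount
        self.customer = customer

    def total_price(self):
        # Envies Customer: most accesses go through self.customer, which
        # suggests this computation belongs in the Customer class instead.
        rate = self.customer.discount_rate
        bonus = self.customer.loyalty_points * 0.01
        return self.amount * (1 - rate) - bonus
```

A metric such as "the number of communications with a given class" would flag `total_price` as a candidate, but whether two accesses are "high" remains a matter of interpretation, as the text notes.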
One approach that has gained popularity for detecting design defects is to use object-oriented metrics. In [5], the authors presented an overview of 3295 papers extracted from the most popular electronic databases related to the UML model-refactoring field; seventeen percent of the studies used OO metrics and metric-based rules to detect design defects. Unfortunately, a single metric may not be sufficient to capture all aspects of a system's quality and may lead to incomplete or inaccurate assessments of its overall quality. A combination of metrics, however, is a powerful heuristic that can identify and standardize the way code smells are defined and detected. Most works in the literature therefore focus on combining metrics to generate a set of detection rules; they use standard object-oriented metrics as well as metrics defined in an ad hoc way [6,7,8]. The accuracy of rule-based metrics is directly affected by the selected metrics. Defining object-oriented rule-based metrics can be challenging: not only does it require a thorough understanding of OOP concepts and their relationship with software quality attributes, but it also depends on selecting the rules with the best metrics as well as the best thresholds for each metric. The complexity of this combinatorial problem is significantly reduced when the number of metrics involved in the detection rules decreases.
In fact, there is no standard set of object-oriented rule-based metrics that everyone agrees upon, which can make it difficult to compare software systems using different metrics. The metrics selection problem is, par excellence, a multi-criteria decision-making (MCDM) problem if we consider the huge number of possible combinations of metrics and thresholds. Researchers face a challenge in selecting the metrics and defining the respective thresholds: they must find the most relevant set of rules-based metrics.
In this paper, we propose to use the fuzzy decision-making trial and evaluation laboratory (DEMATEL) method to determine the most influential criteria and to find out the ranking of those criteria [9,10,11]. The goal of this work is to reduce the number of metric combinations, leading to rules that are more accurate. This paper answers the question: "What is the best set of rules that can detect a specific defect?" Only rules with the most relevant metrics are considered. The results are validated against previously published work [12] and show an improvement in the detection of four design defects (i.e., the Blob, Feature Envy, Lazy Class, and Data Class defects) based on sixteen object-oriented metrics.
The paper is structured as follows. Section 2 defines the object-oriented metrics. Section 3 presents an overview of the Fuzzy DEMATEL method. Section 4 details the findings. Section 5 is dedicated to the validation and Section 6 concludes the paper.

2. Object-Oriented Metrics

The majority of the existing works rely on rule-based metrics to detect design defects [13,14,15]. Researchers combine metrics in rules to detect a specific defect. They use object-oriented metrics to quantify the code quality and predict the occurrence of design defects. As shown in Table 1, in our experiments, we propose to use sixteen static and dynamic metrics useful for software measurement and design flaw detection. The metrics are inspired by [16,17,18].

3. Fuzzy DEMATEL for Object-Oriented Metrics

The DEMATEL technique is an initiative of the Battelle Memorial Institute through the Geneva Research Centre [19]. It is a comprehensive method for illustrating the structure of complicated cause-and-effect relationships. It aims to find the critical attributes through a visual structural model [20].
We use the DEMATEL method to identify the most significant metrics, i.e., those that affect other metrics. The method converts the relationships between metrics into a structural model and separates the set of cause metrics from the set of effect metrics.
Using DEMATEL, we aim to decrease the number of metrics needed to detect the defects, which improves the effectiveness of the defect detection rules derived from the digraph map.

3.1. Research Methodology

This research is based on Fuzzy DEMATEL because of its suitability for evaluating expert answers. In fact, evaluating the importance of a metric is very subjective, depending on each expert's experience; it cannot be assessed with crisp values. The concept of fuzzy sets [21] combined with the DEMATEL method handles the vagueness of expert answers well. To deal with this imprecise decision-making problem, we adopt the triangular fuzzy number representation used by Akyuz and Celik [22], defined by a set of values (Lower, Medium, Upper). The influence of each metric is measured over a five-level fuzzy scale, as shown in Table 2.
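As an illustration, the five-level scale of Table 2 can be encoded as a simple lookup table. This is a sketch in Python with names of our own choosing, not the authors' tooling:

```python
# Triangular fuzzy scale of Table 2: linguistic term -> (Lower, Medium, Upper).
FUZZY_SCALE = {
    "NO": (0.0, 0.0, 0.25),   # No influence
    "VL": (0.0, 0.25, 0.5),   # Very low influence
    "L":  (0.25, 0.5, 0.75),  # Low influence
    "H":  (0.5, 0.75, 1.0),   # High influence
    "VH": (0.75, 1.0, 1.0),   # Very high influence
}

def to_fuzzy(linguistic_row):
    """Convert one row of linguistic judgments into triangular fuzzy triples.

    Diagonal entries are marked "0" in the expert matrices and map to (0,0,0).
    """
    return [FUZZY_SCALE[t] if t != "0" else (0.0, 0.0, 0.0)
            for t in linguistic_row]
```

Applying `to_fuzzy` to each row of an expert's linguistic matrix (e.g., Table 3) yields that expert's fuzzy direct-relation matrix.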
After selecting the set of metrics and the experts in the object-oriented programming field, the experts evaluate the effect between metrics using pairwise comparisons, and we generate and normalize the Fuzzy Direct-Relation Matrix (FDRM) as an aggregation of all expert matrices. Then, we calculate the total-relation fuzzy matrix and obtain, as the final step, the classification of metrics based on their importance and influence. Finally, we validate our findings by refining the set of rules-based metrics identified in our previous work [12] according to the metrics importance and influence found using Fuzzy DEMATEL. Figure 1 presents the main steps of Fuzzy DEMATEL.

3.2. Fuzzy Direct-Relation Matrix

Each expert generates a direct-relation matrix Me with a pairwise comparison, where e denotes the expert, M is an (n × n) non-negative matrix, and M(e, i, j) represents the direct impact of metric i on metric j for expert e. When i = j, the diagonal elements M(e, i, i) = 0. In Table 3, we present an example of the linguistic scores of one expert's evaluation for the Blob anti-pattern.
Based on Table 2, the fuzzy linguistic matrix is then converted into a fuzzy scaled direct-relation matrix M. Table 4 presents the aggregated fuzzy direct-relation matrix collected from the different experts’ judgments.

3.3. Normalized Fuzzy Direct-Relation Matrix

The first step is the defuzzification of the fuzzy direct-relation matrix, based on the Best Non-fuzzy Performance (BNP) method [23], a technique used to generate crisp values from fuzzy values. The BNP of a triangular fuzzy number N = (Lower, Medium, Upper) can be expressed as:

BNP = Lower + ((Upper − Lower) + (Medium − Lower)) / 3    (1)
Table 5 presents the crisp direct-relation matrix (CM). Using Formula (2), we transform CM into the normalized direct-relation matrix (NMR), as shown in Table 6. Considering the initial n × n matrix CM, a_ij denotes the degree to which criterion i affects criterion j.

NMR = CM / max_{1 ≤ i ≤ n} Σ_{j=1}^{n} a_ij    (2)
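The defuzzification and normalization steps can be sketched as follows. This is an illustrative Python implementation of Formulae (1) and (2), not the authors' tooling:

```python
def bnp(triple):
    """Formula (1): Best Non-fuzzy Performance of a triangular fuzzy number."""
    lower, medium, upper = triple
    return lower + ((upper - lower) + (medium - lower)) / 3

def normalize(cm):
    """Formula (2): divide the crisp matrix CM by its largest row sum."""
    scale = max(sum(row) for row in cm)
    return [[a / scale for a in row] for row in cm]
```

For example, the linguistic term L = (0.25, 0.5, 0.75) defuzzifies to a crisp value of 0.5, and normalization guarantees that every row of NMR sums to at most 1.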

3.4. Total-Relation Fuzzy Matrix

At this level, we generate the total-relation matrix (TRM), as shown in Table 7, using Formula (3):

TRM = NMR (I − NMR)^{−1}    (3)

where I denotes the identity matrix.
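Formula (3) translates directly into a few lines of NumPy; the function name is ours:

```python
import numpy as np

def total_relation(nmr):
    """Formula (3): TRM = NMR (I - NMR)^(-1).

    Equivalent to summing the matrix powers NMR + NMR^2 + NMR^3 + ...,
    i.e., accumulating both direct and indirect influence between metrics.
    """
    n = nmr.shape[0]
    return nmr @ np.linalg.inv(np.eye(n) - nmr)
```

The inverse exists because normalization keeps the spectral radius of NMR below 1, so the geometric series of matrix powers converges.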

3.5. Metrics of Cause and Effect Matrix

As presented in Table 8, the value of (D + R) represents the importance of a metric in defect detection: the higher the value of (D + R), the more important the metric, and the more it should be included in the rule generation process. The value of (D − R) classifies the metrics into cause and effect groups. D represents the sum of the rows of TRM and R represents the sum of the columns. Using Formulae (4) and (5), with TRM = (TRM_ij), i, j ∈ {1, 2, ..., n}:

D_i = Σ_{j=1}^{n} TRM_ij    (4)

R_j = Σ_{i=1}^{n} TRM_ij    (5)
In Figure 2, we present the causal diagram. The horizontal axis represents (D + R) and the vertical axis represents (D − R). Considering the (D + R) values, some metrics appear more important than others; we can split the set of metrics into three main groups, labeled Low Importance, Important, and High Importance.
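The computation of D, R, and the cause/effect split from Formulae (4) and (5) can be sketched as follows (an illustrative implementation with our own naming):

```python
import numpy as np

def cause_effect(trm, names):
    """Compute prominence (D + R) and relation (D - R) for each metric.

    D is the row sum (influence given), R is the column sum (influence
    received); a positive (D - R) places the metric in the cause group.
    """
    D = trm.sum(axis=1)   # Formula (4): row sums
    R = trm.sum(axis=0)   # Formula (5): column sums
    return {name: {"prominence": d + r,
                   "relation": d - r,
                   "group": "cause" if d - r > 0 else "effect"}
            for name, d, r in zip(names, D, R)}
```

Plotting `prominence` against `relation` for each metric reproduces the causal diagram of Figure 2, and sorting by `prominence` yields the importance ranking used in Table 9.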

4. Results of Fuzzy DEMATEL Method for Object-Oriented Metrics

Table 9 presents the groups identified in the metrics causal diagram. We classified the set of metrics into cause metrics and effect metrics. Within each group, metrics have importance levels (Low, Normal, and High). For example, in the cause group, the highest-ranked metrics are NIC and CM, and in the effect group, the highest-ranked metrics are ATFD, NOM, NOC, and NCC.
Cause metrics indicate the implication of the influencing metrics on the effect metrics. Considering the interdependence among the metrics, the detection rules should consider both the cause metrics and the related influence on the effect metrics [24]. Therefore, by selecting rules combining cause and effect metrics, we can improve the accuracy of design defect detection, giving priority to the metrics having the highest importance.
Based on the DEMATEL method, we classified the metrics along two orthogonal dimensions: the horizontal dimension represents the metrics' importance and the vertical dimension represents the cause/effect split. Now, we can limit the detection rules; for example, we can exclude rules with low-importance metrics from the detection process. Based on Table 9, it becomes clear that detection rules should contain as many metrics as possible from the Important and High Importance levels of the horizontal dimension. These rules should also combine metrics from the vertical dimension (i.e., both cause and effect metrics). Based on this finding, an excellent detection rule combines metrics such as NIC and/or CM with metrics such as ATFD, NOM, NOC, and/or NCC.
The following section explores in depth the impact of the above finding on the accuracy of design defect detection.

5. Validation

The findings presented in the previous section are very important for the selection of metrics. To validate our results, we refer to our previous study [12], where we used the decision tree algorithm to generate defect rules. Applying our findings to that work by refining the set of rules-based metrics, and comparing the results with those in [12], shows how Fuzzy DEMATEL improves the process of identifying the best set of rules-based metrics.

5.1. Reference Study

The first study we conducted [12] represents the reference study used to validate our findings. We experimented on four design defects: the Blob, Data Class (DC), Lazy Class (LC), and Feature Envy (FE) defects, using 15 object-oriented metrics:
The Blob anti-pattern or God class [25] corresponds to a large controller class that depends on data stored in other classes. This is typically the case for large classes declaring many fields and methods and resulting in a low cohesion.
A Data Class bad smell [24] corresponds to a class that stores data passively. This class contains data and no methods to operate on that data.
A Lazy Class bad smell corresponds to a class that is not doing enough to pay for itself. There is no need for additional classes that could increase the project complexity.
A Feature Envy bad smell corresponds to a method that uses another class excessively. It should belong to that class.
We tested five open-source projects: Xerces v2.7, ArgoUML 0.19.8, Lucene 1.4, Log4j 1.2.1, and GanttProject v1.10.2. Table 10 summarizes the characteristics of the five projects.

5.2. Validation Methodology

In [12], the main objective of the study was to extract rules based on the decision tree algorithm. The rules are of the form:
For a defect D: "IF metric1 is higher/lower than threshold1 AND metric2 is higher/lower than threshold2 … AND metricn is higher/lower than thresholdn THEN defect D is suspected". As an example, we present three rules, R1, R2, and R3, generated for the detection of the Data Class defect:
R1: IF ATFD <= 16.5 THEN Data class = Yes
R2: IF ATFD > 16.5 AND NOA > 11.25 THEN Data class = Yes
R3: IF ATFD > 16.5 AND NOA <= 11.25 AND NC > 251 THEN Data class = Yes.
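Rules of this form translate naturally into predicates over a map of metric values. The following Python sketch encodes R1–R3; the encoding is our own, while the metric names and thresholds come from the example above:

```python
# Data Class detection rules R1-R3 as predicates over a metrics dictionary.
RULES = {
    "R1": lambda m: m["ATFD"] <= 16.5,
    "R2": lambda m: m["ATFD"] > 16.5 and m["NOA"] > 11.25,
    "R3": lambda m: m["ATFD"] > 16.5 and m["NOA"] <= 11.25 and m["NC"] > 251,
}

def is_data_class(metrics):
    """A class is a Data Class suspect if any selected rule fires."""
    return any(rule(metrics) for rule in RULES.values())
```

Under this encoding, rule selection (by the parameter N or by Fuzzy DEMATEL) simply decides which entries of `RULES` participate in the `any(...)` check.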
R1 and R2 combine only a few metrics, resulting in the generation of a huge number of suspect classes; R3 is therefore the most appropriate rule to consider as an illustrative example. In [12], we considered the number of metrics to be the only criterion that matters for rule selection. In fact, the process of selecting rules was based on a parameter N fixed by the tester and estimated through a series of tests on the example base. It represents the number of metrics in a detection rule. The value of N directly affects the accuracy of the detection: a small value generates a high number of false positives due to over-detection (we detect more than the real existing defects), whereas a high value of N generates a high number of false negatives (we detect very few defects compared to the existing ones).
Based on Fuzzy DEMATEL and the results shown in Table 9, we follow an alternative approach. For instance, R1 contains only one metric, from the effect group, so it cannot be considered an important rule. The decision not to consider R1 important is based not on the number of metrics but on the fact that a rule should combine both cause and effect metrics. Rules R2 and R3, in contrast, combine important and highly important cause and effect metrics; they are therefore the rules that DEMATEL suggests including in the evaluation process.
This experiment is based on the same set of rules generated by the decision tree algorithm proposed in [12]. In this paper, we select rules based on the metrics importance presented in Table 9, instead of selecting rules based on the N parameter as presented in [12].
The selected rules for defect detection give priority to rules that contain both cause and effect metrics. We start by selecting rules that include the important and highly important metrics. If this yields only a small set of rules, we also include rules containing only effect or only cause metrics, but we still choose rules with important and highly important metrics.
We use the precision and recall measurements to validate our findings. To evaluate the correctness of the approach, we calculate the precision: the fraction of true design defects among the set of all detected defects (6). The recall is the fraction of correctly detected design defects over the set of expected defects (7).
Precision = |Detected defects ∩ Expected defects| / |Detected defects|    (6)

Recall = |Detected defects ∩ Expected defects| / |Expected defects|    (7)
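Formulae (6) and (7) can be computed over sets of detected and expected defects; a minimal sketch:

```python
def precision_recall(detected, expected):
    """Formulae (6)-(7): precision over detected, recall over expected.

    Both arguments are sets of defect identifiers; the intersection gives
    the true positives.
    """
    true_positives = len(detected & expected)
    precision = true_positives / len(detected) if detected else 0.0
    recall = true_positives / len(expected) if expected else 0.0
    return precision, recall
```

For instance, detecting four suspects of which two are real defects, out of three expected defects, gives a precision of 0.5 and a recall of 2/3.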

5.3. Fuzzy DEMATEL for Object-Oriented Metrics Results Discussion and Validation

The first step in the validation process of the Fuzzy DEMATEL approach is to select the set of rules for each defect. In fact, we use the same rules generated in the reference study [12], based on the projects PMD 5.4.3 (with 433 classes) and Nutch 1.12 (with 247 classes). In addition, we introduce two formulae to compare the rule selection method used in the reference study (i.e., based on the parameter N) with the newly proposed one (i.e., based on Fuzzy DEMATEL).
The first, Formula (8), represents the ratio of the number of rules in the new selected set divided by the number of rules in the original set.
The second, Formula (9), represents the similarity between the two sets of rules: the set of rules identified using the parameter N and the set of rules identified using the Fuzzy DEMATEL approach.
Ratio of Rules = |R1| / |R0|    (8)

Rules Similarity = |R0 ∩ R1| / min(|R0|, |R1|)    (9)
where:
R0 represents the original set of rules selected based on the parameter N;
R1 represents the set of rules selected based on metrics importance (Fuzzy DEMATEL).
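Formulae (8) and (9) over the two rule sets can be sketched as:

```python
def ratio_of_rules(r0, r1):
    """Formula (8): size of the Fuzzy DEMATEL set over the original set."""
    return len(r1) / len(r0)

def rules_similarity(r0, r1):
    """Formula (9): overlap between the sets, normalized by the smaller one."""
    return len(r0 & r1) / min(len(r0), len(r1))
```

A similarity of 1.0 with a ratio of 0.5 means the new set is a strict subset containing half of the original rules, which is exactly the pattern reported below for the LC defect.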
As presented in Table 11, based on Fuzzy DEMATEL, we reduced the number of rules generated in [12] by about half. The degree of similarity varies with the defect. For example, DC defect detection uses a rule set with a similarity of only 0.462 to the rules used in [12], meaning that more than 50% of the selected rules are new compared to [12]. For LC, the degree of similarity is 0.833: we use almost the same rules as those selected based on the N parameter, but fewer of them, eliminating the non-useful rules.
In Table 12, we present the detailed detection results by project and by defect. We significantly increased the precision and recall: the F1 score is higher when using rules selected based on the metrics' cause-and-effect classification. Figure 3 clearly shows that detection accuracy is improved using rules selected based on metrics importance.
However, we notice that, for the LC defect only, the improvement in accuracy came from the improvement in precision; in Figure 3, there is no big difference between the recall curves for the two selection methods. This is expected: as shown in Table 11, the rules similarity is very high, meaning that we used almost the same rules, which implies a similar detection rate. However, the ratio of rules shows that we kept only about 60% of the rules selected based on the N parameter; that is, selecting rules by importance reduced the rule set by about 40%. This has a direct impact on precision: using fewer rules minimizes over-detection and decreases the number of false positive detections.

6. Conclusions

Object-oriented metrics offer quantitative measures of object-oriented software and can be a valuable tool for assessing its quality. By using these metrics, developers can identify areas for improvement and refactoring opportunities to optimize the software's performance, maintainability, and other important factors. However, developers need to carefully choose the right metrics, combine them in rules, and apply the rules consistently to identify code defects. This is a challenging task due to the number of metrics and the multiple thresholds for each one.
In this paper, we propose to apply a fuzzy multi-criteria decision-making approach, Fuzzy DEMATEL, to identify and select the most important object-oriented metrics for detecting design defects. Compared to the findings of our previous study [12], the results of the current work show that the new set of rules selected based on Fuzzy DEMATEL improves defect detection accuracy. We are convinced that the metrics importance ranking identified in this work is useful for the entire community of researchers. In fact, generating rules from only important and highly important metrics reduces the number of metric combinations, and consequently the number of rules, and improves the design defect detection process. This work represents the first step toward software refactoring. In future work, we will go a step further: after detecting the defects, we will correct them based on a set of refactoring rules.

Author Contributions

Conceptualization, M.M.; methodology, M.M., S.A. (Sultan Alyahya), S.A.-O. and S.A. (Sarra Ayouni); software, M.M.; validation, M.M., S.A. (Sultan Alyahya), S.A.-O. and S.A. (Sarra Ayouni); formal analysis, M.M.; investigation, M.M.; resources, F.H.; data curation, S.A. (Sultan Alyahya); writing—original draft preparation, M.M. and S.A. (Sultan Alyahya); writing—review and editing, M.M. and S.A.-O.; visualization, M.M. and S.A. (Sultan Alyahya); supervision, S.A. (Sultan Alyahya) and S.A. (Sarra Ayouni); project administration, S.A. (Sarra Ayouni); funding acquisition, S.A.-O. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia, Project Number PNURSP2023R136.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included within the article.

Acknowledgments

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R136), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Freire, S.; Passos, A.; Mendonça, M.; Sant’Anna, C.; Spínola, R.O. On the Influence of UML Class Diagrams Refactoring on Code Debt: A Family of Replicated Empirical Studies. In Proceedings of the Euromicro Conference on Software Engineering and Advanced Applications, Virtual, 26–28 August 2020. [Google Scholar]
  2. Zhang, M.; Hall, T.; Baddoo, N. Code Bad Smells: A review of current knowledge. J. Softw. Maint. Evol. Res. Pract. 2010, 23, 179–202. [Google Scholar] [CrossRef]
  3. Lewowski, T.; Madeyski, L. How far are we from reproducible research on code smell detection? A systematic literature review. Inf. Softw. Technol. 2022, 144, 106783. [Google Scholar] [CrossRef]
  4. Amandeep, K.; Sushma, J.; Shivani, G.; Gaurav, D. A Review on Machine-learning Based Code Smell Detection Techniques in Object-oriented Software System(s). Recent Adv. Electr. Electron. Eng. 2021, 14, 290–303. [Google Scholar]
  5. Misbhauddin, M.; Alshayeb, M. UML model refactoring: A systematic literature review. Empir. Softw. Eng. 2013, 20, 206–251. [Google Scholar] [CrossRef]
  6. Di Nucci, D.; Palomba, F.; Tamburri, D.; Serebrenik, A.; De Lucia, A. Detecting Code Smells using Machine Learning Techniques: Are We There Yet? In Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution, and Reengineering, Campobasso, Italy, 20–23 March 2018. [Google Scholar]
  7. Lanza, M.; Marinescu, R. Object-Oriented Metrics in Practice; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  8. Fernandes, E.; Oliveira, J.; Paiva, V.G.; Figueiredo, E. A Review-based Comparative Study of Bad Smell Detection Tools. In Proceedings of the 20th International Conference on Evaluation and Assessment in Software Engineering (EASE), Limerick, Ireland, 1–3 June 2006. [Google Scholar]
  9. Alamoodi, A.; Albahri, O.; Zaidan, A.; Alsattar, H.; Zaidan, B.; Albahri, A. Hospital Selection Framework for Remote MCD Patients Based on Fuzzy Q-Rung Orthopair Environment. Neural Comput. Appl. 2023, 35, 6185–6196. [Google Scholar] [CrossRef] [PubMed]
  10. Ayouni, S.; Laila, J.; Hajjej, F.; Maddeh, M.; Al-Otaibi, S. Fuzzy Vikor Application for Learning Management Systems Evaluation in Higher Education. Int. J. Inf. Commun. Technol. Educ. 2021, 17, 19. [Google Scholar] [CrossRef]
  11. Ayouni, S.; Laila, J.; Hajjej, F.; Maddeh, M. A Hybrid Fuzzy DEMATEL-AHP/VIKOR Method for LMS Selection. In Proceedings of the European Conference on e-Learning, Kidmore End, Copenhagen, Denmark, 7–8 November 2019. [Google Scholar]
  12. Maddeh, M.; Ayouni, S.; Alyahya, S.; Hajjej, F. Decision tree-based Design Defects Detection. IEEE Access 2021, 9, 71606–71614. [Google Scholar] [CrossRef]
  13. Boczar, B.; Pytka, M.; Madeyski, L. Which Static Code Metrics Can Help to Predict Test Case Effectiveness? New Metrics and Their Empirical Evaluation on Projects Assessed for Industrial Relevance. Dev. Inf. Knowl. Manag. Bus. Appl. 2022, 3, 201–215. [Google Scholar]
  14. Bhatia, M.K. A Survey of Static and Dynamic Metrics Tools for Object Oriented Environment, Emerging Research in Computing, Information, Communication and Applications; Springer: Singapore, 2021; Volume 790, pp. 521–530. [Google Scholar]
  15. Badri, S.; Moudache, M. Using Metrics for Risk Prediction in Object-Oriented. J. Softw. 2022, 17, 1–20. [Google Scholar]
  16. Van, P.; Chris, L.; Kathryn, K. A Better Set of Object-Oriented Design Metrics for Within-Project Defect Prediction. In Proceedings of the Evaluation and Assessment in Software Engineering, Trondheim, Norway, 15–17 April 2020. [Google Scholar]
  17. Erni, K.; Lewerentz, C. Applying Design Metrics to Object-Oriented Frameworks. In Proceedings of the 3rd International Software Metrics Symposium, Berlin, Germany, 25–26 March 1996; pp. 64–74. [Google Scholar]
  18. Amjad, A.; Alshayeb, M. A Metrics Suite for UML Model Stability. Softw. Syst. Model. 2019, 18, 557–583. [Google Scholar]
  19. Gabus, A.; Fontela, E. World Problems, An Invitation to Further Thought within the Framework of DEMATEL; Battelle Geneva Research Centre: Geneva, Switzerland, 1972. [Google Scholar]
  20. Si, S.; You, X.; Liu, H.; Zhang, P. DEMATEL Technique: A Systematic Review of the State-of-the-Art Literature on Methodologies and Applications. Math. Probl. Eng. 2018, 2018, 3696457. [Google Scholar] [CrossRef] [Green Version]
  21. Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353. [Google Scholar]
  22. Akyuza, E.; Celik, E. A Fuzzy DEMATEL Method to Evaluate Critical Operational Hazards During Gas Freeing Process in Crude Oil Tankers. J. Loss Prev. Process Ind. 2015, 38, 243–253. [Google Scholar] [CrossRef]
  23. Ross, T. Fuzzy Logic with Engineering Applications; McGraw-Hill: New York, NY, USA, 1995. [Google Scholar]
  24. Fontela, E.; Gabus, A. The Dematel Observer; Battelle Geneva Research Center: Geneva, Switzerland, 1976. [Google Scholar]
  25. Malveau, R.; Brown, W.J.; McCormick, H.; Mowbray, T. AntiPatterns: Refactoring Software, Architecture and Projects in Crisis; John Wiley & Sons: Hoboken, NJ, USA, 1998. [Google Scholar]
Figure 1. Fuzzy DEMATEL method.
Figure 2. Causal diagram metrics.
Figure 3. Variation of recall and precision depending on the selection method.
Table 1. Object-oriented metrics.

| Metric | Description | Representation |
|---|---|---|
| NC | Number of classes | C1 |
| PS | Package size | C2 |
| NOA | Number of attributes | C3 |
| NOM | Number of methods | C4 |
| NOD | Number of descendent classes (Inheritance) | C5 |
| NODD | Number of direct descendent classes (Direct inheritance) | C6 |
| NMSC | Number of messages sent from a class to itself (Internal messages) | C7 |
| NOC | Number of messages sent to other classes | C8 |
| NCC | Number of classes affected by the measured class | C9 |
| ATFD | Access to foreign data | C10 |
| NOP | Number of parameters | C11 |
| NIC | Number of interconnected classes/Number of classes affected by the measured method | C12 |
| CM | Number of methods affected by the measured method | C13 |
| NOPM | Number of packages in the model | C14 |
| PUC | Percentage of used classes/Number of classes used outside the measured package | C15 |
| NMR | Number of messages received by a class | C16 |
Table 2. Fuzzy linguistic scale.

| Linguistic Term | Triangular Fuzzy Number |
|---|---|
| No influence (NO) | (0, 0, 0.25) |
| Very low influence (VL) | (0, 0.25, 0.5) |
| Low influence (L) | (0.25, 0.5, 0.75) |
| High influence (H) | (0.5, 0.75, 1) |
| Very high influence (VH) | (0.75, 1, 1) |
Table 3. Linguistic evaluation of criteria interdependence.

|  | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9 | C10 | C11 | C12 | C13 | C14 | C15 | C16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| C1:NC | 0 | VH | H | VH | H | H | NO | VH | VH | VH | NO | H | H | VH | VH | H |
| C2:PS | NO | 0 | NO | NO | NO | NO | NO | NO | NO | NO | NO | NO | NO | VH | VH | VL |
| C3:NOA | VL | VL | 0 | VH | NO | NO | H | VH | VH | VH | VL | H | H | NO | NO | NO |
| C4:NOM | VH | H | L | 0 | L | L | VH | VH | VH | VH | VH | VH | H | L | H | H |
| C5:NOD | L | L | VH | VH | 0 | VH | H | L | L | VL | NO | VL | L | NO | VL | L |
| C6:NODD | L | L | VH | VH | VH | 0 | H | VL | VL | VL | NO | L | L | NO | VL | L |
| C7:NMSC | NO | NO | H | NO | NO | NO | 0 | VH | VH | VH | NO | H | H | NO | H | H |
| C8:NOC | NO | NO | H | H | NO | NO | H | 0 | VH | VH | NO | H | VH | NO | VH | VH |
| C9:NCC | L | VL | NO | L | VL | VL | VH | VH | 0 | VH | NO | VH | H | NO | H | VH |
| C10:ATFD | VL | VL | VL | H | VL | VL | VH | VH | VH | 0 | VL | H | H | NO | VH | VH |
| C11:NOP | NO | NO | H | L | NO | NO | L | L | L | VL | 0 | H | H | NO | NO | NO |
| C12:NIC | L | H | H | H | VL | VL | VH | VH | VH | VH | VL | 0 | H | VL | H | VH |
| C13:CM | L | L | VL | L | NO | NO | L | VH | VH | VH | NO | H | 0 | NO | H | VH |
| C14:NOPM | VL | VH | NO | NO | NO | NO | NO | NO | NO | NO | NO | NO | NO | 0 | H | NO |
| C15:PUC | H | VH | NO | L | NO | NO | NO | NO | NO | NO | NO | NO | NO | VL | 0 | VL |
| C16:NMR | NO | NO | NO | L | NO | NO | VH | VH | VH | VH | NO | L | H | L | L | 0 |
Table 4. The aggregated fuzzy direct-relation matrix.
| | C1 | C2 | C3 | … | C15 | C16 |
|---|---|---|---|---|---|---|
| C1:NC | (0, 0, 0) | (0.69, 0.94, 1) | (0.59, 0.84, 1) | … | (0.69, 0.94, 1) | (0.59, 0.84, 1) |
| C2:PS | (0.06, 0.22, 0.47) | (0, 0, 0) | (0.06, 0.22, 0.47) | … | (0.69, 0.94, 1) | (0, 0.09, 0.34) |
| C3:NOA | (0, 0.09, 0.34) | (0, 0.09, 0.34) | (0, 0, 0) | … | (0.06, 0.22, 0.47) | (0.09, 0.25, 0.5) |
| C4:NOM | (0.28, 0.41, 0.56) | (0.19, 0.28, 0.53) | (0.16, 0.41, 0.66) | … | (0.28, 0.53, 0.78) | (0.28, 0.53, 0.78) |
| … | … | … | … | … | … | … |
| C14:NOPM | (0, 0.13, 0.38) | (0.69, 0.94, 1) | (0.06, 0.22, 0.47) | … | (0.59, 0.84, 1) | (0, 0, 0.25) |
| C15:PUC | (0.59, 0.84, 1) | (0.69, 0.94, 1) | (0, 0, 0.25) | … | (0, 0, 0) | (0.06, 0.31, 0.56) |
| C16:NMR | (0, 0.03, 0.28) | (0, 0, 0.25) | (0.06, 0.22, 0.47) | … | (0.47, 0.72, 0.91) | (0, 0, 0) |
Table 5. The crisp direct-relation matrix.
| | C1 | C2 | C3 | C4 | C5 | … | C13 | C14 | C15 | C16 |
|---|---|---|---|---|---|---|---|---|---|---|
| C1:NC | 0 | 0.875 | 0.8125 | 0.875 | 0.8125 | … | 0.5313 | 0.875 | 0.875 | 0.8125 |
| C2:PS | 0.25 | 0 | 0.25 | 0.25 | 0.0833 | … | 0.0833 | 0.875 | 0.875 | 0.1458 |
| C3:NOA | 0.1458 | 0.1458 | 0 | 0.8542 | 0.5417 | … | 0.7917 | 0.0833 | 0.25 | 0.2813 |
| C4:NOM | 0.4167 | 0.3333 | 0.4063 | 0 | 0.4063 | … | 0.7917 | 0.2604 | 0.5313 | 0.5313 |
| … | … | … | … | … | … | … | … | … | … | … |
| C14:NOPM | 0.1667 | 0.875 | 0.25 | 0.25 | 0.0833 | … | 0.0833 | 0 | 0.8125 | 0.0833 |
| C15:PUC | 0.8125 | 0.875 | 0.0833 | 0.2396 | 0.1042 | … | 0.25 | 0.1458 | 0 | 0.3125 |
| C16:NMR | 0.1042 | 0.0833 | 0.25 | 0.4063 | 0.0833 | … | 0.5313 | 0.4375 | 0.6979 | 0 |
Table 6. The normalized direct-relation matrix.
| | C1 | C2 | C3 | C4 | C5 | … | C13 | C14 | C15 | C16 |
|---|---|---|---|---|---|---|---|---|---|---|
| C1:NC | 0 | 0.0758 | 0.0704 | 0.0758 | 0.0704 | … | 0.046 | 0.0758 | 0.0758 | 0.0704 |
| C2:PS | 0.0217 | 0 | 0.0217 | 0.0217 | 0.0072 | … | 0.0072 | 0.0758 | 0.0758 | 0.0126 |
| C3:NOA | 0.0126 | 0.0126 | 0 | 0.074 | 0.0469 | … | 0.0686 | 0.0072 | 0.0217 | 0.0244 |
| C4:NOM | 0.0361 | 0.0289 | 0.0352 | 0 | 0.0352 | … | 0.0686 | 0.0226 | 0.046 | 0.046 |
| … | … | … | … | … | … | … | … | … | … | … |
| C14:NOPM | 0.0144 | 0.0758 | 0.0217 | 0.0217 | 0.0072 | … | 0.0072 | 0 | 0.0704 | 0.0072 |
| C15:PUC | 0.0704 | 0.0758 | 0.0072 | 0.0208 | 0.009 | … | 0.0217 | 0.0126 | 0 | 0.0271 |
| C16:NMR | 0.009 | 0.0072 | 0.0217 | 0.0352 | 0.0072 | … | 0.046 | 0.0379 | 0.0605 | 0 |
Table 7. The total-relation matrix.
| | C1 | C2 | C3 | … | C9 | … | C12 | C13 | C14 | C15 | C16 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| C1:NC | 0.0586 | 0.1392 | 0.1517 | … | 0.1943 | … | 0.1699 | 0.1535 | 0.1194 | 0.2011 | 0.1762 |
| C2:PS | 0.0433 | 0.0289 | 0.047 | … | 0.0602 | … | 0.0543 | 0.0418 | 0.0902 | 0.1176 | 0.0473 |
| C3:NOA | 0.0537 | 0.0554 | 0.0619 | … | 0.1616 | … | 0.1247 | 0.149 | 0.0364 | 0.1146 | 0.1088 |
| C4:NOM | 0.0804 | 0.078 | 0.1013 | … | 0.1725 | … | 0.1575 | 0.1559 | 0.0567 | 0.1482 | 0.1369 |
| … | … | … | … | … | … | … | … | … | … | … | … |
| C14:NOPM | 0.0346 | 0.0969 | 0.0447 | … | 0.0565 | … | 0.0385 | 0.0384 | 0.018 | 0.1085 | 0.039 |
| C15:PUC | 0.0909 | 0.1018 | 0.0411 | … | 0.0702 | … | 0.0636 | 0.063 | 0.0373 | 0.0549 | 0.0704 |
| C16:NMR | 0.0456 | 0.0476 | 0.0698 | … | 0.1492 | … | 0.1045 | 0.1151 | 0.0618 | 0.1411 | 0.0728 |
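Tables 6 and 7 correspond to the standard DEMATEL steps: normalize the crisp direct-relation matrix, then derive the total-relation matrix T = X(I − X)^(-1), from which the row sums D (influence given) and column sums R (influence received) of Table 8 follow. A minimal sketch on a toy 3 × 3 matrix (illustrative values, not the paper's 16 × 16 data; normalization by the largest row sum is the common convention):

```python
import numpy as np

# Toy 3x3 crisp direct-relation matrix (illustrative values only).
A = np.array([
    [0.0, 0.8, 0.5],
    [0.3, 0.0, 0.6],
    [0.4, 0.2, 0.0],
])

# Normalize by the largest row sum (the step behind Table 6).
s = A.sum(axis=1).max()
X = A / s

# Total-relation matrix T = X (I - X)^(-1) (the step behind Table 7).
I = np.eye(len(A))
T = X @ np.linalg.inv(I - X)

D = T.sum(axis=1)  # influence given (row sums)
R = T.sum(axis=0)  # influence received (column sums)
print(np.round(D + R, 3))  # prominence
print(np.round(D - R, 3))  # net cause (> 0) or net effect (< 0)
```

Since sum(D) and sum(R) both equal the total of T, the D − R values always sum to zero, which is a quick sanity check on Table 8.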
Table 8. The metrics influence received and given.
| Metric | D | R | D + R | D − R |
|---|---|---|---|---|
| C1:NC | 2.395 | 0.918 | 3.313 | 1.477 |
| C2:PS | 0.817 | 1.064 | 1.881 | −0.247 |
| C3:NOA | 1.699 | 1.376 | 3.074 | 0.323 |
| C4:NOM | 1.888 | 1.897 | 3.785 | −0.008 |
| C5:NOD | 1.066 | 0.877 | 1.943 | 0.190 |
| C6:NODD | 1.077 | 0.820 | 1.897 | 0.257 |
| C7:NMSC | 1.294 | 1.769 | 3.062 | −0.475 |
| C8:NOC | 1.893 | 1.907 | 3.800 | −0.014 |
| C9:NCC | 1.628 | 1.967 | 3.595 | −0.339 |
| C10:ATFD | 1.666 | 1.954 | 3.620 | −0.288 |
| C11:NOP | 0.920 | 0.577 | 1.497 | 0.342 |
| C12:NIC | 1.890 | 1.695 | 3.585 | 0.195 |
| C13:CM | 1.764 | 1.755 | 3.520 | 0.009 |
| C14:NOPM | 0.744 | 0.753 | 1.497 | −0.008 |
| C15:PUC | 0.945 | 2.051 | 2.996 | −1.106 |
| C16:NMR | 1.449 | 1.756 | 3.205 | −0.308 |
Table 9. Metrics importance.
| Group | Low Importance | Important | High Importance |
|---|---|---|---|
| Cause group | NOP, NODD, NOD | NC, NOA | NIC, CM |
| Effect group | NOPM, PS, PUC | NMR, NMSC | ATFD, NOM, NOC, NCC |
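The split in Table 9 follows directly from the sign of D − R in Table 8: metrics with a positive D − R form the cause group and the rest the effect group, while D + R reflects overall importance. Recomputing the grouping from the published (D, R) values:

```python
# (D, R) pairs taken from Table 8.
metrics = {
    "NC": (2.395, 0.918), "PS": (0.817, 1.064), "NOA": (1.699, 1.376),
    "NOM": (1.888, 1.897), "NOD": (1.066, 0.877), "NODD": (1.077, 0.820),
    "NMSC": (1.294, 1.769), "NOC": (1.893, 1.907), "NCC": (1.628, 1.967),
    "ATFD": (1.666, 1.954), "NOP": (0.920, 0.577), "NIC": (1.890, 1.695),
    "CM": (1.764, 1.755), "NOPM": (0.744, 0.753), "PUC": (0.945, 2.051),
    "NMR": (1.449, 1.756),
}

cause = sorted(m for m, (d, r) in metrics.items() if d - r > 0)
effect = sorted(m for m, (d, r) in metrics.items() if d - r <= 0)
print(cause)   # the cause group of Table 9
print(effect)  # the effect group of Table 9
```

The recomputed groups match Table 9 exactly (seven cause metrics, nine effect metrics).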
Table 10. Project information.
| Project | Number of Classes | Number of Blob | Number of LC | Number of DC | Number of FE |
|---|---|---|---|---|---|
| Xerces v2.7 | 683 | 29 | 23 | 17 | 58 |
| ArgoUML 0.19.8 | 1244 | 15 | 29 | 15 | 78 |
| Lucene 1.4 | 189 | 7 | 8 | 2 | 23 |
| Log4j 1.2.1 | 209 | 11 | 6 | 5 | 33 |
| GanttProject v1.10.2 | 241 | 22 | 12 | 10 | 46 |
Table 11. The generated rules.
| Defect | R0 (N Parameter) | R1 (Metrics Importance) | R0 ∩ R1 | R1/R0 | Rules Similarity |
|---|---|---|---|---|---|
| Blob | 31 | 17 | 10 | 0.548 | 0.588 |
| DC | 24 | 13 | 6 | 0.542 | 0.462 |
| LC | 19 | 12 | 10 | 0.632 | 0.833 |
| FE | 22 | 14 | 7 | 0.636 | 0.500 |
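The last two columns of Table 11 are consistent with R1/R0 = |R1|/|R0| and rules similarity = |R0 ∩ R1|/|R1| (e.g., for Blob: 17/31 ≈ 0.548 and 10/17 ≈ 0.588). A quick recomputation from the rule counts:

```python
# Rule counts from Table 11: (|R0|, |R1|, |R0 ∩ R1|) per defect type.
rows = {"Blob": (31, 17, 10), "DC": (24, 13, 6), "LC": (19, 12, 10), "FE": (22, 14, 7)}

for defect, (r0, r1, inter) in rows.items():
    reduction = r1 / r0       # the "R1/R0" column
    similarity = inter / r1   # the "Rules Similarity" column, read as |R0 ∩ R1| / |R1|
    print(defect, round(reduction, 3), round(similarity, 3))
```

All four rows reproduce the published values to three decimals, which supports this reading of the two ratio columns.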
Table 12. Detection results.
(N = rule selection based on the N parameter; MI = rule selection based on metrics importance.)

| Project | Defect | Recall (N) % | Precision (N) % | F1 (N) | Recall (MI) % | Precision (MI) % | F1 (MI) |
|---|---|---|---|---|---|---|---|
| Xerces v2.7 | Blob | 55 | 64 | 59.16 | 66 | 88 | 75.43 |
| Xerces v2.7 | LC | 47 | 57 | 51.52 | 49 | 90 | 63.45 |
| Xerces v2.7 | DC | 52 | 68 | 58.93 | 71 | 70 | 70.50 |
| Xerces v2.7 | FE | 51 | 59 | 54.71 | 53 | 59 | 55.84 |
| ArgoUML 0.19.8 | Blob | 60 | 56 | 57.93 | 71 | 88 | 78.59 |
| ArgoUML 0.19.8 | LC | 51 | 65 | 57.16 | 53 | 65 | 58.39 |
| ArgoUML 0.19.8 | DC | 73 | 53 | 61.41 | 84 | 77 | 80.35 |
| ArgoUML 0.19.8 | FE | 38 | 62 | 47.12 | 62 | 67 | 64.40 |
| Lucene 1.4 | Blob | 100 | 80 | 88.89 | 100 | 86 | 92.47 |
| Lucene 1.4 | LC | 100 | 55 | 70.97 | 100 | 93 | 96.37 |
| Lucene 1.4 | DC | 100 | 71 | 83.04 | 100 | 91 | 95.29 |
| Lucene 1.4 | FE | 100 | 62 | 76.54 | 100 | 96 | 97.96 |
| Log4j 1.2.1 | Blob | 100 | 47 | 63.95 | 100 | 78 | 87.64 |
| Log4j 1.2.1 | LC | 100 | 49 | 65.77 | 100 | 89 | 94.18 |
| Log4j 1.2.1 | DC | 100 | 52 | 68.42 | 100 | 79 | 88.27 |
| Log4j 1.2.1 | FE | 100 | 56 | 71.79 | 100 | 94 | 96.91 |
| GanttProject v1.10.2 | Blob | 54 | 69 | 60.59 | 75 | 77 | 75.99 |
| GanttProject v1.10.2 | LC | 75 | 44 | 55.46 | 75 | 63 | 68.48 |
| GanttProject v1.10.2 | DC | 90 | 49 | 63.45 | 90 | 82 | 85.81 |
| GanttProject v1.10.2 | FE | 58 | 54 | 55.93 | 62 | 76 | 68.29 |
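The F1 scores in Table 12 are the harmonic mean of recall and precision; e.g., for Blob on Xerces v2.7 with metrics-importance selection, 2 × 66 × 88 / (66 + 88) ≈ 75.43:

```python
def f1(recall, precision):
    """Harmonic mean of recall and precision (in percent), as reported in Table 12."""
    return round(2 * recall * precision / (recall + precision), 2)

# Xerces v2.7, Blob, rule selection based on metrics importance:
print(f1(66, 88))   # 75.43
# Lucene 1.4, Blob, rule selection based on the N parameter:
print(f1(100, 80))  # 88.89
```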
Maddeh, M.; Al-Otaibi, S.; Alyahya, S.; Hajjej, F.; Ayouni, S. A Comprehensive MCDM-Based Approach for Object-Oriented Metrics Selection Problems. Appl. Sci. 2023, 13, 3411. https://doi.org/10.3390/app13063411