**A Multicriteria Decision Aid-Based Model for Measuring the Efficiency of Business-Friendly Cities**

**Mihailo Jovanović <sup>1</sup>, Slobodan Nedeljković <sup>2</sup>, Milan Ranđelović <sup>3</sup>, Gordana Savić <sup>4</sup>, Vladica Stojanović <sup>5</sup>, Vladimir Stojanović <sup>6</sup> and Dragan Ranđelović <sup>7,\*</sup>**


Received: 26 May 2020; Accepted: 14 June 2020; Published: 17 June 2020

**Abstract:** Local self-government has the task of enabling stable economic development, in addition to enabling a normal quality of life for citizens. This is why the state government should provide guidelines that will improve the local business climate, and by doing so enable local economic development. This can be done through the introduction of a business-friendly certification procedure, which is influenced by uncertain inputs and influences many output factors. Each local government has the important task of determining its rank of efficiency in this process. A number of methodologies developed to solve this problem are generally divided into two groups: Parametric and non-parametric. These two groups of methodologies could provide quite different results. Therefore, the purpose of this paper was to create a model using both approaches to achieve a balanced symmetrical approach that produces better results than each approach individually. For this purpose, the paper describes a multicriteria decision aid-based model of optimization to evaluate the effectiveness of this process, integrating classification, data envelopment analysis, and stochastic frontier analysis, as well as its application in a case study of business-friendly certification in the Republic of Serbia.

**Keywords:** MCDA; efficiency; DEA; SFA; classification; dimensionality reduction

### **1. Introduction**

The constant monitoring and quantification of the effects of work in modern society is a necessary element of its successful implementation, no matter what type of process it is and which field of human activity it originates from. Logistics processes are crucial for achieving this, and the basic indicator is the definition of the relationship between the results achieved and the resources invested, which is called efficiency [1]. Measuring and increasing efficiency is a necessary prerequisite for the implementation of efficient logistics systems, which is why it is a significant scientific discipline represented in the world literature and practice [2–4].

Depending on the criteria defined, efficiency in logistics can be divided in different ways. In this respect, it is possible to distinguish between logistics efficiency at the strategic, tactical, and operational levels. Depending on the type of indicators used to describe efficiency, several types of efficiency emerge. When it comes to logistics efficiency, the most commonly referred to is operational efficiency or operations efficiency. In addition to operational efficiency, the most commonly encountered are cost (financial) efficiency, environmental or eco-efficiency, energy efficiency, qualitative efficiency, city efficiency, etc. From the type of logistics system and process point of view, as one of the most important criteria, it is possible to distinguish several types of efficiency: Distributive efficiency, transport efficiency, warehouse efficiency, efficiency of the picking process, efficiency of the inventory management process, etc. [5].

The main problem of monitoring efficiency in practice is the misunderstanding and use of partial indicators that often, for example, do not represent an appropriate measure of efficiency. In most logistics systems, the emphasis is on costs and financial indicators that do not provide sufficient information on efficiency, so corrective action is taken to determine appropriate indicators. This problem can be solved both through the use of expert knowledge and through the discovery of knowledge from data by the use of appropriate artificial intelligence methods, one of which is the well-known classification technique, which can provide the selection of essential criteria and thus optimize the procedure to ensure better performance.

For the purpose of measuring the efficiency of a process, four methodologies are available [6]: Econometric average response estimation, index numbers, data envelopment methods (for example, data envelopment analysis, DEA), and stochastic frontier methods (for example, stochastic frontier analysis, SFA). Today, with the advent of artificial intelligence, machine learning methods can also be used to measure efficiency.

All the above-mentioned methodologies belong to a broad group of so-called multi-criteria decision aid (MCDA) methodologies [7,8], which primarily address the solution of four underlying multi-criteria decision problems: Problem description, the choice of the best alternative, the ranking of alternatives, and the classification of alternatives [9]. Therefore, we aimed to address two main research questions in this paper: 1. Is it possible to successfully integrate parametric and non-parametric efficiency evaluation methods with machine learning? 2. Does problem dimensionality reduction by a machine learning method have an effect on the quality of the efficiency evaluation results?

We propose the integration of the efficiency evaluation methods DEA and SFA, together with a machine learning method, classification, into the procedure of efficiency measurement, which we have not encountered in the literature. Namely, it is generally known and evident from the content of this introduction and the literature review that the most commonly used methods for determining different types of efficiency are the non-parametric DEA method and the parametric SFA method. Symmetry in the approach of using these two methods, and their complementary advantages and disadvantages, were the starting points for their simultaneous application. The integration of DEA and SFA with classification methods is done with the possibilities of dimensionality reduction in mind, which might lead to more suitable results being obtained.

The aforementioned new procedure was applied for solving the univariate problem of determining the efficiency of the business-friendly certification (BFC) process of local governments in the Republic of Serbia, considering the amount of investment per capita as an output performance indicator. The proposed model for determining the process efficiency of BFC cities not only determines the competitiveness of local self-governments in attracting direct investment as an essential precondition for competitiveness in the market but also improves the efficiency of the planning of their local economic development (LED) [10]. The efficiency determination of the BFC process incorporates the effects of the selected criteria and their individual importance, defined by the appropriate professional organization in the Republic of Serbia, the National Alliance for Local Economic Development (NALED) [11]. This process belongs to a group of problems with input factors burdened with uncertainty, imprecision, and subjective influence. The fulfillment of conditions related to the established criteria is a prerequisite for obtaining the certificate. The efficiency of the BFC process can be evaluated by different performance indicators, such as the aforementioned average investment per capita, the number of new employees, the average salary of employees, etc. In the general case, solving the described real-world problem leads to a complex multivariate problem being solved. Consequently, we decided to deal with a specific but very indicative output indicator: The average amount of investment per capita.

In order to present the subject and achieve the set goal, the paper is organized into the following chapters: In addition to this introduction, the second chapter is an overview of the published studies related to the subject of this work; the third chapter describes the BFC process; the fourth chapter, in three subchapters, describes the DEA, SFA, and classification methods, whose integration addresses the problem of process efficiency evaluation; the fifth chapter, organized in two subchapters, discusses the motivation for and the integration of the DEA, classification, and SFA methods; the sixth chapter describes the newly proposed integrated method in two subchapters through the case study; the seventh chapter discusses the obtained results; and the eighth chapter provides the conclusion. Finally, the references chapter lists the literature used in the paper.

### **2. Literature Review**

The multi-criteria decision aid-based model for measuring efficiency as well as the methods used in it are represented in the world literature primarily because of the importance of problem solving and its global representation in many areas of human life.

As we outlined in the introduction, the most commonly used methodologies for determining efficiency are DEA and SFA, and they all belong to a broad group of so-called MCDA methodologies. The papers [12,13] discuss DEA as an MCDA methodology and [14] discusses SFA as an MCDA methodology. The authors of [15] provide evidence of DEA and SFA as MCDA methodology, and [16] discusses two MCDA classifiers. MCDA has been applied in many different fields of human activities [17], which can be found in the literature, such as healthcare [18,19], finance and banking [20], environmental protection [21], construction and manufacturing [22], computer science [23], tourism [24], emergency management [25], logistics [26], electricity supply [27] and others [28,29].

When it comes to efficiency and its measurement, there are widespread applications in the same areas of human activity, such as healthcare [30], finance and banking [31], environmental conservation [32], construction and manufacturing [33], computer science and robotics [34], tourism [35], emergency management [36], logistics [37], electricity supply [38] and others [39,40].

As far as the BFC process is concerned, it is not specific for the Republic of Serbia, in which, as we have already stated in the introduction, it is implemented by NALED. It should be noted that more than 90 local governments from Bosnia and Herzegovina, Montenegro, Croatia, Macedonia, and Serbia are improving their business environment by up to 70% through the BFC South-East Europe (SEE) certification program [41].

Specifically, with the support of the German Agency for International Cooperation (GIZ) and the Open Regional Fund for the Modernization of Municipal Services, the BFC SEE program was launched and implemented through the regional network of institutions in order to establish a unique standard of business environment quality for the SEE local self-governments. The regional network brings together various governmental and non-governmental institutions [11,41].

The literature on the evaluation of LED in the Republic of Serbia can be found in [42,43] and on the NALED BFC process in the Republic of Serbia in [44].

Different aggregations of individual MCDA methodologies for assessing the importance of individual criteria have been discussed in [45,46]. When it comes to evaluating the efficiency of local governments, classical approaches, such as the non-parametric DEA or parametric SFA methods, are most commonly used. The parametric SFA method was used for the analysis of the efficiency expenditure indicators in the Republic of Serbia's local governments [47]. The conclusion was that local self-governments could not effectively resolve issues such as demographic and socioeconomic constraints. The efficiency of municipalities in Portugal was evaluated in two phases: SFA and Tobit regression [48]. DEA has also been used for regional efficiency evaluation [49,50]. The comparative analysis of the cost-effectiveness of Belgian local governments was performed by FDH, DEA, and econometric approaches in [51], while the authors of [52] dealt with the public sector's efficiency in German municipalities. Another technical efficiency evaluation of major Italian municipalities by the DEA method can be found in [53]. Furthermore, a cost efficiency evaluation of Australian local governments was conducted by using mathematical programming and econometric approaches [54], while DEA was also used in the case of South African local governments' efficiency measurement [55].

In his book, Rao [56] provides a comprehensive description of the DEA methodological approach, and Cooper, Seiford, and Zhu provide an overview of DEA in their articles [57–59]. The DEA methodology was also used to create a meaningful multiple criteria decision-making platform for evaluating the performance of engineering schools, and in [60], a user-written data envelopment analysis command for the Stata software tool is presented. In the literature, different integrations of DEA with other MCDA methods also exist, such as, for example, with the analytical hierarchy process (AHP) [61,62]. An overview of the SFA methodology can be found in [63–65]. User-written SFA commands for the Stata software tool are given in [66]. SFA has been integrated with other MCDA methods, such as TOPSIS, in order to obtain a method with better characteristics [67].

We should note that in the literature review, there are many attempts to integrate SFA and DEA methods [68–71], as well as integrate them with some other methods from the MCDA group [72,73], especially in order to obtain better quality methods for evaluating the efficiency of different processes. Finally, it should be noted that there are a number of SFA and DEA integrations with data mining methods in the literature [74–77].

### **3. BFC Process Efficiency**

Considering today's level of development of human society, it can be said that, when it comes to a city or local self-government, two of its main functions are service and production. The production group of functions covers crafts, industry, construction, etc., while the service group covers all service activities that take place in the city or local government area. Of great importance for the development of the city are the so-called basic functions, which include those service and production activities directly used by the population inside and outside the city (including the population of the wider local government area), as well as the functions of providing and planning the economic basis for the functioning of the city and its future development. These basic functions influence the creation and planning of appropriate city infrastructure, job creation, etc. However, for the life of the people in the city, it is also necessary to provide social functions, such as information, education, recreation, and so on. These two types of functions indicate that a city and its local government are in fact a conglomerate of several basic and social functions exercised in the area. As for the competitiveness of cities and local governments, the basic functions are of greater importance, and there is a need to evaluate their value in that domain.

Local self-governments and cities must provide the best possible environment in which the realization of bigger direct investments will provide conditions for job creation, an improvement of the salaries of already employed people, and thus a better overall life for the population. Local governments must create LED plans that allow them to compete effectively at the local, regional, and national levels through to the global environment.

As it is well known, uneven regional development is one of the biggest problems Serbia faces. Investments represent one of the indicators of these regional differences between cities and local governments in Serbia.

Namely, when considering cities and local self-governments, investors have in mind several important characteristics, ranging from geographical location, existing production infrastructure, personnel profiles, and the work of the local self-government to successful examples of implemented investments. For this reason, local governments must constantly improve their investment conditions and thus increase their competitiveness.

Following the best practices of the European Union, in 2007, NALED launched a program for the certification of municipalities, cities, and local governments with a favorable business environment in order to create a positive business environment and increase the level of investment in local governments, the number of employees, the average salary of employees, etc. The project was made possible with the institutional support of the Ministry of Economy and Regional Development of the Republic of Serbia, with the aim of familiarizing local self-governments with the standards they need to meet in order to be eligible and certified.

BFC is a procedure that introduces rules and enables tools for assessing the quality of services to businesses by municipalities. The certification is intended for all municipalities, cities and local governments in the Republic of Serbia who want to improve the conditions for business in their communities, attract new investments, and stimulate the development of the local economy.

In addition to the financial benefits, communication with the local administration, professional and accurate behavior, as well as a positive expectation of partnership in the future are also important for business. Investors appreciate the most realistic possible picture of the environment in cities and municipalities, which implies predictability of the duration and cost of all individual procedures, from the construction of facilities and their traffic, energy, and general infrastructure connections, through labor employment and company registration, to the payment of all duties.

Twelve criteria were established as a basis for evaluating whether and to what extent a municipality [78], i.e., a city, met the standards of a favorable business environment. These twelve criteria, which are used in the BFC process in the Republic of Serbia, are as follows [79]:


A favorable business environment is provided by those municipalities, i.e., cities, that meet 75% of the above criteria. The official certificate is issued by NALED and the Ministry of Economy and Regional Development of the Republic of Serbia as a document that investors can use as proof that a particular local government offers everything needed for a successful start-up.

Today, more than one-third of all local self-governments in the Republic of Serbia are improving their business environment and participating in NALED's certification program, and more than 20 municipalities have earned the Certificate of Favorable Business Environment.

The certification criteria give clear guidance to municipalities and cities on the type and quality of services they should offer, as well as recommendations on what reforms they need to implement. The ultimate goal of the certification is to strengthen competitiveness, promote investments, increase employment, and, finally, raise the standard of living in the Republic of Serbia.

The establishment of the criteria is a process that takes place in real time and involves upgrading them both in quality and quantity. Given the rapid development of human society at the beginning of the 21st century in the era of the fourth industrial revolution, the BFC certification process itself is therefore subject to constant evaluation and mandatory recertification every two years.

The certification program for municipalities with a positive business environment is unique in the Republic of Serbia and includes several activities.

In the BFC process, at the beginning of each current year, NALED defines the significance of the criteria as the average score of the previous levels of assessment, which is often referred to as the relative importance of the observed criteria *C<sub>i</sub>* (*i* = 1, . . . , 12) [10,44,45,61]. NALED's ratings of the criteria are given in Table 1.


**Table 1.** NALED's evaluation of the importance of the criteria of the BFC process in the Republic of Serbia.

Using the experts' assessment of the fulfillment of the criteria by each local government in the certification and the established rule that at least 75% must be met for each criterion, it is possible to determine which city, i.e., local government, deserves certification. Additionally, since it is very useful to plan the intensity of investment required for local economic development in the coming period, the evaluation of the criteria by the NALED experts given in Table 1 is useful in determining the rank of each local government, which can be done using some of the known MCDM methods.
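As a minimal sketch of the eligibility rule described above (each criterion fulfilled to at least 75%) and of a simple weighted ranking score, consider the following; the function names, fulfillment percentages, and equal weights are invented for illustration and are not NALED's actual figures:

```python
# Hypothetical illustration of the BFC eligibility rule: a municipality
# qualifies when each of the twelve criteria is fulfilled to at least 75%.
# The fulfillment values and equal weights below are invented examples,
# not NALED's actual Table 1 ratings.

def bfc_eligible(fulfillment, threshold=0.75):
    """Return True if every criterion meets the threshold."""
    return all(f >= threshold for f in fulfillment)

def weighted_score(fulfillment, weights):
    """Aggregate fulfillment into a single weighted score for ranking."""
    return sum(f * w for f, w in zip(fulfillment, weights)) / sum(weights)

fulfillment = [0.90, 0.80, 0.76, 0.85, 0.95, 0.78,
               0.82, 0.88, 0.79, 0.91, 0.77, 0.84]
weights = [1.0] * 12  # equal importance, as a placeholder only

print(bfc_eligible(fulfillment))   # True: all twelve criteria >= 75%
```

A ranking of several local governments would then simply sort them by `weighted_score`.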

As mentioned in the introduction, this paper aimed to evaluate the efficiency of this BFC process, considering the amount of investment per capita as an output criterion. Investment per capita is one of the most important attributes in measuring local economic development success. In our case, this output was chosen in order to make a comparative analysis of the ability of local governments to attract direct investment as well as to determine whether efficiency is related to the level of fulfillment of the defined input criteria.

### **4. Background**

As we already indicated in the introduction, we consider the integration of different multicriteria methods in order to evaluate the efficiency of the BFC certification process of cities and local governments in the Republic of Serbia. The integration procedure of the DEA, SFA, and classification methods is proposed to improve the features of the individual methods. The case study of the efficiency evaluation of the BFC process in the cities of the Republic of Serbia is used to verify the validity of the proposed procedure. However, it could be applied generally in other cases of evaluating the efficiency of a univariate or multivariate process, as mentioned in Section 3 (BFC process).

Namely, it is known that within MCDA, we generally identify at least one decision maker (DM), who is solely responsible for making the decision, whatever it may be. The DM chooses one of several alternative decisions, judging them by a set of criteria, attributes, or points of view, which are most often opposed to each other. The DM may express preferences regarding the offered alternatives and criteria, which the MCDA algorithms use as parameters to find a solution to the problem. These problems, as the ultimate implication of a decision, fall into various possible categories:


MCDA is applied in many different fields of human activity, as we already stated in Section 2 [17–29]. The process of MCDA is often complex and depends on the specific area where it is applied and on whether, and which, preferences the decision maker has. As a result, many different algorithms and their implementations have arisen [80,81]. Several papers have dealt with more or less successful attempts to simplify decision-making by choosing the best algorithm for MCDA problems [82]. In one of them [83], this procedure is divided into several steps. We look at inputs and outputs as attributes or criteria for evaluating the decision-making unit (DMU), with minimizing inputs and/or maximizing outputs as the associated goals. With this approach, we can practically consider this process as one ranking that leads to a classification into basically two possible groups, in both the MCDA and DEA formulations [12,13,84]. Additionally, this classification into basically two groups, efficient and inefficient, coincides with SFA if we consider the input and output attributes as variables of a single function [14]. In the literature, classifications using DEA and SFA together with other MCDA methodologies can also be encountered, such as Promethee's multicriteria decision-making (MCDM) [23].

The DEA and SFA methods can be used to solve the problem of determining the efficiency of the BFC process of cities and municipalities in the Republic of Serbia. For this purpose, a synthetic summary indicator should be created, taking into account all input and output attributes used to accomplish the BFC process itself. The DEA efficiency measure is defined as the ratio of the weighted output to the weighted input. The efficiency measure enables the aggregation of all the observed inputs (outputs) into one virtual input (output), representing the sum of the products of the coefficients and values of the inputs or outputs, which implies solving the problem of expressing the input and output data in ranges of values that are mutually comparable.
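This virtual input/output aggregation can be sketched as follows; the weights and data are purely illustrative and not values from the case study:

```python
# Virtual input/output aggregation behind the DEA efficiency measure:
# each DMU's inputs (outputs) are collapsed into one weighted sum, and
# efficiency is their ratio.  Weights and data here are illustrative.

def virtual(values, weights):
    """Weighted sum of inputs or outputs (the 'virtual' input/output)."""
    return sum(w * v for w, v in zip(weights, values))

def efficiency(outputs, inputs, u, v):
    """Ratio of the virtual output to the virtual input for one DMU."""
    return virtual(outputs, u) / virtual(inputs, v)

e = efficiency(outputs=[4.0, 2.0], inputs=[5.0, 3.0],
               u=[0.5, 1.0], v=[0.4, 0.5])
print(round(e, 4))  # 4.0 / 3.5 -> 1.1429
```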

### *4.1. DEA*

As aforementioned, DEA has been widely used for assessing the relative efficiency of decision-making units (DMUs) in an observed set. All DMUs use the same multiple commensurate inputs to produce multiple commensurate outputs. The original efficiency definition is given in [85]; it generalizes the single-input to single-output ratio in the definition of efficiency as the ratio of the sum of weighted outputs to the sum of weighted inputs. Let us suppose that we have a set of *n* DMUs (DMU*j*, *j* = 1, . . . , *n*), which use inputs *xij* (*i* = 1, . . . , *m*) to produce outputs *yrj* (*r* = 1, . . . , *s*). The absolute efficiency measure model is as follows [86]:

$$E_j = \frac{\sum_{r=1}^{s} u_r y_{rj}}{\sum_{i=1}^{m} v_i x_{ij}},\tag{1}$$

where *v<sup>i</sup>* (*i* = 1, . . . , *m*) are input multipliers and *u<sup>r</sup>* (*r* = 1, . . . , *s*) are output multipliers (weights). The above definition corresponds to a discrete MCDM. The determination of weights is a very sensitive and complicated process. The idea behind the DEA model is to avoid a priori weight determination. The authors of the DEA model in Charnes et al. [87] allowed each DMU to choose the most appropriate set of weights, with the goal of becoming as efficient as possible compared with the other units in the observed set. The linear programming (LP) weighted form of the basic constant return to scale model (DEA CCR or DEA CRS) with output orientation [87] is as follows:

$$(\min)\ h_k = \sum_{i=1}^{m} v_i x_{ik},\tag{2}$$

such that:

$$\sum_{r=1}^{s} u_r y_{rk} = 1,\tag{3}$$

$$\sum_{i=1}^{m} v_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} \ge 0, \ j = 1, \dots, n,\tag{4}$$

$$v_i \ge 0, \ i = 1, \dots, m; \quad u_r \ge 0, \ r = 1, \dots, s.\tag{5}$$

The optimal efficiency scores *h<sup>k</sup>* are obtained by solving the linear model of Equations (2)–(5) *n* times (once for each DMU, with the goal of comparing it with the other DMUs). As a solution of the basic Charnes, Cooper, and Rhodes (CCR) DEA model [87], all efficient units are assessed with even efficiency scores *h<sup>k</sup>* (*k* = 1, . . . , *n*) equal to 1, while the inefficient ones are assessed with a score greater than 1 (usually reported as the reciprocal value less than one). All inefficient units are enveloped by the production frontier, consisting of efficient DMUs. For each of the inefficient DMUs, a reference set of real efficient or virtual composite peer units (lying on the efficient frontier) is composed. This model is transformed into the so-called Banker, Charnes, and Cooper (BCC) model, described in [88], to incorporate the variable return to scale assumption. Namely, with respect to the DEA CRS model, the DEA BCC or DEA VRS model has an additional variable *u*\* that defines the position of an auxiliary hyperplane lying above or at each DMU included in the analysis; it checks that the specified DMU has reached the desired output level with minimum input engagement, and among all possible supporting hyperplanes of all DMUs, the one with the least horizontal distance from the observed DMU is selected. For *u*\* = 0, the BCC model reduces to the CCR model:

$$(\min)\ h_k = \sum_{i=1}^{m} v_i x_{ik} - u^{*},\tag{6}$$

such that:

$$\sum_{r=1}^{s} u_r y_{rk} = 1,\tag{7}$$

$$\sum_{i=1}^{m} v_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} - u^{*} \ge 0, \ j = 1, \dots, n,\tag{8}$$

$$v_i \ge 0, \ i = 1, \dots, m; \quad u_r \ge 0, \ r = 1, \dots, s.\tag{9}$$

It should be noted that, from these two basic DEA CCR and DEA BCC models, many other variants and extensions of DEA have been developed to solve real-world problems.
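As an illustration, the output-oriented CCR model of Equations (2)–(5) can be solved with a generic LP solver. The sketch below uses `scipy.optimize.linprog` on toy single-input, single-output data; it is not the implementation used in the paper:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_oriented(X, Y, k):
    """Solve the weighted output-oriented CCR model (Eqs. (2)-(5)) for DMU k.

    X: (n, m) input matrix, Y: (n, s) output matrix.  Returns h_k, which
    equals 1 for efficient DMUs and exceeds 1 for inefficient ones.
    """
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [v_1..v_m, u_1..u_s], all non-negative.
    c = np.concatenate([X[k], np.zeros(s)])        # minimise sum_i v_i x_ik
    A_eq = np.concatenate([np.zeros(m), Y[k]]).reshape(1, -1)
    b_eq = [1.0]                                   # sum_r u_r y_rk = 1
    A_ub = np.hstack([-X, Y])                      # sum_r u_r y_rj <= sum_i v_i x_ij
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun

# Toy data: three DMUs, one input, one output.
X = np.array([[2.0], [4.0], [8.0]])
Y = np.array([[2.0], [4.0], [4.0]])
scores = [ccr_output_oriented(X, Y, k) for k in range(3)]
print([round(h, 3) for h in scores])   # [1.0, 1.0, 2.0]
```

Here DMUs 1 and 2 produce one unit of output per unit of input and are efficient; DMU 3 achieves only half that ratio, so its output-oriented score is 2.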

### *4.2. SFA*

Stochastic frontier analysis is a parametric approach to efficiency measurement introduced by Aigner, Lovell, and Schmidt [89] and Meeusen and Van den Broeck [90]. It takes measurement error into account when estimating the efficiency of the firm under observation.

Let us assume that firm *j* (*j* = 1, . . . ,*n*) produces the output level *y<sup>j</sup>* by using inputs given as a vector **x***<sup>j</sup>* . The production function is given as *f*(**x***<sup>j</sup>* , β), where β is a parameter vector to be estimated. The output level is also under the effect of the efficiency ξ*<sup>j</sup>* and random error *v<sup>j</sup>* . Finally, output production for the firm *j* is given by the form:

$$y_j = f(\mathbf{x}_j, \boldsymbol{\beta})\,\xi_j v_j\tag{10}$$

Since ξ*<sup>j</sup>* represents the level of efficiency for firm *j*, it must be in the interval (0, 1]. The firm is efficient if ξ*<sup>j</sup>* = 1; otherwise, it is inefficient. The aim is to estimate the parameter vector β together with ξ*<sup>j</sup>* and *v<sup>j</sup>* for the firm under observation. For this purpose, the natural log of Equation (10) is taken, together with the assumption that the production function of the inputs is linear in logs:

$$\ln(y_j) = \beta_0 + \sum_{i=1}^{m} \beta_i \ln(x_{ij}) + v_j - u_j,\tag{11}$$

where *u<sup>j</sup>* = −ln ξ*<sup>j</sup>* represents the level of inefficiency, while *v<sup>j</sup>* represents the identically and independently distributed random error. The stochastic frontier is given by $\beta_0 + \sum_{i=1}^{m} \beta_i \ln(x_{ij}) + v_j$, while *u<sup>j</sup>* indicates the inefficiency level.

After estimating the parameters [91] for the given Equation (11), the technical efficiency of firm *j* can easily be calculated as a relative distance of the actual output to the estimated stochastic frontier:

$$TE_j = e^{-u_j}\tag{12}$$
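Full SFA estimates β and the error components by maximum likelihood, which is beyond a short sketch. As a simpler, hedged stand-in, the corrected OLS (COLS) approach fits the log-linear model of Equation (11) by ordinary least squares, shifts the frontier to envelop the data, and applies Equation (12); the data below are invented:

```python
import numpy as np

def cols_efficiency(X, y):
    """Corrected OLS (COLS) approximation of the frontier in Eq. (11).

    Full SFA estimates beta, sigma_u, and sigma_v by maximum likelihood;
    here we instead fit the log-linear model by OLS, shift the intercept
    so the frontier envelops the data, and set
    u_j = max residual - residual_j, TE_j = exp(-u_j)  (Eq. (12)).
    X: (n, m) input levels, y: (n,) output levels.
    """
    Z = np.column_stack([np.ones(len(y)), np.log(X)])
    beta, *_ = np.linalg.lstsq(Z, np.log(y), rcond=None)
    resid = np.log(y) - Z @ beta
    u = resid.max() - resid      # non-negative inefficiency terms
    return np.exp(-u)            # technical efficiency in (0, 1]

# Toy data: output roughly Cobb-Douglas in one input.
X = np.array([[1.0], [2.0], [4.0], [8.0]])
y = np.array([1.0, 1.9, 4.1, 7.0])
te = cols_efficiency(X, y)
print(np.round(te, 3))
```

By construction, the best-performing unit lies on the frontier (TE = 1), and every other unit receives an efficiency strictly between 0 and 1.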

### *4.3. Classification*

Classification is an important technique, commonly used in expert systems in order to support domain experts in identifying knowledge within large volumes of data.


Classification is considered a supervised learning task of data mining (DM) and machine learning (ML), where the dataset is divided into classes (two or more) and each instance of the set has a tag identifying the class to which it belongs. Supervised machine learning algorithms are used to induce a classifier from a set of properly classified instances, i.e., the training set. The test set, as a set of properly classified data instances, is used to measure the quality of a classifier obtained through the training process. Different types of models are used to represent classifiers and there are numerous algorithms available to induce classifiers from data: Logistic regression, decision trees, neural networks, k-nearest neighbors, and support vector machines [92–94]. For our case study, we chose naïve Bayes.

Bayesian classifiers imply that the knowledge of an event is described by the probability of its occurrence. The naïve Bayes classifier requires a small amount of training data, so it can be easily implemented, and experience to date shows that, in the case of independent predictors, it provides better results than other classifiers [95–97].
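As a minimal illustration of the principle (not the actual implementation used in the case study, where Weka's classifier was applied), a Gaussian naïve Bayes can be sketched in plain Python: class priors and per-feature likelihoods are estimated from the training set, and prediction picks the class with the largest posterior log-probability.

```python
import math

def fit_gaussian_nb(X, y):
    """Estimate class priors and per-feature mean/variance per class."""
    stats = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        stats[c] = (len(rows) / len(y), means, variances)
    return stats

def predict(stats, x):
    """Pick the class maximizing log prior + sum of log Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for c, (prior, means, variances) in stats.items():
        lp = math.log(prior)
        for v, m, var in zip(x, means, variances):
            lp -= 0.5 * math.log(2 * math.pi * var) + (v - m) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = c, lp
    return best
```

The "naïve" independence assumption is what lets the per-feature log-likelihoods simply add up, which is also why the classifier works with very little training data.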

The basic measure of classifier success is the confusion matrix, which is given in Figure 1. Additionally, apart from the confusion matrix, it is useful to define several other measures of classification success, such as the accuracy, precision, recall, F measure and area under the receiver operating characteristics (ROC) curve.


**Figure 1.** Confusion matrix for the classification process.

Accuracy = (TP + TN)/N, Precision = TP/(TP + FP), Recall = TP/(TP + FN), wherein TP is True Positive, TN is True Negative, FP is False Positive, FN is False Negative, and N is the total number of samples (instances) in a dataset.

The accuracy measure is unreliable in the case of a very unequal distribution of instances between classes (so-called skewed classes). Therefore, it is necessary to make a compromise between the measures of precision and recall in practice. The F measure combines precision and recall; the so-called F1 measure gives equal importance to both of them: F1 = 2 × Precision × Recall/(Precision + Recall).

The ROC curve illustrates the diagnostic ability of a binary classifier system by plotting recall (sensitivity) against the false positive rate FPR = FP/(FP + TN), i.e., 1 − specificity, where specificity = TN/(TN + FP).
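All of the measures above follow directly from the four confusion-matrix counts; a small illustrative sketch (our notation, not code from the study):

```python
def classification_metrics(tp, tn, fp, fn):
    """Derive the standard quality measures from confusion-matrix counts."""
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                    # sensitivity / TPR
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)                       # 1 - specificity (ROC x-axis)
    return accuracy, precision, recall, f1, fpr
```

For skewed classes, accuracy alone can look good while precision or recall collapses; the F1 compromise guards against exactly that.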

Algorithms for the selection of the optimal feature subset perform a search within the space of feasible solutions. Most of the commonly used classification methods are very sensitive to the dimension of the dataset and the instance/feature ratio [98].

The selection algorithm searches for a subset of attributes that provides the best result. The concept of feature ranking is limited and oriented to those classifiers that are very sensitive to the initial ordering of the input features. We proposed a ranker evaluation approach for the detection of attributes because it ranks the attributes by their importance. Weka [99] is a software toolkit that reduces the information volume by applying various ranking algorithms and dimensionality-reduction techniques of the kind suggested in the previous sentence.

Ranking methods for optimal feature selection evaluate a single feature by using various metrics and assign a rank based on its performance. The evaluation metrics are commonly founded on the features' statistical properties or their expected potential, and the reduction of the dimensionality of data is based on those properties [100]. Attribute selection algorithms can be broken down into filter methods and learning-based (wrapper) methods. In this paper, we chose to use three algorithms from the filter group: InfoGain (information gain), GainRatio, which is practically derived from InfoGain, and ReliefF. All three perform individual attribute ranking; ReliefF was originally intended for classification into only two classes, which is exactly the case we solve in this paper, since we are interested in whether the BFC process is efficient or not.

The complexity of group correlation analysis derives from the huge number of attribute combinations whose interactions should be taken into consideration, O(2<sup>N</sup>), where N is the number of attributes in the model [98]. Entropy, commonly used in information theory [101], represents the "purity" of an arbitrary collection of examples and measures the system's unpredictability. The entropy of Y is given by Equation (13):

$$H(Y) = -\sum\_{y \in Y} p(y) \log\_2(p(y)), \tag{13}$$

where *p(y)* is defined as the marginal probability density function for the random variable *Y*. Let us assume that *Y* and *X* are random variables in the training set *S*. If the entropy of *Y* with respect to the partitions induced by *X* is less than the entropy of *Y* prior to partitioning, then knowing *X* reduces the uncertainty about *Y*. The conditional entropy is given by Equation (14):

$$H(\mathbf{Y}|\mathbf{X}) = -\sum\_{\mathbf{x}\in\mathbf{X}} p(\mathbf{x}) \sum\_{\mathbf{y}\in\mathbf{Y}} p(\mathbf{y}|\mathbf{x}) \log\_2(p(\mathbf{y}|\mathbf{x})),\tag{14}$$

where *p*(*y*|*x*) is the conditional probability of *y* conditional to the knowledge of *x*.

The entropy can be considered as a criterion of impurity in the training set *S*. Therefore, we can define a measure of the amount by which the entropy of an attribute decreases, reflecting the additional information about the attribute provided by the class [102]. This measure is known as information gain (InfoGain). InfoGain evaluates the worth of an attribute by measuring the information gain with respect to the class, according to Equation (15):

$$\text{InfoGain}(\text{Class}, \text{Attribute}) = \text{H}(\text{Class}) - \text{H}(\text{Class}|\text{Attribute}),\tag{15}$$

where H represents the information entropy.

The information gain ratio (GainRatio) is a non-symmetrical measure, introduced to compensate for the bias of the InfoGain [103] by reducing its preference for high-branching attributes. GainRatio should be larger when data is evenly spread and smaller when all data belong to one branch. GainRatio, which takes the number and size of branches into account when choosing an attribute, is given by Equation (16):

$$\text{GainRatio} = \frac{\text{InfoGain}}{\text{H(Class)}} \tag{16}$$

Equation (16) represents the normalization of the InfoGain, dividing it by the entropy of the class. Due to normalization, the GainRatio values fall in the range [0, 1]. The knowledge of the class fully predicts the attribute if the GainRatio is equal to 1. On the other hand, if the GainRatio is equal to 0, one can conclude that there is no relation between the attribute and the class. The decision tree classification method C4.5 [104], a successor of ID3 [105], employs the GainRatio as a criterion of attribute selection at every node.
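Equations (13)–(16) can be checked on a toy dataset; the sketch below is an illustration (not the Weka implementation actually used) computing entropy, conditional entropy, InfoGain and the GainRatio of Equation (16):

```python
import math
from collections import Counter

def entropy(values):
    """H(Y) = -sum p(y) log2 p(y), Eq. (13)."""
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in Counter(values).values())

def conditional_entropy(ys, xs):
    """H(Y|X), Eq. (14): entropy of Y within each partition induced by X."""
    n = len(ys)
    h = 0.0
    for x in set(xs):
        subset = [y for y, xv in zip(ys, xs) if xv == x]
        h += len(subset) / n * entropy(subset)
    return h

def info_gain(ys, xs):
    """Eq. (15): reduction of class entropy after partitioning on the attribute."""
    return entropy(ys) - conditional_entropy(ys, xs)

def gain_ratio(ys, xs):
    """Eq. (16): InfoGain normalized by the class entropy, in [0, 1]."""
    return info_gain(ys, xs) / entropy(ys)
```

An attribute that mirrors the class exactly yields GainRatio = 1; an attribute independent of the class yields GainRatio = 0, matching the interpretation above.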

One of the possible filter methods that proceeds by attribute ranking is ReliefF, based on the procedure of the nearest neighbors (*k*-nearest neighbors or k-NN).

The algorithm estimates and ranks each attribute with a global grade in the range [−1, . . . , 1]. Weight calculation is based on the probability that the nearest neighbors from two different classes have different values for the attribute and the probability that two nearest neighbors from the same class have the same value of the attribute. The function diff(Attribute; Instance1; Instance2) computes the difference of the attribute's values obtained in two instances. For discrete attributes, the difference is either 1 (different values) or 0 (the same values), while for continuous attributes the differences are normalized on the interval [0, 1]. Kononenko [106] notes that the higher the number of instances, the more reliable ReliefF's estimates, but the running time also increases. The ReliefF algorithm is given in Figure 2. We used Weka software [99] to perform the feature selection algorithms.

**Figure 2.** Relief algorithm.
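For concreteness, the basic Relief weight update can be sketched as follows (a simplified two-class version with nearest hit/miss and k = 1; the study itself used Weka's ReliefF [99]): attributes that differ on the nearest miss gain weight, attributes that differ on the nearest hit lose weight.

```python
import random

def diff(a, b, lo, hi):
    """Attribute value difference, normalized to [0, 1] for continuous values."""
    return abs(a - b) / (hi - lo) if hi > lo else 0.0

def relief(X, y, n_iter=40, seed=1):
    """Basic two-class Relief: reward attributes that differ on the nearest
    miss and penalize attributes that differ on the nearest hit."""
    rng = random.Random(seed)
    n_attrs = len(X[0])
    lo = [min(row[a] for row in X) for a in range(n_attrs)]
    hi = [max(row[a] for row in X) for a in range(n_attrs)]
    w = [0.0] * n_attrs
    for _ in range(n_iter):
        i = rng.randrange(len(X))
        def dist(j):
            return sum(diff(X[i][a], X[j][a], lo[a], hi[a]) for a in range(n_attrs))
        hit = min((j for j in range(len(X)) if j != i and y[j] == y[i]), key=dist)
        miss = min((j for j in range(len(X)) if y[j] != y[i]), key=dist)
        for a in range(n_attrs):
            w[a] += (diff(X[i][a], X[miss][a], lo[a], hi[a])
                     - diff(X[i][a], X[hit][a], lo[a], hi[a])) / n_iter
    return w
```

An informative attribute separates the classes (large nearest-miss difference, small nearest-hit difference) and so accumulates a positive weight, while a constant or noisy attribute stays near zero.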

### **5. Methodology**

According to Stewart [13], the MCDA formulation corresponds to the DEA formulation. The inputs and outputs are seen as attributes or criteria for DMUs' efficiency evaluation, where the associated objective is to minimize the inputs and/or to maximize the outputs. Practically, we can consider DEA as a non-parametric method, which leads to classification in basically two groups of efficient and inefficient decision-making units. Another option is to use a parametric SFA method, which considers inputs and outputs as variables of the production function.

Classification is a methodology for dividing the dataset into two or more classes. ReliefF is a classifier for attribute ranking, which enables feature selection and reduction of the dimensionality of the database by selecting only the necessary attributes. The feature selection process offers the following positive effects:


• Speeds up the calculation.

These are the reasons for proposing an algorithm that integrates efficiency assessment methods with classification methods into a framework that shows better characteristics than each of the methods used individually.

### *5.1. Basic Motivation for Integrating DEA and SFA with Classification*

This paper attempts to optimize the process of solving the considered univariate problem of determining the efficiency of certification (BFC) of local governments in the Republic of Serbia. The authors had in mind four underlying motives, i.e., reasons for building one model that integrates DEA, SFA, and classification methods to obtain a method with better characteristics than each model individually:

1. The results obtained with the group of DEA methods and with the group of SFA methods very often differ significantly. In our case, we classified the efficiency of BFC processes into two classes, efficient and inefficient cities, which implies using one type of informational n-redundancy (*n* ≥ 3). This means using at least three different methods from the mandatory groups of DEA and SFA methods (in our case DEA CCR, DEA BCC and SFA) and classifying as efficient only those DMUs that are evaluated as efficient by at least two out of these three methods.

2. Classification algorithms can be useful to assess the essential parameters, before and after the attribute selection, to determine and assess the improvement of classification obtained by reducing attributes. In our case, the naïve Bayes classifier was chosen as the most suitable for the set of a small number of training units.

3. DEA, as one of the most commonly used methods, requires that the ratio of the total number of DMUs to the number of input and output attributes should be at least 3 (a milder condition is 2). Classification can be useful in the case of the necessity of problem dimensionality reduction.

4. Using a well-known attribute selection procedure might be helpful in the reduction of the number of attributes, which solves the previously mentioned limitations of the DEA, as well as the problem of reducing noise in the data. The ReliefF algorithm, as one from the group of Relief algorithms, is selected to estimate the weights of attributes and rank them. In addition, for example, in [108], one can find a number of conclusions that justify using the ReliefF algorithm.

### *5.2. Integration of the Classification and E*ffi*ciency Evaluation*

The proposed new model for evaluating the efficiency of the certification process involves the integration of non-parametric DEA and parametric SFA models with the classification into the following algorithm shown in Figure 3.

**Figure 3.** Efficiency evaluation model that integrates three methods: DEA, classification and SFA.

According to Figure 3, the algorithm's steps in the proposed frameworks are:

1. Data preparations assume defining DMUs and criteria of the efficiency evaluation, collecting and cleaning necessary data and handling missing values.

2. Determining the efficiency scores using the three models (DEA CRS, DEA VRS and SFA). Using models with different assumptions allows deeper insight into inefficiency sources and result verifications. The DMUs are classified into two classes (efficient and inefficient subsets) and partially ranked within the class of inefficient ones.

3. Using the obtained efficiency scores as a two-class classification attribute (efficient and inefficient) for the assessment of the essential parameters characterizing the quality of classification, precision, and accuracy by the F-measure [109]. The naïve Bayes classification model is selected as the most appropriate one for small-set classification [110]. If the quality measures are satisfactory, go to step 5; otherwise, go to step 4.

4. The attribute selection process uses the ReliefF classifier, as the one that individually evaluates each of the attributes and ranks them. This ranking provides a base for selecting a subset of relevant parameters and checks the eligibility conditions for applying DEA methods by checking the allowed ratio of the number of attributes and units. Go to step 2.

Steps 2, 3 and 4 can be repeated until satisfactory results have been obtained.

5. The definitive ranking of DMUs and analyzing the final results.
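The control flow of steps 2–4 can be sketched as a loop. The callables `evaluate`, `quality` and `rank_attrs` below are hypothetical stand-ins for the DEA/SFA evaluation, the naïve Bayes quality check and the ReliefF ranking; the ratio test encodes the DMUs-to-criteria rule of thumb mentioned in motive 3.

```python
def efficiency_framework(dmus, attrs, evaluate, quality, rank_attrs,
                         f_target=0.9, ratio=3):
    """Skeleton of the proposed loop: evaluate efficiency (step 2), check the
    classification quality and the DMU/criteria ratio (step 3), and drop the
    weakest-ranked attribute (step 4) until both conditions pass (step 5)."""
    while True:
        scores = evaluate(dmus, attrs)                     # DEA CRS/VRS + SFA
        ratio_ok = len(dmus) >= ratio * (len(attrs) + 1)   # rule of thumb
        if quality(scores, attrs) >= f_target and ratio_ok:
            return attrs, scores                           # definitive ranking
        attrs = rank_attrs(dmus, attrs)[:len(attrs) - 1]   # keep best-ranked
```

This is only a structural sketch: in the case study the loop terminated after one reduction, going from 12 criteria to 5.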

### **6. Case Study: Evaluating of the E**ff**ectiveness of the BFC Process**

As we mentioned, the authors aimed to propose a new model for determining the efficiency of a successful BFC process in attracting foreign direct investments. The BFC process has been carried out since 2007, and by 2013 it had been completed successfully in 21 cities and municipalities in the Republic of Serbia. The main idea was to evaluate the efficiency of the BFC process in those cities and municipalities using the model given in Figure 3. The efficiency was assessed as the success of attracting investments, taking into account the achieved level of the 12 certification criteria given in Section 3.

### *6.1. Data*

The cities and municipalities that completed the BFC process, excluding one outlier, are considered as 20 DMUs in the efficiency evaluation (*j* = 1, . . . ,20). The 12 relevant BFC criteria, according to NALED's methodology and their importance, are given in Table 1 in Section 3. In the efficiency evaluation, the average values of these 12 criteria (C1–C12) are used as inputs (x*ij*, *i* = 1, . . . 12, *j* = 1, . . . 20), while the amount of investment per capita is used as an output (*y<sup>j</sup>* , *j* = 1, . . . 20). The case study of the efficiency evaluation of the BFC process of cities and local governments in the Republic of Serbia uses data, provided by NALED. The input and output criteria database together with BFC scores and ranking according to NALED's methodology (normalized value of *C<sup>i</sup>* × *w<sup>i</sup>* , *i* = 1, . . . ,12) are presented in Table 2.

**Table 2.** Descriptive statistics on data and NALED's evaluation.


The descriptive statistics show that the input criteria values fall into a relatively small range of 0.46 to 1.09, with a standard deviation from 0.02 to 0.15. Accordingly, the BFC process accomplishment is evaluated with scores of 0.81 to 0.98. Considering the BFC process evaluation, the municipalities would be expected to attract relatively even amounts of investments. However, the investments per capita range from 111.78 to 995.82, which is expected to make an impact on the efficiency evaluation.

### *6.2. E*ffi*ciency Evaluation: Preliminary Results and Classification*

The preliminary results of the efficiency evaluation of the BFC process in the 20 municipalities, according to the criteria given in Table 2, are given in Table 3. The second and third columns show the efficiency results obtained using the DEA model of Equations (2)–(5) under the assumption of constant returns to scale (CRS). The CRS assumption is stricter than the variable returns to scale (VRS) assumption imposed in the DEA VRS model (Equations (6)–(9)), with the results given in the fourth and fifth columns. The results of the parametric SFA model are given in the last two columns of Table 3.
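For readers who want to reproduce scores of this kind, the standard input-oriented CCR (CRS) envelopment model can be sketched with `scipy.optimize.linprog`. This is an illustrative sketch of the textbook model, not the exact software used in the study.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR (CRS) envelopment model, one LP per DMU o:
    min theta  s.t.  sum_j lam_j * x_ij <= theta * x_io  (each input i),
                     sum_j lam_j * y_rj >= y_ro          (each output r),
                     lam_j >= 0.
    The rule of thumb requires DMUs >= 3 * (inputs + outputs)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape                        # DMUs x inputs
    s = Y.shape[1]                        # outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]       # variables: [theta, lam_1..lam_n]
        A_in = np.c_[-X[o].reshape(m, 1), X.T]   # lam@x_i - theta*x_io <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]    # -lam@y_r <= -y_ro
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return scores
```

A DMU with score 1 lies on the efficient frontier; a score of, say, 0.5 means the same output could in principle be produced with half of each input.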


**Table 3.** Scores, ranks and descriptive statistics (12 input criteria).

As we already mentioned, all three methods provide classification into subsets of efficient and inefficient municipalities. Nevertheless, the size of the subsets varies depending on the methodology used. The DEA CCR model produces a subset of four efficient municipalities (municipalities 6, 7, 10 and 15), with an average efficiency score of 0.684 and a standard deviation of 0.275. On the other hand, under the DEA VRS model only 6 out of 20 municipalities are assessed as inefficient, with a mean efficiency score of 0.927. All municipalities assessed as inefficient according to the DEA CRS model exhibit increasing returns to scale. The size of the SFA efficient subset lies between the two obtained by DEA. It consists of nine municipalities, while the average efficiency score of the whole set is 0.759 (stdev = 0.241).

The most unrealistic results are obtained using the DEA VRS model, which allows the highest degrees of freedom among the three used efficiency evaluation methods. Those results are expected, taking into account that the number of 13 criteria (inputs and outputs) is too big in comparison with 20 DMUs. The optimal number of DMUs for DEA efficiency evaluation should be greater than or equal to (12 + 1) × 3 = 39, according to the rule of thumb given in the literature [107]. Conversely, the number of criteria for the 20 DMUs should be a maximum of 7. Therefore, in step two, we performed the classification using the naïve Bayes algorithm in order to check the data model's validity, and the results are given in Table 4.


**Table 4.** Bayes classification results based on the DEA CRS score (four efficient DMUs).

In step three, we performed feature selection using only the ReliefF classifier, because the InfoGain and GainRatio classifiers give only one attribute as important: C12. The results, given in Table 5, show that only five attributes, i.e., input criteria, are in the group of important ones: C12, C6, C2, C8 and C9, respectively. In this procedure of criteria selection, all criteria with a value at least an order of magnitude lower than the previous one were rejected and treated as insignificant.

**Table 5.** Feature selection using the ReliefF classifier.


In step four, we reapplied the three redundant methodologies for efficiency evaluation (DEA CRS, DEA VRS, and SFA). Based on the obtained results, given in Table 6, we conclude that now only two municipalities (7 and 15) belong to the efficient subset according to the DEA CRS model, while the DEA VRS model classifies six municipalities into the efficient subset. Once again, SFA provides the middle way, with a subset of five efficient municipalities.

**Table 6.** Scores, ranks and descriptive statistics (5 attributes).




In this step, we also repeated the classification using the naïve Bayes algorithm. The results and parameters for classification with five input criteria, given in Table 7, clearly show that better values are achieved for all parameters than in the case of the efficiency evaluation with 12 criteria. The precision at this stage is 0.9 (in comparison to 0.778), the recall is equal to 1 (the former value was 0.875), and consequently the F-measure is almost 1 (0.978 in comparison to 0.824). Those results are clear indications that attribute (criteria) selection and reduction of the problem dimensionality led to a positive improvement.

**Table 7.** Bayes classification results based on the DEA CRS score (two efficient DMUs).


The results presented in Table 6 might be considered as final.

### **7. Results Discussion**

The subset of five selected criteria is justified by the improved quality evaluation parameters after problem dimensionality reduction. The correlation between the ranks obtained before and after reduction shows that the DEA CRS model is the most robust (the correlation coefficient is equal to 97.47% at the significance level of 0.001). The correlations for the other two models (DEA VRS and SFA) are 46.79% and 49.39%, with no statistical significance. The degrees of freedom, together with the criteria values, are important factors for DEA efficiency evaluation (fewer criteria led to fewer efficient DMUs, as proven by the obtained results). On the other hand, SFA as a measure of the central tendency also relies on the criteria values and is very sensitive to their selection.

To analyze the rank similarity between the final individual efficiency scores obtained by using the three models (Table 6), we computed Spearman's rank correlation between them. We found a DEA rank correlation of 59.96%, with no statistical significance at the 0.01 level, due to the different economy of scale assumptions. Actually, all efficient municipalities under the constant returns to scale (CRS) assumption remain efficient under the variable returns to scale (VRS) assumption, but the number of efficient DMUs increased. The diversity in the ranks of the individual DMUs might be better explained by introducing NALED's ranks given in Table 3. For example, municipality 5 is ranked in 19th place according to NALED. It is efficient under VRS but ranked in 15th place under the CRS assumption. By comparing it with municipality 18 (NALED's top-ranked DMU), we can conclude that municipality 5 attracted more investment per capita (315.95) than municipality 18 (295.83). Therefore, municipality 5 is a benchmark DMU for municipality 18. At the same time, NALED's bottom-ranked municipality 17 attracted a smaller amount of investments per capita and cannot be a benchmark for municipality 5 if the model considers economy of scale as an important factor in the efficiency evaluation.

When it comes to the correlation between the parametric and nonparametric rankings, there is a correlation of 48.58% between the DEA CRS and SFA ranks and a correlation of 50.79% between the DEA VRS and SFA ranks with no statistical significance, which is in line with a previous study [111]. The authors of [112] stressed that the contradictory results obtained by DEA and SFA might be expected since they have different degrees of dispersion and perform rankings differently. The former one is a frontier deterministic method while the latter one is a central tendency stochastic method, which takes statistical errors into account. The implication of this divergence is given in [113], where the authors stated that the application of only one methodology for ranking may lead to misuse, especially in the case when there is no significant correlation between different models.
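Spearman's rank correlation used above can be reproduced with a short generic sketch (plain Python; it computes the Pearson correlation of average ranks, which also handles tied scores):

```python
def ranks(values):
    """Average ranks (1-based), with ties receiving the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman rho = Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    den = (sum((x - ma) ** 2 for x in ra)
           * sum((y - mb) ** 2 for y in rb)) ** 0.5
    return num / den
```

Applied to two efficiency-score columns, values near 1 mean the two models rank the DMUs alike; values near 0, as found here for DEA versus SFA, signal that the models order the municipalities quite differently.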

### **8. Conclusions**

The initial hypothesis from which the main goal of this paper arose was that it is possible to construct an efficiency assessment model that integrates different methods and produces better characteristics than any of the methods involved. This paper proposed a framework for integrating the non-parametric DEA CRS and DEA VRS and the parametric SFA efficiency assessment models with a machine learning algorithm for classification and quality evaluation. It was checked and justified by implementing it on the real-world case study of BFC certification of cities in the Republic of Serbia.

Having in mind the existence of real-world efficiency evaluation problems, such as the evaluation of the BFC process, which include a large number of influencing factors in comparison with the number of units of local governments, there is an expressed need to reduce the problem dimensions in order to obtain better results. In such a case, the classification method in synergy with feature selection of attributes is realistically the right choice.

In this paper, we successfully integrated representative methods of the two most commonly used approaches in assessing efficiency, DEA as non-parametric and SFA as parametric, with a machine learning classification method to reduce the number of criteria. Therefore, the novel model takes advantage of the good characteristics of each employed method and eliminates the bad ones using dimension optimization:


The suggestion is to use the DEA CRS model as the most robust and strict one for result verification. DEA, as a non-parametric model with no a priori weight assignment, is a very suitable method for efficiency evaluation in the presence of multiple inputs and outputs [81]. It is also a primary choice if there is a lack of some input and output criteria, which might be compensated by the advantages of another method. Therefore, criteria classification and selection are essential. SFA is the method of choice when numerous criteria exist and when the stochastic nature of the parameters must be included. All in all, these two methodologies (DEA and SFA) might be imposed as corrective factors to one another.

It is important to remark on the possible uncertainty, imprecision, and subjectivity of input data determination, which is the case in the considered BFC process, and implies the necessity of adding methodologies, which decreases the impact of this deficiency on the results obtained with the proposed methodology. This is, for example, the case with the methodologies based on the fuzzy, interval rough set and interval neutrosophic rough set theory, which are considered in the papers [114–120]. This could be the subject of future work of the authors.

**Author Contributions:** Conceptualization: M.J.; Project administration: S.N.; Writing—original draft, Validation: M.R.; Writing—review and editing, Formal analysis: G.S.; Software: V.S.; Investigation: V.S.; Supervision, Methodology: D.R. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Ministry of Education, Science and Technological Development, Republic of Serbia, grant number III44007.

**Conflicts of Interest:** The authors declare no conflict of interest.

### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

