### *3.2. Methodological Framework Using GMM System*

Considering the ROAA as the dependent variable, and the independent variables defined before, the model (1) is established:

$$\text{ROAA}\_{it} = \beta\_0 + \beta\_1 \text{ROAA}\_{i,t-1} + \beta\_2 \text{NLA}\_{it} + \beta\_3 \text{ETA}\_{it} + \beta\_4 \text{CIR}\_{it} + \beta\_5 \text{SIZE}\_{it} + \varepsilon\_{it} \tag{1}$$

where ε*it* is the random disturbance and ROAA*i*,*t*−1 denotes the one-period lag of the dependent variable.

The model was estimated using the GMM panel data methodology, which has two important advantages over cross-sectional analysis. First, it controls for individual heterogeneity, which matters here because ROAA depends on management decisions that may be closely related to the specificity of each bank. Second, the methodology resolves the endogeneity problem between the dependent variable and some of the explanatory variables by using lagged values of the dependent variable, in levels and in differences, as instruments. Thus, with this methodology, there is no correlation between the endogenous variables and the error term, yielding consistent estimates (Dietrich and Wanzenried 2014).

Therefore, the model was estimated using instruments chosen following Blundell and Bond's (1998) suggestion when deriving the system estimator used in this paper. Note that the system GMM estimator also controls for unobserved heterogeneity and for the persistence of the dependent variable. The regression was performed using a two-step dynamic panel with equations in levels, as suggested by the same authors. García-Herrero et al. (2009) also note that the system GMM estimator for an unbalanced panel employs all available instruments, so that non-significant independent variables are suppressed and the results become more effective.
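As a minimal sketch of the data preparation this estimator requires, the snippet below builds the lagged dependent variable and the first differences from a bank-year panel. The column names and figures are illustrative only, not the paper's dataset; a dedicated dynamic-panel routine would then combine such lags as instruments in the Blundell–Bond system.

```python
import pandas as pd

# Hypothetical bank-year panel; names and numbers are illustrative only.
df = pd.DataFrame({
    "bank": ["A", "A", "A", "B", "B", "B"],
    "year": [2015, 2016, 2017, 2015, 2016, 2017],
    "ROAA": [1.1, 1.3, 1.2, 0.8, 0.9, 1.0],
}).sort_values(["bank", "year"])

# Lagged dependent variable ROAA_{i,t-1}, a regressor in model (1)
df["ROAA_lag1"] = df.groupby("bank")["ROAA"].shift(1)

# First differences; deeper lags of ROAA in levels instrument the differenced
# equation, and lagged differences instrument the levels equation.
df["dROAA"] = df.groupby("bank")["ROAA"].diff()
df["ROAA_lag2"] = df.groupby("bank")["ROAA"].shift(2)  # instrument in levels
```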

### *3.3. Methodological Framework Using Value-Based DEA Method*

There are different ways to evaluate efficiency. Parametric methods assume a pre-defined functional relationship between resources and products, and usually rely on averages to determine what could have been produced. Non-parametric methods, among which is the data envelopment analysis (DEA) method, make no functional assumptions and consider that the maximum that could have been produced is obtained by observing the most productive units. The underlying idea is to compare a set of similar units and then identify those that show best practices. Although the efficiency concept is not always precise, in most cases the Pareto–Koopmans definition is followed. The formal definition stated by Charnes et al. (1978) says that "a unit is fully efficient if and only if it is not possible to improve any input or output without worsening some other input or output." This definition avoids the need to explicitly specify the formal relations assumed to exist between inputs and outputs, and there is no need for prices or other weight assumptions, which are supposed to reflect the relative importance of the different inputs or outputs.

It is acknowledged and confirmed by several studies that multiple criteria decision aiding (MCDA) approaches are widely used in finance (for a comprehensive review, see Zopounidis et al. 2015). The value-based DEA method developed by Gouveia et al. (2008) is a variant of the additive DEA model (Charnes et al. 1985) with oriented projections (Ali et al. 1995), designed to overcome some of its drawbacks by applying concepts from multi-attribute utility theory (MAUT). MAUT is one of the most popular analytic tools associated with the field of decision analysis (Keeney and Raiffa 1976). In the spirit of MAUT, the inputs (factors to be minimized) and outputs (factors to be maximized) are first converted into value functions. This transformation allows dealing with negative data, which is a difficulty in classical DEA models (the Charnes–Cooper–Rhodes (CCR) and Banker–Charnes–Cooper (BCC) models).

The set of *n* DMUs to be evaluated is {*DMUj* : *j* = 1, ... , *n*}. Each *DMUj* is evaluated on *m* factors to be minimized, *xij* (*i* = 1, ... , *m*), and *p* factors to be maximized, *yrj* (*r* = 1, ... , *p*).

The measure of performance on criterion *c* is *vc*(*DMUj*), *c* = 1, ... , *q*, with *q* = *m* + *p* and *j* = 1, ... , *n*, based on a value (or utility) function *vc*(·).

Considering that *pcj* is the performance of DMU *j* on factor *c*, the value functions must be defined such that, for each factor *c*, the worst *pcj*, *j* = 1, ... , *n*, takes the value 0 and the best *pcj*, *j* = 1, ... , *n*, takes the value 1, so that all factors become factors to be maximized. The value functions are therefore defined on the range [0, 1], which overcomes the scale-dependence problem of the additive DEA model.
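The conversion of raw factors into [0, 1] value scales can be sketched as follows, assuming simple linear value functions; the three banks and their figures are made up for demonstration only:

```python
import numpy as np

# Illustrative raw data: 3 banks (rows), 2 factors to minimize, 1 to maximize.
inputs  = np.array([[4.0, 60.0], [2.0, 40.0], [3.0, 80.0]])
outputs = np.array([[1.2], [0.6], [0.9]])

def to_value_scale(data, maximize):
    """Linear value functions: worst observed performance -> 0, best -> 1."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    v = (data - lo) / (hi - lo)
    return v if maximize else 1.0 - v  # invert factors to be minimized

# After conversion every column is a factor to be maximized, on a [0, 1] scale.
V = np.hstack([to_value_scale(inputs, maximize=False),
               to_value_scale(outputs, maximize=True)])
```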

A preliminary phase of the value-based DEA method comprises the assessment of the marginal (partial) value functions on each criterion, so as to establish a global value function. According to the additive MAUT model, the value obtained is $V(DMU\_j) = \sum\_{c=1}^{q} w\_c v\_c(DMU\_j)$, where $w\_c \ge 0, \forall c = 1, \dots, q$, and $\sum\_{c=1}^{q} w\_c = 1$ (by convention). The weights *w*1, ... , *wq* considered in the aggregation are the scale coefficients of the value functions and are established such that each alternative minimizes the value difference to the best alternative (bank), according to the "min-max regret" rule.
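Under this convention, the global value of each DMU is just a weighted sum of its value scores. A minimal illustration follows; the weight vector is arbitrary, since the min-max regret weights are DMU-specific and come from solving problem (2):

```python
import numpy as np

# Value scores v_c(DMU_j) on [0, 1] (rows: DMUs, columns: criteria) -- illustrative.
V = np.array([[0.0, 0.5, 1.0],
              [1.0, 1.0, 0.0],
              [0.5, 0.0, 0.5]])

# Any non-negative weight vector summing to 1 (here chosen arbitrarily).
w = np.array([0.2, 0.3, 0.5])

global_values = V @ w  # V(DMU_j) = sum_c w_c * v_c(DMU_j)
```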

 After the preliminary phase in which the factors (to be minimized and to be maximized) are converted into value scales, the value-based DEA method can be described in two phases:

**Phase 1:** Compute the efficiency measure, *d*∗*k*, for each DMU, *k* = 1, ... , *n*, and the corresponding weighting vector, *w*∗*k*, by solving the linear problem (2).

**Phase 2:** If *d*∗*k* ≥ 0, then solve the "weighted additive" problem (3), using the optimal weighting vector resulting from Phase 1, *w*∗*k*, and determine the corresponding projected point of the DMU under evaluation.

Formulation (2) considers the super-efficiency concept (Andersen and Petersen 1993), which allows discriminating among the efficient units when assessing the *k*-th DMU (Gouveia et al. 2013):

$$\begin{aligned} \min\_{d\_k,\, w}\ & d\_k \\ \text{s.t. } & \sum\_{c=1}^q w\_c v\_c(DMU\_j) - \sum\_{c=1}^q w\_c v\_c(DMU\_k) \le d\_k, \ j = 1, \dots, n; j \ne k \\ & \sum\_{c=1}^q w\_c = 1 \\ & w\_c \ge 0, \ \forall c = 1, \dots, q \end{aligned} \tag{2}$$

The efficiency measure, *d*∗*k*, for each DMU *k* (*k* = 1, ... , *n*) and the corresponding weighting vector are calculated by solving the linear problem (2). The optimal value of the objective function, *d*∗*k*, gives the distance, in terms of difference of value, to the best of all the other DMUs (note that which DMU is best also depends on *w*), the DMU under evaluation being excluded from the reference set. If the score *d*∗*k* obtained in (2) is not positive, then DMU *k* is efficient; otherwise, it is inefficient.
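Phase 1 can be sketched with a generic LP solver. The value matrix below is a toy 3 × 2 example, and `scipy` is used here for illustration only; the decision variables are *d*∗*k* (free) and the weights (non-negative, summing to one):

```python
import numpy as np
from scipy.optimize import linprog

# Toy value matrix v_c(DMU_j) on [0, 1] (rows: DMUs, columns: criteria).
V = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.4, 0.4]])
n, q = V.shape

def phase1(k):
    """Solve problem (2) for DMU k; variables are (d_k, w_1..w_q)."""
    c = np.zeros(1 + q); c[0] = 1.0                             # minimize d_k
    others = [j for j in range(n) if j != k]
    A_ub = np.hstack([-np.ones((n - 1, 1)), V[others] - V[k]])  # (V_j - V_k).w - d_k <= 0
    b_ub = np.zeros(n - 1)
    A_eq = np.hstack([[0.0], np.ones(q)]).reshape(1, -1)        # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * q)      # d_k free, w_c >= 0
    return res.x[0], res.x[1:]                                  # d_k*, w_k*

d2, w2 = phase1(2)   # third DMU: d* = 0.2 > 0, inefficient
d0, w0 = phase1(0)   # first DMU: d* < 0, (super-)efficient
```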

In case the DMU is inefficient, Phase 2 finds an efficient target by solving the linear problem (3):

$$\begin{aligned} \min\_{\lambda,\, s}\ z\_k &= -\sum\_{c=1}^q w\_c^\* s\_c \\ \text{s.t. } & \sum\_{j=1, j \ne k}^n \lambda\_j v\_c(DMU\_j) - s\_c = v\_c(DMU\_k), \ c = 1, \dots, q \\ & \sum\_{j=1, j \ne k}^n \lambda\_j = 1 \\ & \lambda\_j, s\_c \ge 0, \ j = 1, \dots, k-1, k+1, \dots, n; \ c = 1, \dots, q \end{aligned} \tag{3}$$

The variables λ*j*, *j* = 1, ... , *k* − 1, *k* + 1, ... , *n*, define a convex combination of the value score vectors associated with the *n* − 1 remaining DMUs. The efficient DMUs entering the convex combination with λ*j* > 0 are called the "peers" of the DMU *k* under evaluation. The convex combination corresponds to a point on the efficient frontier that is better than DMU *k* by a difference of value of *sc* (slack) on each criterion *c*.
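Phase 2 can be sketched the same way. The value matrix and the optimal weights *w*∗ = (0.5, 0.5) below continue the toy example, in which the third DMU was found inefficient in problem (2); `scipy` is again used for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

# Toy value matrix; DMU k = 2 (third bank) was inefficient in problem (2),
# with illustrative optimal weights w* = (0.5, 0.5).
V = np.array([[1.0, 0.2],
              [0.2, 1.0],
              [0.4, 0.4]])
n, q = V.shape
k, w_star = 2, np.array([0.5, 0.5])

others = [j for j in range(n) if j != k]
# Decision variables: (lambda_j for j != k, s_1..s_q); minimize -sum_c w*_c s_c.
c = np.concatenate([np.zeros(n - 1), -w_star])
# Convex combination of the peers minus the slacks equals DMU k's value profile.
A_eq = np.vstack([np.hstack([V[others].T, -np.eye(q)]),       # one row per criterion
                  np.hstack([np.ones(n - 1), np.zeros(q)])])  # lambdas sum to 1
b_eq = np.concatenate([V[k], [1.0]])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (n - 1 + q))

lambdas, slacks = res.x[:n - 1], res.x[n - 1:]
target = V[k] + slacks   # projected point on the efficient frontier
```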
