### *3.1. Problem Formulation*

Let **D** = {D^(1), ..., D^(K)} be a group of K decision makers and let *G* = {**G**_1, ..., **G**_m} be a partition of **D**, with **G**_g = {D^(i_{g,1}), ..., D^(i_{g,n_g})} ⊆ **D** for g = 1, ..., m; **G**_g ∩ **G**_{g'} = ∅ if g ≠ g'; and **D** = ∪_{g=1}^{m} **G**_g. To avoid identifiability problems we impose that i_{g,1} < ... < i_{g,n_g} and i_{g,1} < i_{g',1} if g < g'.

The problem is to select the partitions *G* that best describe the opinions expressed by the decision makers about the alternative to be chosen, on the basis of the issued judgments **Y** = {**y**^(k); k = 1, ..., K}. To this end, we extend the approach of the previous section to the case of several groups, assuming that the decision makers of each group **G**_g, g = 1, ..., m, of the partition *G* have homogeneous opinions regarding the priorities of each alternative of the set **A**, so that:

$$y_{ij}^{(k)} = \mu_i^{(g(k))} - \mu_j^{(g(k))} + \varepsilon_{ij}^{(k)} \text{ with } \varepsilon_{ij}^{(k)} \sim \mathcal{N}\left(0, \sigma^{2(g(k))}\right); \; k = 1, \dots, K; \; 1 \le i < j \le n \tag{13}$$

where g(k) ∈ {1, ..., m} denotes the group of the partition *G* to which decision maker D^(k) belongs.


Finally, we take the following prior distributions for the parameters of the normal-gamma model:

$$\boldsymbol{\mu}^{(g)} = \left(\mu_1^{(g)}, \dots, \mu_{n-1}^{(g)}\right)' \Big|\, \tau^{(g)} \sim \mathcal{N}_{n-1}\left(\mathbf{0}, \frac{1}{c_0\,\tau^{(g)}}\,\mathbf{I}_{n-1}\right) \text{ with } c_0 > 0 \tag{14}$$

$$\tau^{(g)} = \frac{1}{\sigma^{2(g)}} \sim \text{Gamma}\left(\frac{n_0}{2}, \frac{n_0\, s_0^2}{2}\right) \tag{15}$$

independent for g = 1, . . . , m.
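The hierarchical model (13)–(15) for a single group can be sketched as follows. This is an illustrative simulation, not code from the paper: the values of n, n_0, s_0, c_0 and the group size are hypothetical choices, and we take the last priority as the zero reference level implied by the (n−1)-dimensional prior (14).

```python
import numpy as np

rng = np.random.default_rng(0)
n, n0, s0, c0 = 4, 2, 1.0, 0.1  # hypothetical: n alternatives, prior hyperparameters

# Design matrix X mapping mu = (mu_1, ..., mu_{n-1})' (with mu_n = 0 as reference)
# to the J = n(n-1)/2 pairwise differences mu_i - mu_j, 1 <= i < j <= n.
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
X = np.zeros((len(pairs), n - 1))
for r, (i, j) in enumerate(pairs):
    X[r, i] = 1.0                # mu_i always has a column since i <= n-2
    if j < n - 1:
        X[r, j] = -1.0           # mu_j, unless j is the reference alternative n

tau = rng.gamma(n0 / 2, 2.0 / (n0 * s0**2))            # precision tau^(g), Eq. (15)
mu = rng.normal(0.0, 1.0 / np.sqrt(c0 * tau), n - 1)   # priorities mu^(g), Eq. (14)
ng = 3                                                 # hypothetical group size
Y = X @ mu + rng.normal(0.0, 1.0 / np.sqrt(tau), (ng, len(pairs)))  # Eq. (13)
```

Each row of `Y` is the vector of J = n(n−1)/2 judgments issued by one decision maker of the group.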

### *3.2. Goodness of Fit Evaluation of G*

The selection of the best partitions *G* is made by evaluating how well they fit the issued judgments **Y**. To quantify this fit we use [**Y**|*G*], the prior marginal density of model (13)–(15), one of the standard tools of Bayesian inference for this purpose: the higher its value, the better *G* describes the opinions existing in **D** and the greater its explanatory power with respect to the issued judgments **Y**.

This density is given by:

$$\begin{aligned}
[\mathbf{Y}|G] &= \prod_{g=1}^{m}\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\}\middle|\, G\right] \\
&= \prod_{g=1}^{m}\int_{0}^{\infty}\int_{\mathbb{R}^{n-1}}\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\}\middle|\,\boldsymbol{\mu}^{(g)},\tau^{(g)}\right]\left[\boldsymbol{\mu}^{(g)}\middle|\,\tau^{(g)}\right]\left[\tau^{(g)}\right]\mathrm{d}\boldsymbol{\mu}^{(g)}\,\mathrm{d}\tau^{(g)} \\
&= \prod_{g=1}^{m}\int_{0}^{\infty}\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\tau^{(g)}\right]\mathrm{d}\tau^{(g)}
\end{aligned} \tag{16}$$

Taking into account that:

$$\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\tau^{(g)}\right] = \frac{\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\boldsymbol{\mu}^{(g)},\tau^{(g)}\right]}{\left[\boldsymbol{\mu}^{(g)}\middle|\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\tau^{(g)}\right]}$$

from (6), (13), (14) and (15) it follows that

$$\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\tau^{(g)}\right] = \frac{\left(\tau^{(g)}\right)^{\frac{J n_g + n - 1 + n_0}{2}-1}\exp\left\{-\frac{\tau^{(g)}}{2}\left[n_0 s_0^2 + \sum_{k: g(k)=g}\left(\mathbf{y}^{(k)}-\mathbf{X}\boldsymbol{\mu}^{(g)}\right)'\left(\mathbf{y}^{(k)}-\mathbf{X}\boldsymbol{\mu}^{(g)}\right) + c_0\,\boldsymbol{\mu}^{(g)\prime}\boldsymbol{\mu}^{(g)}\right]\right\}}{\left(\tau^{(g)}\right)^{\frac{n-1}{2}}\left|n_g\mathbf{X}'\mathbf{X}+c_0\mathbf{I}_{n-1}\right|^{\frac{1}{2}}\exp\left\{-\frac{\tau^{(g)}}{2}\left(\boldsymbol{\mu}^{(g)}-\mathbf{m}^{(g)}\right)'\left(n_g\mathbf{X}'\mathbf{X}+c_0\mathbf{I}_{n-1}\right)\left(\boldsymbol{\mu}^{(g)}-\mathbf{m}^{(g)}\right)\right\}} \times \frac{(2\pi)^{-\frac{J n_g}{2}}\left(\frac{n_0 s_0^2}{2}\right)^{\frac{n_0}{2}} c_0^{\frac{n-1}{2}}}{\Gamma\left(\frac{n_0}{2}\right)}\, I_{(0,\infty)}\left(\tau^{(g)}\right) \tag{17}$$

where **m**^(g) = (n_g **X**'**X** + c_0 **I**_{n-1})^{−1} **X**' Σ_{k: g(k)=g} **y**^(k) is the posterior mean of μ^(g). Expanding the quadratic form in the numerator and cancelling the terms depending on μ^(g) against the denominator, it follows that:

$$\left[\left\{\mathbf{y}^{(k)};\, k: g(k)=g\right\},\tau^{(g)}\right] = \frac{\left(\frac{n_0 s_0^2}{2}\right)^{\frac{n_0}{2}}}{(2\pi)^{\frac{J n_g}{2}}\,\Gamma\left(\frac{n_0}{2}\right)}\left|\frac{n_g}{c_0}\mathbf{X}'\mathbf{X}+\mathbf{I}_{n-1}\right|^{-\frac{1}{2}}\left(\tau^{(g)}\right)^{\frac{n_0 + J n_g}{2}-1}\exp\left\{-\frac{\tau^{(g)}}{2}\, Q^{(g)}\right\} I_{(0,\infty)}\left(\tau^{(g)}\right) \tag{18}$$

where Q^(g) = n_0 s_0^2 + Σ_{k: g(k)=g} **y**^(k)'**y**^(k) − **m**^(g)'(n_g **X**'**X** + c_0 **I**_{n-1}) **m**^(g). Substituting in (16), it follows that:

$$\begin{aligned}
[\mathbf{Y}|G] &= \prod_{g=1}^{m}\frac{\left(\frac{n_0 s_0^2}{2}\right)^{\frac{n_0}{2}}}{(2\pi)^{\frac{J n_g}{2}}\,\Gamma\left(\frac{n_0}{2}\right)}\left|\frac{n_g}{c_0}\mathbf{X}'\mathbf{X}+\mathbf{I}_{n-1}\right|^{-\frac{1}{2}}\int_{0}^{\infty}\left(\tau^{(g)}\right)^{\frac{n_0 + J n_g}{2}-1}\exp\left\{-\frac{\tau^{(g)}}{2}\, Q^{(g)}\right\}\mathrm{d}\tau^{(g)} \\
&= \frac{\left(\frac{n_0 s_0^2}{2}\right)^{\frac{m n_0}{2}}}{(2\pi)^{\frac{J K}{2}}\left(\Gamma\left(\frac{n_0}{2}\right)\right)^{m}}\prod_{g=1}^{m}\Gamma\left(\frac{n_0 + J n_g}{2}\right)\left|\frac{n_g}{c_0}\mathbf{X}'\mathbf{X}+\mathbf{I}_{n-1}\right|^{-\frac{1}{2}}\left(\frac{2}{Q^{(g)}}\right)^{\frac{n_0 + J n_g}{2}} \\
&\propto \left(n_0 s_0^2\right)^{\frac{m n_0}{2}}\left(\Gamma\left(\frac{n_0}{2}\right)\right)^{-m}\prod_{g=1}^{m}\Gamma\left(\frac{n_0 + J n_g}{2}\right)\left|\frac{n_g}{c_0}\mathbf{X}'\mathbf{X}+\mathbf{I}_{n-1}\right|^{-\frac{1}{2}}\left(Q^{(g)}\right)^{-\frac{n_0 + J n_g}{2}}
\end{aligned} \tag{19}$$

where the proportionality holds because the omitted factor does not depend on the partition *G* (note that Σ_{g=1}^{m} J n_g = JK).
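For numerical work, expression (19) is best evaluated on the logarithmic scale. The following sketch, with function and variable names of our own choosing, computes log [**Y**|*G*] for a partition whose groups' judgments are stacked as (n_g × J) arrays:

```python
import math
import numpy as np

def log_marginal(Y_groups, X, n0, s0, c0):
    """log [Y|G] from Eq. (19). Y_groups: one (ng, J) array per group of G."""
    J, p = X.shape              # p = n - 1
    XtX = X.T @ X
    total = 0.0
    for Yg in Y_groups:
        ng = Yg.shape[0]
        A = ng * XtX + c0 * np.eye(p)                  # ng X'X + c0 I_{n-1}
        m = np.linalg.solve(A, X.T @ Yg.sum(axis=0))   # posterior mean m^(g)
        Q = n0 * s0**2 + (Yg**2).sum() - m @ A @ m     # Q^(g)
        a = (n0 + J * ng) / 2
        total += ((n0 / 2) * math.log(n0 * s0**2 / 2)
                  - (J * ng / 2) * math.log(2 * math.pi)
                  - math.lgamma(n0 / 2)
                  - 0.5 * np.linalg.slogdet(ng / c0 * XtX + np.eye(p))[1]
                  + math.lgamma(a) + a * math.log(2) - a * math.log(Q))
    return total
```

Because (19) is a product over groups, the log-density decomposes into a sum of per-group terms, so moving a decision maker between groups only requires recomputing the two affected terms.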

### *3.3. Location of Opinion Groups*

Now that the evaluation of the fit of a partition *G* to the issued judgements has been established, we describe the process followed to determine the most representative partitions. We use Bayesian model selection and the Bayes factor [**Y**|*G*]/[**Y**|*G*'] as a tool for comparing two elements *G* and *G*' of ℘(**D**), the set of possible partitions of **D**.

We set a threshold β (0 < β < 1) to discriminate whether there are significant differences in the fit of two partitions *G* and *G*', so that if [**Y**|*G*']/[**Y**|*G*] < β then the degree of fit of *G*' is significantly worse than that of *G* and, therefore, *G* is more representative than *G*'. In this case, and in line with [9], we take β = 0.05.

The problem is then to determine the partitions *G* ∈ ℘(**D**) such that:

$$\frac{[\mathbf{Y}|G]}{[\mathbf{Y}|G_{\max}]} \ge \beta \tag{20}$$

where [**Y**|*G*max] = max_{*G*∈℘(**D**)} [**Y**|*G*]; condition (20) defines the 'Occam's window' of our problem [10]. The partitions in the window can be taken as starting points for subsequent negotiation processes aimed at reaching an agreement among the decision makers that is as representative as possible. In our case, we look for the partitions of the window with the smallest number of groups, since the fewer the groups, the fewer the disparate opinions and, foreseeably, the easier it is to reach representative agreements. To this end, we use an exhaustive search algorithm that calculates the value of [**Y**|*G*] for every element of ℘(**D**) using expression (19); *G*max is then determined and, from it, the partitions of Occam's window that verify (20) are identified. Other methods for consensus searching in group decision making can be found in references [11–16].
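The exhaustive search and Occam's window selection can be sketched as follows. The number of partitions of K decision makers is the Bell number B_K, so this enumeration is feasible only for small K. Names are ours, and `log_score` is a placeholder for log [**Y**|*G*] as given by (19):

```python
import math

def partitions(elems):
    """Enumerate all set partitions of a list; |P(D)| is the Bell number B_K."""
    if not elems:
        yield []
        return
    first, *rest = elems
    for p in partitions(rest):
        for i in range(len(p)):               # put `first` into an existing group
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
        yield [[first]] + p                   # or into a new singleton group

def occam_window(dms, log_score, beta=0.05):
    """Partitions G with [Y|G] / [Y|Gmax] >= beta, per Eq. (20)."""
    scored = [(log_score(p), p) for p in partitions(dms)]
    best = max(s for s, _ in scored)          # log [Y|Gmax]
    window = [p for s, p in scored if s - best >= math.log(beta)]
    window.sort(key=len)                      # prefer partitions with fewer groups
    return window
```

Sorting the window by number of groups surfaces first the partitions preferred as starting points for negotiation.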

Figure 1 shows the main steps for determining the groups with homogeneous opinions.

**Figure 1.** Steps of the proposed methodology.
