### 3.2.3. Decomposition Method

We represent the encounter probabilities and social relationship values among the nodes described above as matrices. Let $M = \{m_{ij}\}$ denote the $n \times n$ encounter probability matrix; for a pair of nodes $n_i$ and $n_j$, $m_{ij} \in [0, 1]$ denotes the historical probability that node $i$ encounters node $j$. Let $S = \{s_{ij}\}$ denote the $n \times n$ social relationship matrix. Two nodes with a strong social relationship affect each other's probability of encountering the same node, and we assume that nodes are more willing to stay close to nodes with which they have strong social relations. Because the devices in an opportunistic network are mostly carried by people, it is meaningful to analyze node encounter probabilities through the social relations of the nodes. For example, suppose node $n_a$ knows nothing about node $n_c$, while node $n_b$, which has a high relation value with $n_a$, meets node $n_c$; because $n_a$ stays close to $n_b$, nodes $n_a$ and $n_c$ are then likely to meet as well. We can summarize this social process as:

$$
\tilde{M}_{ij} = \frac{\sum_{z \in \kappa(i)} M_{zj} S_{iz}}{|\kappa(i)|} \tag{6}
$$

where $\tilde{M}_{ij}$ is the predicted probability that user $u_i$ meets user $u_j$, $M_{ij}$ is the observed probability that user $u_i$ meets user $u_j$, $\kappa(i)$ is the set of users with whom user $u_i$ has social relations, and $|\kappa(i)|$ is the number of such users. The term $|\kappa(i)|$ can be merged into $S_{ij}$, since it is only the normalization term of the relation-based estimate.
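For illustration, a minimal NumPy sketch of Formula (6) follows; the function name and the convention that $\kappa(i)$ is read off the nonzero entries of row $i$ of $S$ are our own assumptions, not part of the original method:

```python
import numpy as np

def predict_encounters(M, S):
    """Formula (6): predict tilde{M}_ij by averaging the neighbors'
    encounter probabilities M_zj, weighted by the relation values S_iz."""
    n = M.shape[0]
    M_pred = np.zeros((n, n))
    for i in range(n):
        kappa = np.flatnonzero(S[i])   # kappa(i): users related to u_i (assumed nonzero entries of S)
        if kappa.size == 0:
            continue                   # no social evidence for user i
        M_pred[i] = (S[i, kappa] @ M[kappa]) / kappa.size
    return M_pred
```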

Then, the predicted probabilities that user $u_i$ encounters all users can be written in vector form as:

$$
\begin{pmatrix} \tilde{M}_{i1} & \tilde{M}_{i2} & \cdots & \tilde{M}_{in} \end{pmatrix}
= \begin{pmatrix} S_{i1} & S_{i2} & \cdots & S_{in} \end{pmatrix}
\begin{pmatrix}
M_{11} & M_{12} & \cdots & M_{1n} \\
M_{21} & M_{22} & \cdots & M_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
M_{n1} & M_{n2} & \cdots & M_{nn}
\end{pmatrix} \tag{7}
$$

Consequently, stacking the rows for all users, we obtain:

$$
\tilde{M} = SM \tag{8}
$$

where $SM$ can be interpreted as the encounter probability forecast based purely on user relevance.
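Continuing the sketch above, the following check confirms that once $|\kappa(i)|$ is folded into $S$ by row normalization, the loop of Formula (6) collapses to the matrix product of Formula (8):

```python
# Fold |kappa(i)| into S by normalizing each row by its neighbor count;
# Formula (6) then reduces to the single matrix product S @ M of Formula (8).
deg = np.maximum((S != 0).sum(axis=1, keepdims=True), 1)
assert np.allclose((S / deg) @ M, predict_encounters(M, S))
```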

The idea of probability matrix decomposition is to derive $l$-dimensional latent features that compactly represent each node (or user) from the observed encounter probabilities between nodes. Let $N \in \mathbb{R}^{l \times n}$ and $X \in \mathbb{R}^{l \times n}$ be the latent node and social-factor feature matrices, whose column vectors $N_i$ and $X_j$ represent the latent feature vectors of the node and the social factor, respectively. We define the conditional distribution of the observed encounter probabilities as:

$$
p(M \mid N, X, S, \sigma_M^2) = \prod_{i=1}^{n} \prod_{j=1}^{n} \left[ \mathcal{N}\!\left( m_{ij} \,\middle|\, g\!\left( \sum_{z \in \kappa(i)} S_{iz} N_z^T X_j \right), \sigma_M^2 \right) \right]^{I_{ij}^{M}} \tag{9}
$$

where $\mathcal{N}(x \mid \mu, \sigma^2)$ is the probability density function of the Gaussian distribution with mean $\mu$ and variance $\sigma^2$, and $I_{ij}^{M}$ is an indicator function that equals 1 if node $i$ meets node $j$ and 0 otherwise. The function $g(x) = 1/(1 + \exp(-x))$ is the logistic function, which maps the value corresponding to $N_i^T X_j$ into the interval $[0, 1]$. We use zero-mean spherical Gaussian priors [24,25] on the node and social-factor feature vectors:

$$
p(N \mid \sigma_N^2) = \prod_{i=1}^{n} \mathcal{N}(N_i \mid 0, \sigma_N^2 \mathbf{I}), \qquad p(X \mid \sigma_X^2) = \prod_{j=1}^{n} \mathcal{N}(X_j \mid 0, \sigma_X^2 \mathbf{I}) \tag{10}
$$
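As a concrete sketch, the log of the likelihood in Formula (9) can be evaluated as follows; the 0/1 array `mask` plays the role of $I_{ij}^{M}$, and the function name and dense-matrix layout are our assumptions:

```python
import numpy as np

def log_likelihood(M, S, N, X, mask, sigma2_M):
    """Log of Formula (9): Gaussian log-density of each observed m_ij
    around g(sum_z S_iz N_z^T X_j); mask is the indicator I^M_ij."""
    U = S @ N.T @ X                # (i, j) entry: sum_z S_iz N_z^T X_j
    G = 1.0 / (1.0 + np.exp(-U))   # logistic g, maps scores into (0, 1)
    return (-0.5 * (mask * (M - G) ** 2).sum() / sigma2_M
            - 0.5 * mask.sum() * np.log(2.0 * np.pi * sigma2_M))
```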

Then, through a simple Bayesian inference [26] over the node and social-factor features in Formulas (9) and (10), we have:

$$
\begin{aligned}
p(N, X \mid M, S, \sigma_M^2, \sigma_N^2, \sigma_X^2)
&\propto p(M \mid S, N, X, \sigma_M^2)\, p(N \mid \sigma_N^2)\, p(X \mid \sigma_X^2) \\
&= \prod_{i=1}^{n} \prod_{j=1}^{n} \left[ \mathcal{N}\!\left( m_{ij} \,\middle|\, g\!\left( \sum_{z \in \kappa(i)} S_{iz} N_z^T X_j \right), \sigma_M^2 \right) \right]^{I_{ij}^{M}} \\
&\quad \times \prod_{i=1}^{n} \mathcal{N}(N_i \mid 0, \sigma_N^2 \mathbf{I}) \times \prod_{j=1}^{n} \mathcal{N}(X_j \mid 0, \sigma_X^2 \mathbf{I})
\end{aligned} \tag{11}
$$

We can assume that $S$ is independent of the low-dimensional matrices $N$ and $X$, and introduce a weight $\lambda \in [0, 1]$ that balances a node's own latent term $N_i^T X_j$ against the socially propagated term. Equation (11) then becomes:

$$
\begin{aligned}
p(N, X \mid M, S, \sigma_M^2, \sigma_N^2, \sigma_X^2)
&\propto \prod_{i=1}^{n} \prod_{j=1}^{n} \left[ \mathcal{N}\!\left( m_{ij} \,\middle|\, g\!\left( \lambda N_i^T X_j + (1-\lambda) \sum_{z \in \kappa(i)} S_{iz} N_z^T X_j \right), \sigma_M^2 \right) \right]^{I_{ij}^{M}} \\
&\quad \times \prod_{i=1}^{n} \mathcal{N}(N_i \mid 0, \sigma_N^2 \mathbf{I}) \times \prod_{j=1}^{n} \mathcal{N}(X_j \mid 0, \sigma_X^2 \mathbf{I})
\end{aligned} \tag{12}
$$
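Maximizing the log of the posterior in Formula (12) amounts to minimizing a squared error on the observed entries plus Gaussian-prior penalties on $N$ and $X$. Below is a minimal gradient-descent sketch of this MAP estimation, assuming the regularization weights correspond to the ratios $\sigma_M^2/\sigma_N^2$ and $\sigma_M^2/\sigma_X^2$; all names and hyperparameter values are illustrative, not the paper's reference implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_map(M, S, mask, l=10, lam=0.8, reg_N=0.01, reg_X=0.01,
            lr=0.05, iters=500, seed=0):
    """Gradient descent on the negative log of Formula (12)."""
    n = M.shape[0]
    rng = np.random.default_rng(seed)
    N = 0.1 * rng.standard_normal((l, n))   # latent node features, l x n
    X = 0.1 * rng.standard_normal((l, n))   # latent social-factor features, l x n
    A = lam * np.eye(n) + (1.0 - lam) * S   # fuses the direct and social terms
    for _ in range(iters):
        U = A @ N.T @ X                     # u_ij = lam*N_i^T X_j + (1-lam)*sum_z S_iz N_z^T X_j
        G = sigmoid(U)
        R = mask * (G - M) * G * (1.0 - G)  # error times g'(U), observed entries only
        dP = A.T @ R                        # gradient w.r.t. P = N^T X
        gN = X @ dP.T + reg_N * N           # add Gaussian-prior penalty on N
        gX = N @ dP + reg_X * X             # add Gaussian-prior penalty on X
        N -= lr * gN
        X -= lr * gX
    return N, X
```

The learned factors then give predicted encounter probabilities via $g(\lambda N_i^T X_j + (1-\lambda)\sum_{z \in \kappa(i)} S_{iz} N_z^T X_j)$ for the unobserved pairs.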

### *3.3. Cooperation Probability and Energy Decomposition*

### 3.3.1. Cooperative Probability Calculation
