**4. Proposed Solutions**

The optimization problems are solved using different methods: binary PSO (BPSO) and GA are used to solve (9), while PSO combined with SPV and VNS is used to solve (13). In this section, we provide a detailed explanation of how these techniques are applied in this context.

#### *4.1. Binary Particle Swarm Optimization*

The BPSO was first described by Kennedy and Eberhart [23] as an adaptation of the original (continuous) PSO. The algorithm is inspired by the behaviour of a flock of birds searching for food: each candidate solution is represented as a bird flying through the search space (the optimization problem domain), and the food represents the optimal solution. At each iteration of the algorithm, the velocity of each particle is updated through the equation:

$$\mathbf{v}\_{i}[t+1] = \omega \mathbf{v}\_{i}[t] + c\_{1}r\_{1} \left( \mathbf{p}\_{i} - \mathbf{x}\_{i}[t] \right) + c\_{2}r\_{2} \left( \mathbf{p}\_{g} - \mathbf{x}\_{i}[t] \right) \tag{16}$$

where *ω* is the inertia weight, **v***i* is the velocity of candidate *i*, **x***i* is its position in the search space, **p***i* is the best position candidate *i* has ever visited in terms of the fitness function, and **p***g* is the best position found by the whole population, i.e., the former keeps an individual best position record and the latter a global best position record. The coefficients *c*1 and *c*2 are the cognitive and social acceleration constants, respectively, whilst *r*1 and *r*2 are uniformly distributed random numbers in the interval [0, 1].

In a classical PSO algorithm, the next step would be updating each candidate position using its velocity. However, to implement a binary version, the change in each candidate dimension must be either zero or one. Therefore, a sigmoid function is applied to each velocity component, and its value is interpreted as the probability of setting the corresponding bit. In this paper, we use the following sigmoid function in each dimension of **v***i*:

$$\mathcal{S}(v\_i[t]) = \frac{1}{1 + e^{-v\_i[t]}} \tag{17}$$

The sigmoid function value is then compared to a random value generated for each dimension of the candidate **x***i*, such that:

$$\mathbf{x}\_{i}[t+1] = \begin{cases} 1 & \text{if } r < \mathcal{S}(v\_{i}[t]) \\ 0 & \text{otherwise} \end{cases} \tag{18}$$

where *r* ∼ U(0, 1), i.e., *r* is a random variable uniformly distributed over the interval (0, 1).

Although the position values are constrained to {0, 1}, the particle velocity may grow indefinitely. Hence, a maximum velocity constraint may be applied to each dimension, i.e.,:

$$v\_i[t+1] = \begin{cases} v\_{\text{max}} & \text{if } v\_i[t+1] > v\_{\text{max}} \\ -v\_{\text{max}} & \text{if } v\_i[t+1] < -v\_{\text{max}} \\ v\_i[t+1] & \text{otherwise} \end{cases} \tag{19}$$
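One full BPSO update, combining the velocity update (16), the clamping (19), the sigmoid transfer (17), and the probabilistic bit assignment (18), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the parameter values (*ω* = 0.7, *c*1 = *c*2 = 1.5, *v*max = 4) and the fixed random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, for reproducibility only

def bpso_step(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5, v_max=4.0):
    """One BPSO update for a swarm of binary particles.

    x, v, p_best: arrays of shape (n_particles, n_dims); g_best: (n_dims,).
    Parameter values are illustrative, not taken from the paper.
    """
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # Velocity update, Equation (16): attraction to personal and global bests
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    # Velocity clamping, Equation (19)
    v = np.clip(v, -v_max, v_max)
    # Sigmoid transfer (17) and probabilistic bit assignment (18)
    s = 1.0 / (1.0 + np.exp(-v))
    x = (rng.random(x.shape) < s).astype(int)
    return x, v
```

Each bit is re-drawn at every iteration with probability S(*v<sub>i</sub>*[*t*]), so larger positive velocities bias the bit toward 1 and larger negative velocities toward 0.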

Furthermore, since OP1 in (9) is a constrained optimization problem, we present two alternatives to treat infeasible solution candidates. The first is simply discarding the infeasible candidates and replacing them with random feasible ones; the second is using a new fitness function defined as:

$$\mathcal{J}\_1(\Phi) = \begin{cases} \mathcal{J}\_1(\Phi) & \text{if feasible} \\ \mathcal{J}\_{\text{min}} - \mathcal{P} & \text{if infeasible} \end{cases} \tag{20}$$

where Jmin is the fitness of the worst feasible solution in the PSO population, and P is the sum of the residuals of the constraints in Equations (10) and (11), i.e.,:

$$\mathcal{P} = \sum\_{\ell=1}^{L} \sum\_{k=1}^{K\_{\ell}} \left( \sum\_{q=1}^{T\_p} \phi\_{k,q,\ell} - 1 \right) + \sum\_{\ell=1}^{L} \sum\_{q=1}^{T\_p} \left( \sum\_{k=1}^{K\_{\ell}} \phi\_{k,q,\ell} - 1 \right) \tag{21}$$
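The penalty P can be computed directly from the binary allocation tensor. The sketch below assumes φ is stored as a numpy array of shape (*K*, *T<sub>p</sub>*, *L*), with `phi[k, q, l] = 1` when user *k* in cell *l* is assigned pilot *q*; this storage layout is an assumption for illustration, and the residuals are summed exactly as written in (21).

```python
import numpy as np

def constraint_penalty(phi):
    """Penalty P of Equation (21) for a binary allocation tensor phi
    of shape (K, T_p, L), where phi[k, q, l] = 1 if user k in cell l
    is assigned pilot sequence q."""
    # Residual of constraint (10): each user gets exactly one pilot
    per_user = phi.sum(axis=1) - 1          # shape (K, L)
    # Residual of constraint (11): each pilot serves one user per cell
    per_pilot = phi.sum(axis=0) - 1         # shape (T_p, L)
    return per_user.sum() + per_pilot.sum()
```

For a feasible assignment (a permutation per cell), every inner sum equals one, so both residual terms vanish and P = 0.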

Finally, the pseudocode of the BPSO algorithm is presented in Algorithm 1.

**Algorithm 1:** BPSO


#### *4.2. PSO-Smallest Position Value*

To solve the optimization problem (13), we use the Smallest Position Value method along with the PSO algorithm (PSO–SPV). In the SPV, the real value of each problem dimension is replaced by its rank among the sorted values. In this specific problem, the SPV value designates the pilot sequence allocated to each user. The sorting process, which swaps the real values for their ordinal ranks, is performed separately for each cell in the system. To illustrate the application of the SPV method, Table 1 relates the value of each PSO particle dimension (Real Value) to the pilot sequence assignment when four dimensions (users) are considered in a single cell.

**Table 1.** Example of operation of SPV in a single cell, *K* = 4 users scenario.


Using the SPV changes the problem domain from binary to integer values, such that *θk*,ℓ = *x*, where *k* is the user (dimension) index in the SPV scheme, *x* is the SPV value, and ℓ designates the cell of interest.
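The SPV mapping for one cell can be sketched with a stable argsort. This is an illustrative sketch, not the paper's code: pilot indices are 0-based here, and the stable sort kind is what implements the tie-break by user index described later in this section.

```python
import numpy as np

def spv(real_positions):
    """Map a particle's real-valued positions (one per user in a cell)
    to a pilot-sequence assignment via the Smallest Position Value rule."""
    # Stable sort: ties between equal real values are broken by the
    # smaller user index, as required by the tie-break rule.
    order = np.argsort(real_positions, kind='stable')
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks  # ranks[k] = pilot sequence index assigned to user k
```

For example, the real values [2.1, −0.5, 1.3, 0.7] map to the pilot assignment [3, 0, 2, 1]: user 1 has the smallest real value and therefore receives the first pilot sequence.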

The PSO is used along with the SPV through the application of Equation (16) and the position update equation:

$$\mathbf{x}\_{i}[t+1] = \mathbf{x}\_{i}[t] + \mathbf{v}\_{i}[t+1] \tag{22}$$

It is important to point out that the velocity limits defined by Equation (19) are also applied in the PSO–SPV solution. Furthermore, by the nature of this approach, two users cannot be assigned the same pilot sequence unless their real values are exactly equal, since the sorting process otherwise produces distinct ranks. To resolve this situation, when two dimensions of an individual have exactly the same real value, the tie is broken by the user index: the smaller user index is sorted as the smaller value. Because the SPV mapping therefore always yields a feasible assignment, the PSO–SPV fitness function is simply the objective function of (13), without any penalty term, which means that the BPSO and PSO–SPV use different fitness functions although their goal is the same: maximizing the system spectral efficiency.

#### *4.3. Variable Neighbourhood Search*

Another technique we use to make the PSO–SPV approach even more robust is the Variable Neighbourhood Search metaheuristic, which is based on the systematic change of neighbourhoods of each candidate solution in an attempt to find better solutions near the PSO candidates.

VNS is applied to the PSO–SPV through an exchange of the sequences allocated to pairs of users in the same cell, i.e., users are randomly paired and their pilot sequences are swapped whenever VNS is triggered at a PSO–SPV iteration. Mathematically, for each pair *k*′ and *k*″ of users:

$$\begin{aligned} \theta\_{k',\ell} &= q' & \to & \theta\_{k',\ell} = q''\\ \theta\_{k'',\ell} &= q'' & \to & \theta\_{k'',\ell} = q' \end{aligned} \tag{23}$$

where *q*′ and *q*″ are the assigned pilot sequences. It is worth noting that when the number of users is odd, one of the users is left out of the VNS subroutine, and its pilot sequence remains unchanged.

To avoid a large increase in complexity, this mechanism is not performed for all individuals of the PSO–SPV population. In fact, we define *r*vns as the probability of running VNS for each candidate solution in each one of the PSO–SPV iterations. The best balance between complexity and solution quality for the *r*vns parameter is discussed in the results section.
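The VNS move of Equation (23), gated by the probability *r*vns, can be sketched for a single cell as follows. This is an illustrative sketch under stated assumptions: θ is a 1-D array of per-user pilot indices, the default `r_vns=0.3` and the random seed are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)  # fixed seed, for reproducibility only

def vns_swap(theta, r_vns=0.3):
    """Pairwise pilot exchange of Equation (23) for one cell.

    theta: 1-D array with theta[k] = pilot assigned to user k.
    With probability r_vns, users are randomly paired and each pair's
    pilots are swapped; with an odd number of users, the unpaired user
    keeps its pilot unchanged.
    """
    theta = theta.copy()
    if rng.random() >= r_vns:
        return theta  # VNS not triggered for this candidate
    perm = rng.permutation(len(theta))  # random pairing of user indices
    for a, b in zip(perm[0::2], perm[1::2]):  # an odd leftover is skipped
        theta[a], theta[b] = theta[b], theta[a]
    return theta
```

Since the move only permutes existing pilot indices, the result is always another feasible assignment, so no penalty handling is needed.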

Finally, the PSO–SPV and PSO–SPV–VNS algorithms are presented in Algorithm 2.

