*5.3. Parameter Setting*

This section defines the parameter values used by each of the techniques described in Section 4.

As explained above, the number of periods used to train the ANN for each stock is 60. The single hidden layer has sixteen neurons. In preliminary experiments, we observed that the ANN performed best when the number of hidden neurons equaled the number of inputs: with more neurons the ANN became unstable, while with fewer it lost predictive capacity. Each neuron uses the sigmoid activation function. Training was repeated *na* = 50 times for each stock, and the model with the lowest testing error was then used to predict the return at time *t* + 1.
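The restart-and-select procedure above can be sketched as follows. This is a Python illustration only (the authors' code is in Matlab and Java); the random data, the feature construction, and the use of scikit-learn's `MLPRegressor` are assumptions made for the sketch, not the paper's implementation:

```python
import warnings
import numpy as np
from sklearn.neural_network import MLPRegressor

warnings.filterwarnings("ignore")  # silence convergence warnings in this toy run

rng = np.random.default_rng(0)
# Hypothetical data: 60 training periods with 16 inputs each; the paper's
# actual feature construction is not reproduced here.
X_train, y_train = rng.normal(size=(60, 16)), rng.normal(size=60)
X_test, y_test = rng.normal(size=(10, 16)), rng.normal(size=10)

NA = 50  # number of training runs per stock
best_model, best_err = None, np.inf
for run in range(NA):
    # one hidden layer with 16 sigmoid ("logistic") neurons
    model = MLPRegressor(hidden_layer_sizes=(16,), activation="logistic",
                         max_iter=500, random_state=run)
    model.fit(X_train, y_train)
    err = np.mean((model.predict(X_test) - y_test) ** 2)  # testing error (MSE)
    if err < best_err:
        best_model, best_err = model, err
# best_model would then be used to predict the return at time t + 1
```

Because each run starts from a different random initialization, keeping the model with the lowest testing error mitigates the instability of any single training run.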

Regarding the selection of stocks, the DE used to select the factor weights that maximize the objective function shown in Equation (3) uses commonly recommended parameter values: the crossover probability was set to 0.9, the differential weight to 0.8, the population size to 200, and the number of iterations to 100. After scoring and ranking the stocks, we select only the top 5% of all the stocks originally considered, following the recommendations in [10].
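A minimal sketch of a standard DE/rand/1/bin loop with these parameter values is shown below. This is illustrative Python only; the toy objective stands in for Equation (3), which is not reproduced here:

```python
import numpy as np

def de_maximize(f, dim, pop_size=200, iters=100, F=0.8, CR=0.9, seed=0):
    """DE/rand/1/bin maximizing f over [0, 1]^dim (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals different from i
            cand = rng.choice(pop_size - 1, size=3, replace=False)
            cand[cand >= i] += 1
            a, b, c = pop[cand]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)   # differential weight F = 0.8
            cross = rng.random(dim) < CR                  # crossover probability CR = 0.9
            cross[rng.integers(dim)] = True               # keep at least one mutant coordinate
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial > fit[i]:                          # greedy replacement (maximization)
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmax(fit))
    return pop[best], fit[best]

# Toy concave objective standing in for Equation (3); optimum at x = 0.3.
weights, value = de_maximize(lambda x: -np.sum((x - 0.3) ** 2), dim=5)
```

After scoring and ranking all stocks with the resulting factor weights, only the top 5% would be retained, e.g. `n_keep = max(1, round(0.05 * n_stocks))`.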

Finally, the genetic algorithm used to address Problem (5) was described in detail in [12]; it is based on the well-known MOEA/D and adapted to deal with parameter values defined as interval numbers. We use one hundred generations as the stopping criterion, one hundred subproblems, twenty weight vectors in the neighborhood of each weight vector, a probability of 0.9 of selecting parents from the neighborhood (instead of the whole population), and a maximum of two solutions replaced by each child solution. Two confidence intervals are considered by MOEA/D as the objectives to be maximized (see Equation (5)): *θβ*30 (*x*) and *θβ*50 (*x*), following the recommendations in [12]. The constraints considered by MOEA/D are *xi* ≥ 0 and ∑ *xi* = 1.
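For reference, these MOEA/D settings and the portfolio constraints can be collected as follows. The parameter names, the clip-and-normalize constraint repair, and the bi-objective weight-vector construction are illustrative assumptions in Python, not the authors' Matlab/Java implementation:

```python
import numpy as np

# MOEA/D parameter values as stated in the text (dictionary keys are illustrative)
MOEAD_PARAMS = {
    "generations": 100,       # stopping criterion
    "n_subproblems": 100,     # one weight vector per subproblem
    "neighborhood_size": 20,  # weight vectors in each neighborhood
    "delta": 0.9,             # prob. of selecting parents from the neighborhood
    "n_replace": 2,           # max solutions replaced by each child solution
}

def repair(x):
    """Repair a portfolio so it satisfies x_i >= 0 and sum(x_i) = 1
    (simple clip-and-normalize; the authors' constraint handling may differ)."""
    x = np.maximum(np.asarray(x, dtype=float), 0.0)
    s = x.sum()
    return x / s if s > 0 else np.full_like(x, 1.0 / len(x))

def weight_vectors(n):
    """Evenly spaced weight vectors for a bi-objective decomposition; here the
    two objectives would be the confidence intervals in Equation (5)."""
    w1 = np.linspace(0.0, 1.0, n)
    return np.column_stack([w1, 1.0 - w1])
```

With 100 subproblems and 2 objectives, `weight_vectors(100)` yields one weight vector per subproblem, and each subproblem's neighborhood consists of the 20 subproblems with the closest weight vectors.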

It is worth mentioning that the code implementing the algorithms described here is an original development of the authors. The code was written in Matlab and Java, and will probably be released publicly in the form of a complete software system.
