6.1.3. Baseline Methods

Since the main focus of this paper is to examine the performance of the induced logic produced by S2SATRA, we limit our comparison to methods that produce induced logic. Although we acknowledge the capability of existing classification models, such as random forest and decision tree, we do not compare S2SATRA with them because these models do not produce a logical rule that classifies the dataset. For consistency, all experiments employ the same type of logical rule, i.e., $Q_{2SAT}$. The proposed S2SATRA will be compared with the existing logic mining models: 2SATRA [20], the energy-based 2-satisfiability reverse analysis method (E2SATRA) [22], the 2-satisfiability reverse analysis method with permutation element (P2SATRA) [30], and the state-of-the-art reverse analysis method (RA) [14]. This section discusses the implementation of each logic mining model.
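For reference, $Q_{2SAT}$ denotes a 2-satisfiability formula: a conjunction of clauses, each containing exactly two literals. A minimal illustrative example is given below; the attribute names $A$, $B$, $C$, $D$ are hypothetical and not taken from the datasets:

$$Q_{2SAT} = (A \vee \neg B) \wedge (\neg C \vee D)$$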


To make the state-of-the-art RA [14] comparable with our proposed method, calibration is required. The main calibration from the previous RA is the number of $Q_i^B$ produced by the datasets. Instead of assigning a neuron to each instance, we assign each neuron to an attribute. Neuron redundancy is also introduced to avoid the net-zero effect of the synaptic weight.
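The following is a minimal Python sketch of this calibration, offered only as an illustration of the idea; the function names, attribute labels, and weight values are assumptions, not the paper's implementation:

```python
# Illustrative sketch of the attribute-to-neuron calibration described above;
# all names and values here are hypothetical.

def assign_neurons(attributes):
    """Map each dataset attribute (rather than each instance) to one neuron."""
    return {attr: idx for idx, attr in enumerate(attributes)}


def has_net_zero(contributions, tol=1e-9):
    """Detect the net-zero effect: synaptic weight contributions that cancel,
    which is the case the redundant neuron is introduced to avoid."""
    return abs(sum(contributions)) < tol


attributes = ["A", "B", "C", "D"]       # hypothetical attribute labels
neuron_of = assign_neurons(attributes)  # {"A": 0, "B": 1, "C": 2, "D": 3}

# Two clauses contribute weights that cancel on the same synapse:
print(has_net_zero([0.25, -0.25]))      # True -> add a redundant neuron
```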

During the learning phase, learning optimization is implemented to ensure that the synaptic weight obtained is purely due to the HNN. Note that effective synaptic weight management changes the final state of the HNN, leading to a different $Q_i^B$. Since the HNN has a recurrent learning property [33], the neurons change states until $Q_i^{learn} = 1$ or until the learning threshold $NH$ is reached. According to [14], if the learning of $Q_{2SAT}$ exceeds the proposed $NH$, the HNN uses the current optimal synaptic weight for the retrieval phase. During the retrieval phase of the HNN, the neuron states are initially randomized to reduce possible bias. Unlike [22,31], no noise function is added, because the main objective of this experiment is to investigate which type of attributes retrieves the optimal final $Q_i^B$. To obtain consistent results across all 2SATRA models, the only squashing function employed by the neurons is the hyperbolic tangent activation function in [38]. By considering only one fixed learning rule, we can examine the effect of supervised learning on the 2SATRA model; a minimal sketch of this learning/retrieval cycle is given below. Tables 1–5 list the parameters involved in the experiment.
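The sketch below illustrates, in Python, the cycle just described: retry until $Q_i^{learn} = 1$ or $NH$ is exceeded, then retrieve from randomized states through a tanh squashing function. It is an assumption-laden illustration, not the authors' implementation: the clause encoding, the `satisfies`, `learn`, and `retrieve` functions, and the value of `NH` are all hypothetical.

```python
# Minimal sketch of the learning/retrieval cycle described above, assuming a
# bipolar HNN; names, encodings, and parameter values are illustrative.
import math
import random

NH = 100  # learning threshold from the text; this value is hypothetical


def satisfies(clauses, state):
    """A 2SAT clause (i, si, j, sj) is true if neuron i matches sign si
    or neuron j matches sign sj; Q_learn = 1 when every clause is true."""
    return all(state[i] == si or state[j] == sj for i, si, j, sj in clauses)


def learn(clauses, n_neurons):
    """Learning phase: retry random bipolar states until Q_learn = 1 or the
    learning threshold NH is reached; per [14], fall back to the current
    state when NH is exceeded."""
    state = [random.choice([-1, 1]) for _ in range(n_neurons)]
    for _ in range(NH):
        if satisfies(clauses, state):  # Q_learn = 1: consistent interpretation
            break
        state = [random.choice([-1, 1]) for _ in range(n_neurons)]
    return state


def retrieve(weights, bias, steps=50):
    """Retrieval phase: randomized initial states (to reduce bias), local
    field squashed by the hyperbolic tangent, then a bipolar update."""
    n = len(bias)
    state = [random.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        for i in range(n):
            h = sum(weights[i][j] * state[j] for j in range(n)) + bias[i]
            state[i] = 1 if math.tanh(h) >= 0 else -1
    return state
```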

**Table 1.** List of parameters in RA [14].


**Table 2.** List of parameters in S2SATRA.


**Table 3.** List of parameters in E2SATRA [22].


**Table 4.** List of parameters in 2SATRA [20].



**Table 5.** List of parameters in P2SATRA [30].
