*7.1. Synaptic Weight Analysis*

Figure 1 demonstrates that the optimal 2SATRA model requires a pre-processing structure for the neurons before *Qbest* can be learned by the HNN. The currently available 2SATRA model optimizes only the logic extraction from the dataset, without considering the optimal *Qbest*. Hence, the mechanism that optimizes the neuron relationships before learning can occur remains unclear. Identifying a specific pair of neurons for *Q2SAT* will facilitate the logic mining in obtaining the optimal induced logic.

Figures 2–13 demonstrate the synaptic weights of all logic mining models in extracting logical information for F1–F12. Note that *W<sub>i</sub><sup>(1)</sup>* and *W<sub>ij</sub><sup>(2)</sup>* represent the first- and second-order connections in the clause *C<sub>i</sub><sup>(2)</sup>*. In this section, we examine the optimality of the synaptic weights with respect to the obtained accuracy values. Several interesting points can be made from Figures 2–13.
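To make concrete what these first- and second-order connections encode, the following sketch checks, for a single illustrative clause (A OR B) in bipolar representation, that the synaptic weight values commonly reported for this clause type in the Wan Abdullah construction reproduce the clause's cost function up to a constant. The weight values and helper names here are assumptions for illustration, not taken from the figures.

```python
from itertools import product

# Weights commonly associated with the clause (A OR B) in this family
# of HNN logic-mining models (assumed here for illustration):
# second-order connection W_AB and first-order connections W_A, W_B.
W_AB, W_A, W_B = -0.25, 0.25, 0.25

def cost(sa, sb):
    # Inconsistency of (A OR B): zero iff the clause is satisfied,
    # with bipolar states sa, sb in {-1, +1}.
    return 0.25 * (1 - sa) * (1 - sb)

def energy(sa, sb):
    # Hopfield energy restricted to this clause's two neurons.
    return -W_AB * sa * sb - W_A * sa - W_B * sb

# The two functions should differ only by a constant, so minimising
# the network energy is equivalent to satisfying the clause.
shift = cost(1, 1) - energy(1, 1)
for sa, sb in product((-1, 1), repeat=2):
    assert abs(cost(sa, sb) - (energy(sa, sb) + shift)) < 1e-12
    satisfied = (sa == 1) or (sb == 1)
    assert (cost(sa, sb) == 0) == satisfied
print("weights reproduce the clause cost:", W_AB, W_A, W_B)
```

Under this check, an extracted weight pattern that deviates from such values signals that the retrieved logic does not minimise the clause cost, which is why the weight plots can be read against the accuracy values.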


attribute representation, S2SATRA is able to achieve higher accuracy. A similar observation is made for the other neurons, from A to E. This implies the need for optimal attribute selection before learning of the HNN can take place.
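One minimal way to picture such attribute selection before learning is to score every candidate attribute pair by how often a 2SAT clause over that pair agrees with the recorded outcome, and keep the best pair. The scoring rule, helper names, and toy data below are purely hypothetical illustrations of the idea, not the selection mechanism used by any of the compared models.

```python
from itertools import combinations

def pair_score(xs, ys, outcome):
    # Hypothetical score: fraction of instances where the clause
    # (X OR Y) over the two bipolar attributes matches the outcome.
    hits = sum(
        ((x == 1 or y == 1) == (o == 1))
        for x, y, o in zip(xs, ys, outcome)
    )
    return hits / len(outcome)

def best_pair(data, outcome):
    # data: attribute name -> list of bipolar values (+1 / -1).
    return max(
        combinations(data, 2),
        key=lambda p: pair_score(data[p[0]], data[p[1]], outcome),
    )

# Toy bipolar dataset (illustrative only).
data = {
    "A": [1, 1, -1, -1],
    "B": [1, -1, 1, -1],
    "C": [-1, -1, -1, 1],
}
outcome = [1, 1, 1, -1]
print(best_pair(data, outcome))  # -> ('A', 'B')
```

Pairing neurons this way before the learning phase is one concrete reading of "optimal attribute selection": the HNN then learns *Qbest* over attributes that are already consistent with the outcome, rather than over an arbitrary pairing.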
