Article

Stochastic Fractal Search for Bayesian Network Structure Learning Under Soft/Hard Constraints

School of Electronic and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Fractal Fract. 2025, 9(6), 394; https://doi.org/10.3390/fractalfract9060394
Submission received: 5 May 2025 / Revised: 6 June 2025 / Accepted: 18 June 2025 / Published: 19 June 2025

Abstract
A Bayesian network (BN) is an uncertainty-processing model, grounded in probability theory and graph theory, that simulates human cognitive reasoning. Its network topology is a directed acyclic graph (DAG) that can be constructed manually from expert knowledge or learned automatically from data. However, acquiring expert knowledge is costly, limited by the experts' experience, and infeasible for large-scale, highly complex DAGs. Moreover, current data-driven learning methods suffer from low computational efficiency or reduced accuracy on such DAGs. We therefore mine fragmented knowledge from the data to ease the bottleneck of expert knowledge acquisition; this fragmented knowledge, in turn, compensates for the limitations of data-driven learning. In our work, we propose a new binary stochastic fractal search (SFS) algorithm to learn DAGs, together with a new feature selection (FS) method for mining fragmented knowledge. The fragmented prior knowledge serves as a soft constraint, and the acquired expert knowledge serves as a hard constraint; combined, they guide and constrain the proposed SFS algorithm to enhance its performance. Extensive experimental analysis reveals that our proposed method is more robust and accurate than other algorithms.

1. Introduction

A Bayesian network is a probabilistic graphical model used for knowledge representation and reasoning, mainly to represent and reason about the conditional dependencies among random variables. It represents the causal relationship between random variables through a directed acyclic graph and calculates the probability values of these relationships in combination with the probability distribution. This approach is widely used in many fields, including medical diagnosis [1,2], fault diagnosis [3], and environmental protection [4].
Currently, BN learning methods can be categorized into global structure learning and local structure learning. The former can be further divided into three main types: constraint-based, score-based, and hybrid approaches. Constraint-based approaches rely on conditional independence (CI) tests to identify causal relationships between variables. The early classical constraint-based methods were SGS and TPDA [5]; however, for a Bayesian network with n nodes, these two algorithms require O(2^n) and O(n^4) CI tests, respectively, in the worst case. To reduce this computational complexity, the PC [6] algorithm and its improved version, PC-Stable [7], were proposed; the latter is the most commonly used and serves as one of the comparison algorithms in this article. However, these methods all assume causal faithfulness and causal sufficiency. Consequently, when the sample size is insufficient, the accuracy of the CI tests cannot be guaranteed, and the accuracy of the algorithm degrades sharply.
The score-based approach treats the learning problem of the BN structure as a model selection problem and uses a score function and a search strategy to find the structure with the highest score in the search space. Common scoring functions include K2, the Bayesian information criterion (BIC), and the minimal description length (MDL), and common search spaces comprise the DAG space, the equivalent class (EC) space [8], and the ordering space. Search strategies can be divided into two main categories: exact search and approximate search. Exact search algorithms, such as the B&B [9] algorithm and the A* [10] and ILP [11] algorithms, cannot learn large-scale Bayesian network structures, and approximate learning algorithms use heuristic methods to improve search efficiency, which is a common method for large-scale BN learning. Hill climbing (HC) [5] and K2 [12], which are based on greedy search, were first proposed. Subsequently, a series of meta-heuristic algorithms, such as the genetic algorithm (GA) [13], evolutionary programming [14], ant colony optimization (ACO) [15], cuckoo optimization (CO) [16], water cycle optimization (WCO) [17], particle swarm optimization (PSO) [18,19], artificial bee colony algorithm (ABC) [20], bacterial foraging optimization (BFO) [21], and firefly algorithm (FA) [22], have been proposed to improve search efficiency and escape local optima. Among these meta-heuristic algorithms, the particle swarm optimization (PSO) algorithm is the most widely used and is used as a comparison algorithm in our experiments. Although these meta-heuristic algorithms can efficiently explore the search space, the following two challenges remain in the face of large-scale and highly complex DAGs:
  • An overly large search space will lead to low search efficiency.
  • It is prone to fall into a local optimum, resulting in a decrease in the accuracy of the final graph.
Hybrid approaches combine constraint-based and score-based methods, attempting to integrate the advantages of both: the constraint-based stage limits the search space explored by the score-based stage. The classic hybrid approach is the max–min hill-climbing algorithm (MMHC) [23], which is also one of the comparison algorithms featured in this paper. In recent years, feature selection has also been introduced into BN structure learning, e.g., the F2SL [24] algorithm, another of our comparison algorithms. It first determines the skeleton of the DAG through feature selection and then orients the edges on the basis of CI tests or scoring functions.
The goal of local structure learning is to find local structures in the form of the parent–child (PC) set or Markov blanket (MB) of a target variable. Existing MB learning algorithms can be divided into two main types: direct learning strategies and divide-and-conquer strategies. The direct strategy searches for MB variables directly on the basis of the conditional-independence characteristics of the MB, without distinguishing PC variables from spouse variables. Among direct methods, GS [25] is the first theoretically correct Markov boundary algorithm and consists of two phases, growth and shrinkage; however, its heuristic is not efficient, and the later IAMB [26] algorithm adopts a dynamic heuristic that selects the candidate node set more effectively. In addition, the IAMB variants Inter-IAMB and Fast-IAMB alternate the growth and shrinkage phases of IAMB and promptly delete false features from the MB set, improving the accuracy of the CI tests in the later stages of the run. However, GS, IAMB, and the IAMB variants use the set of all currently selected features as the conditioning set, so the number of data samples required for the test grows exponentially with the size of the MB. Direct learning methods have an efficiency advantage, but their accuracy on high-dimensional data is not ideal. The divide-and-conquer strategy exploits the direct causal relationship between parent/child variables and the target variable to learn PC variables and spouse variables separately; it is usually superior to direct learning in search accuracy and data-utilization efficiency.
The first MB learning algorithm, which is based on the divide-and-conquer strategy, is the MMMB [27] algorithm, which identifies spouse variables by identifying the V-structure. The HITON-MB [28] algorithm and the semi-interleaved HITON-MB algorithm are improved versions of the MMMB algorithm. However, the MMMB algorithm and its variants are not correct in theory. The PCMB [29] algorithm is the first divide-and-conquer method that can be proven correct in theory. This algorithm innovatively introduces the symmetry test based on “and-rule”. However, the symmetry test step increases the time complexity of the algorithm, and the subsequently proposed STMB [30] algorithm alleviates this problem. To strike a balance between data usage efficiency and time efficiency, some algorithms, such as the BAMB [31] algorithm and the EEMB [32] algorithm, adopt the strategy of alternating PC learning and spouse learning. Some local structure learning algorithms, such as the PCD-by-PCD [33] and CMB [34] algorithms, can perform causal orientation on the basis of the separation set and Meek rules while conducting PC discovery. LCS-FS [35] is a local structure learning algorithm based on feature selection. When testing the causal relationship between two nodes, a feature selection method based on mutual information without the need for a condition set is adopted, significantly improving efficiency.
Each approach has its own advantages and limitations. How to combine the constraint-based and score-based methods more effectively, and how to combine global and local structure learning so that the two play complementary roles, are open problems in Bayesian network structure learning and form the research content of this paper. In our work, a local structure learning method is used to mine prior knowledge, and the global structure learning method compensates for its own limitations by integrating that knowledge. For the local structure learning algorithm, we design a new feature selection method that improves the recall of the MB while avoiding high-order CI tests and raises precision when identifying V-structures. For the global structure optimization algorithm, our aim is a knowledge-fusion meta-heuristic that uses the obtained knowledge as constraints so that it converges rapidly and with improved accuracy.
The SFS [36] algorithm is a new type of meta-heuristic algorithm proposed in 2015 that has few parameters, fast convergence, and high accuracy. Its inspiration comes from the combination of the principles of fractal geometry in nature and the random search strategy. It effectively explores the solution space by utilizing the self-similarity and randomness of the fractal structure, avoiding local optima. At present, it has been widely applied to solve complex optimization problems in various fields, including power and energy [37], finance [38], image processing [39], and machine learning [40]. To our knowledge, these applications all focus on solving continuous optimization problems. Therefore, to introduce the SFS algorithm into the field of DAG learning, we propose a new binary SFS algorithm. The SFS algorithm consists of two main processes: the diffusion process and the update process. During the diffusion process, the method used to generate fractal shapes is random fractals, which can be generated by modifying random rules during the iterative process. For the DAG learning problem, we have redefined a new random walk strategy to solve the binary problem. During the update process, we link the strategy of updating positions among particles through information exchange with mutual learning among individuals in the DAG learning problem.
The main contributions of the paper are summarized as follows:
  • We propose a new local structure learning method that uses a combination of feature selection and CI testing to mine prior knowledge and identify partial edges.
  • We propose a binary SFS algorithm, which can integrate the obtained prior knowledge as soft/hard constraints to improve the search efficiency and accuracy.
Experiments show that the proposed local structure learning algorithm, in terms of the search space it produces and the structural priors it obtains, is well suited to soft-constraint knowledge mining for the BN structure learning problem. Moreover, the joint cooperation of soft and hard constraints improves the performance of the SFS algorithm.
The remainder of this article is organized as follows. Section 2 introduces the background and related concepts. Section 3 discusses the research design of this study. Section 4 discusses the experimental results and performance evaluation. Section 5 concludes this paper with some remarks and suggestions for future research.

2. Background

2.1. BN

A BN can be represented as a 2-tuple (G, θ), where G represents a directed acyclic graph and θ represents the parameters of the network. The directed acyclic graph G can itself be represented as a 2-tuple (V, E), where the elements of V = {X_1, …, X_n} are called nodes and the elements of E are called directed edges.

2.2. Information Theoretic Metrics

The entropy H ( X ) is a measure of the uncertainty of a variable X and is calculated via the following formula.
H(X) = −Σ_x p(x) log p(x)
The mutual information (MI) represents the amount of information shared between variables, and the mutual information of two random variables X and Y can be expressed as follows.
I(X;Y) = Σ_{x∈X} Σ_{y∈Y} p(x, y) log [ p(x, y) / ( p(x) p(y) ) ]
Conditional mutual information (CMI) refers to the mutual information between two variables X and Y when the third variable Z is known, and its calculation formula is as follows.
I(X;Y|Z) = Σ_{z∈Z} p(z) Σ_{x∈X} Σ_{y∈Y} p(x, y|z) log [ p(x, y|z) / ( p(x|z) p(y|z) ) ]
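As a concrete illustration, all three metrics can be estimated from samples via empirical frequencies. The sketch below (Python; the helper names are ours, not the paper's) uses the entropy identities I(X;Y) = H(X) + H(Y) − H(X,Y) and I(X;Y|Z) = H(X,Z) + H(Y,Z) − H(Z) − H(X,Y,Z), which are algebraically equivalent to the formulas above.

```python
import math
from collections import Counter

def entropy(xs):
    """Empirical entropy H(X) in bits, from a list of samples."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y)."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def conditional_mutual_information(xs, ys, zs):
    """I(X;Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z)."""
    return (entropy(list(zip(xs, zs))) + entropy(list(zip(ys, zs)))
            - entropy(zs) - entropy(list(zip(xs, ys, zs))))
```

For instance, with X and Y independent fair bits and Z = X XOR Y, I(X;Y) is 0 while I(X;Y|Z) is 1 bit, the classic collider pattern exploited later for V-structure detection.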

2.3. Scoring Function

In our work, the BIC is employed to score candidate networks; it is calculated via the following formula:
BIC(g, D) = log p(D; θ̂, g) − (log m / 2) · d_θ
where g represents the candidate structure, D represents the dataset, θ̂ represents the maximum likelihood estimator, m represents the sample size, and d_θ represents the dimensionality of the parameter θ.
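Because the BIC score decomposes over node families (each node with its parent set), it can be computed one node at a time. The sketch below (Python; the data layout and function name are our assumptions, not from the paper) scores one child node given a candidate parent set, using maximum-likelihood counts for the log-likelihood term and q·(r−1) free parameters per family.

```python
import math
from collections import Counter

def bic_family(data, child, parents, arity):
    """BIC contribution of one node family: log-likelihood at the MLE
    minus (log m / 2) * number of free parameters.
    `data` is a list of {variable: state} dicts; `arity[v]` is v's state count."""
    m = len(data)
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in data)
    marg = Counter(tuple(r[p] for p in parents) for r in data)
    loglik = sum(c * math.log(c / marg[pa]) for (pa, _), c in joint.items())
    q = math.prod(arity[p] for p in parents)  # number of parent configurations
    dim = q * (arity[child] - 1)              # free parameters d_theta
    return loglik - 0.5 * math.log(m) * dim
```

The full network score is then the sum of `bic_family` over all nodes, which is what makes greedy single-edge moves cheap to evaluate.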

3. Methodology

Our algorithm has two main parts. In the first part, we propose a new local structure learning algorithm. The core idea is to use feature selection and CI testing to obtain the MB of each node, merge the MBs of all nodes to obtain a restricted search space, and identify V-structures as structural priors. The search space and structural priors obtained from data are taken as soft constraints [41], and the acquired expert knowledge is regarded as a hard constraint; both guide and constrain the subsequent global structure learning algorithm. In the second part, we propose a binary stochastic fractal search algorithm for DAG learning and describe in detail the changes we made to the original stochastic fractal search algorithm. The combination of the two parts is the proposed feature-selection-based stochastic fractal search (FS-SFS) algorithm for Bayesian network structure learning. We also explain how the acquired knowledge is integrated into the search process of the SFS algorithm as soft and hard constraints.

3.1. Proposed Local Structure Learning Algorithm

Feature selection based on information theory is a common method for local MB discovery and has the advantage of avoiding higher-order CI tests. For example, the LCS-FS algorithm adopts the fast correlation-based filter (FCBF) method, a two-stage feature selection method comprising correlation evaluation and redundancy evaluation. In the correlation evaluation, an information-theoretic measure called symmetric uncertainty (SU) is used to score and rank the features; it is calculated as follows.
SU(X;Y) = 2 I(X;Y) / ( H(X) + H(Y) )
In the redundancy evaluation, the FCBF method heuristically deletes redundant features, in an order arranged by a certain rule, on the basis of an approximate MB criterion. Although the FCBF method is quite efficient, its approximate MB criterion does not consider the interaction information between features and has difficulty identifying complex redundant relationships among multiple features. It is therefore prone to missing important feature combinations and retaining excessive noise features. The interaction information matrix (IIM) is calculated as follows.
IIM(X;Y;Z) = I(X;Y) − I(X;Y|Z)
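To make the two quantities concrete, the self-contained sketch below (Python; names are ours) estimates SU and the interaction information from samples. A negative interaction value signals complementary features, which is exactly the pattern a plain redundancy test like FCBF's misses.

```python
import math
from collections import Counter

def _H(xs):
    """Empirical entropy in bits."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def _mi(xs, ys):
    return _H(xs) + _H(ys) - _H(list(zip(xs, ys)))

def _cmi(xs, ys, zs):
    return (_H(list(zip(xs, zs))) + _H(list(zip(ys, zs)))
            - _H(zs) - _H(list(zip(xs, ys, zs))))

def symmetric_uncertainty(xs, ys):
    """SU(X;Y) = 2 I(X;Y) / (H(X) + H(Y)), in [0, 1]."""
    return 2 * _mi(xs, ys) / (_H(xs) + _H(ys))

def interaction_information(xs, ys, zs):
    """IIM(X;Y;Z) = I(X;Y) - I(X;Y|Z); negative means X, Y are complementary given Z."""
    return _mi(xs, ys) - _cmi(xs, ys, zs)
```

On the XOR triple (X, Y independent bits, Z = X XOR Y), the interaction information equals −1 bit even though X and Y look pairwise independent.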
When facing a complex network or a large number of node states, the accuracy of CI-test-based methods drops greatly or the tests even fail. Therefore, we still use the FCBF method to screen the feature subset as the approximate MB, but we modify it to meet our expectations for soft-constraint knowledge mining. The proposed local structure learning algorithm has four steps: restricting the search space, primary pruning, secondary pruning, and identifying partial V-structures. In the two pruning steps we adopt the FCBF method twice, but with different correction strategies; these are named the FCBF1 and FCBF2 algorithms. The significance threshold for the chi-square test is uniformly set to 0.05. The framework of our algorithm is shown in Algorithm 1.
In the first step, we test each node's potential neighbor (PN) nodes with a 0-order chi-square test. By joining the neighbors of all nodes, we obtain an initial skeleton that serves as the initial search space (SS_1). For each edge in the initial search space, we calculate the mutual information between the two corresponding nodes. Note that we use the chi-square test rather than mutual information to build the initial search space because a mutual-information threshold is not easy to determine.
Algorithm 1: WLBRC
[Pseudocode presented as a figure in the original article.]
In the second step, we perform preliminary pruning of each node's potential neighbor set via the FCBF1 algorithm. To avoid prematurely deleting important features and to preserve the recall of the search space, we use both the SU and a first-order chi-square test for judgment, deleting an edge only when both criteria are satisfied. After pruning, we obtain a narrower search space (SS_2). The pseudocode of the FCBF1 algorithm is shown in Algorithm 2.
In the third step, we continue to use the FCBF2 algorithm and chi-square test to further prune the PN of each node. In the redundancy evaluation of the FCBF2 algorithm, we introduce the LBRC [42] method to correct the importance ranking. The LBRC algorithm consists of three stages: correlation evaluation, redundancy evaluation, and complementarity evaluation. Its calculation formula is as follows.
J_LBRC(F) = I(F;C) − max I(F; F_S) + max I(F; F_S | C)
where F_S represents the selected subset of features and C represents the target category. In the correlation evaluation, since the V-structure is closely related to feature complementarity, we use the maximum weighted conditional mutual information instead of the mutual information to calculate the SU. Similarly, I(F;C) in the above formula is also replaced by the maximum weighted mutual information. To our knowledge, this is the first time a three-stage feature selection method has been introduced into the field of DAG learning. We name the proposed local structure learning algorithm the WLBRC algorithm. The modified weighted conditional mutual information is calculated as follows.
WCMI(X;Y|Z) = 1 − [ I(X;Y) − I(X;Y|Z) ] / min( H(X), H(Y), H(Z) )
In accordance with the LBRC criterion, we greedily select the node with the maximum LBRC value and delete any candidate node that the chi-square test deems independent. After a node is selected, we further try to find a conditioning set that renders it independent of the target node: we greedily add the node with the largest positive interaction information with the two nodes into the conditioning set until the chi-square test fails. Nodes that satisfy the SU condition and for which no separating conditioning set is found are treated as neighbors of the target node. Finally, for the asymmetric branches in SS_3, we adopt the OR rule to improve the recall of the search space and the AND rule to limit PN, increasing the precision of the subsequent V-structure recognition. The FCBF2 algorithm is shown in Algorithm 3.
Algorithm 2: FCBF1
[Pseudocode presented as a figure in the original article.]
In the fourth step, we identify V-structures according to the complementarity between features. For any node T, we find the pair of neighbor nodes X and Y with the least interaction information. If this minimum interaction information is less than 0, we conclude that there is a V-structure X → T ← Y. Then, according to this V-structure, other neighbor nodes are distinguished as parents or children. The specific pseudocode for determining orientation is shown in Algorithm 4.
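The step above can be sketched as follows (Python; the sample layout and function names are our assumptions). For a target T we scan its neighbor pairs for the minimum interaction information and declare a V-structure only when that minimum is negative:

```python
import math
from collections import Counter
from itertools import combinations

def _H(xs):
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def _iim(x, y, z):
    """Interaction information I(X;Y) - I(X;Y|Z), via joint entropies."""
    mi = _H(x) + _H(y) - _H(list(zip(x, y)))
    cmi = (_H(list(zip(x, z))) + _H(list(zip(y, z)))
           - _H(z) - _H(list(zip(x, y, z))))
    return mi - cmi

def find_v_structure(samples, target, neighbors):
    """Return the pair (X, Y) forming X -> target <- Y, or None.
    `samples[v]` is the list of observed states of variable v."""
    best_pair, best_val = None, 0.0
    for x, y in combinations(neighbors, 2):
        val = _iim(samples[x], samples[y], samples[target])
        if val < best_val:
            best_pair, best_val = (x, y), val
    return best_pair
```

On collider data (T = X XOR Y with X, Y independent) the pair is reported; on chain data, where conditioning on T removes rather than creates dependence, no V-structure is returned.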

3.2. Proposed Binary SFS Algorithm

In this section, we make reasonable changes to the two processes of the SFS algorithm to adapt it to binary problems. For the diffusion process, the core idea is fractal search: each particle randomly walks around its position, the best generated particle is retained, and the rest are discarded. This process enhances the local exploitation capability and helps prevent falling into a local optimum. For the update process, the core idea is to update positions through communication between particles. This process enhances the global exploration ability and reduces the convergence time.
Algorithm 3: FCBF2
[Pseudocode presented as a figure in the original article.]
Algorithm 4: VD
[Pseudocode presented as a figure in the original article.]

3.2.1. Diffusion Process

To simulate the diffusion process, the Gaussian random walk is most commonly used because of its strong local exploitation capability. The following formulas describe the commonly used Gaussian walk strategy.
GW_1 = Gaussian(μ_BP, σ) + (ε × BP − ε′ × P_i)
GW_2 = Gaussian(μ_P, σ)
σ = | (log(g) / g) × (P_i − BP) |
where ε and ε′ are random numbers between 0 and 1, and BP and P_i represent the best position found so far and the position of the i-th particle, respectively. The means of the two Gaussian distributions are equal to |BP| and |P_i|, respectively, and σ decreases dynamically with an increasing number of iterations g.
For the binary DAG learning problem, we adopt a new chemotactic walk strategy to replace the Gaussian random walk. In the chemotactic walk, we perform three operations in sequence: chemotactic edge addition, chemotactic edge deletion, and chemotactic edge reversal. Since the complete random walk will cause the search convergence speed to slow down, we consider that each particle randomly selects a node before the start of the chemotactic walk and performs only the chemotactic operation on its parent set. The selection of nodes by the three chemotaxis operations is not completely random; rather, they are selected one by one according to the preset random sequence. If node X is selected, we define three chemotaxis operations as follows:
  • Chemotactic edge addition: Greedily add nodes to the parent node set of X if this action will cause the BIC score to increase.
  • Chemotactic edge deletion: Greedily remove nodes from the parent node set of X if this action will cause the BIC score to increase.
  • Chemotactic edge reversal: Greedily reverse edges between X and its parent nodes if this action causes the BIC score to increase.
A chemotactic walk can efficiently explore the search space around a particle. However, as the algorithm runs, the starting points of all particles become identical, which hinders escape from local optima. We therefore add two kinds of jump operations before the chemotaxis begins: deleting one parent node of the target node X, or reversing one parent node of X. For the selected node X, each particle randomly chooses either to perform the chemotactic walk from its original position or to perform a jump operation first and then the chemotactic walk. These three chemotactic walk variants exploit the local search space more fully and help escape local optima. For large-scale, complex DAGs, unrestricted fractal search significantly increases the cost of the algorithm; in the original SFS algorithm, the number of Gaussian walk steps is likewise limited to a constant. In the chemotactic walk, we limit the number of steps of each of the three chemotactic operators to the same constant. The maximum number of walks in the same direction (denoted m_k) was set to 10 in our experiments.
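A minimal sketch of the greedy add/delete operators described above (Python; the parent-set data layout and the `score` callback are our assumptions, and acyclicity and hard-constraint checks are omitted for brevity — reversal is analogous, swapping the edge direction and rescoring):

```python
def chemotactic_edge_addition(parents, x, candidates, score, max_steps=10):
    """Greedily add nodes to the parent set of x while the score improves."""
    current, steps = score(parents), 0
    for v in candidates:
        if steps >= max_steps or v == x or v in parents[x]:
            continue
        parents[x].add(v)
        trial = score(parents)
        if trial > current:
            current, steps = trial, steps + 1
        else:
            parents[x].discard(v)   # revert a non-improving move
    return current

def chemotactic_edge_deletion(parents, x, score, max_steps=10):
    """Greedily remove parents of x while the score improves."""
    current, steps = score(parents), 0
    for v in list(parents[x]):
        if steps >= max_steps:
            break
        parents[x].discard(v)
        trial = score(parents)
        if trial > current:
            current, steps = trial, steps + 1
        else:
            parents[x].add(v)       # revert a non-improving move
    return current
```

With the BIC's family decomposition, `score` only needs to rescore the family of `x` at each trial, which keeps the walk cheap.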

3.2.2. Update Process

The update process simulates how particles update their positions on the basis of other particles' positions. To explore the search space better, the update process is divided into two parts, which adopt different random methods to update each dimension or all dimensions of a particle. In the first step, we sort the particles by the fitness function and assign each a probability value. The formula is as follows:
Pa_i = rank(P_i) / nPop
where rank(P_i) represents the ranking of particle P_i and nPop represents the number of particles. Particles in worse positions are assigned higher probability values, making them more likely to update their positions. For each dimension j of particle P_i, we generate a random number ε between 0 and 1. When the probability value Pa_i assigned to the particle is less than ε, we update dimension j of the particle's position. The method adopted for updating is as follows:
P_i(j) = P_r(j) − ε × ( P_t(j) − P_i(j) )
where P r and P t are two particles randomly selected apart from P i .
In the second step, we update all the dimensions of the particles on the basis of the positions of other particles. This can enhance the global exploration ability and increase the diversity of the population. First, we sort all the particles again, assign probability values, and update the positions of the particles according to the following formula.
P_i′ = P_i − ε̂ × ( P_t − BP ),  if ε ≤ 0.5
P_i′ = P_i + ε̂ × ( P_t − P_r ),  if ε > 0.5
where ε̂ and ε are random numbers between 0 and 1. During the entire update process, the new positions of the particles are generated through information exchange with other particles, which is similar to the learning mechanism in DAG learning heuristics. To accelerate convergence, we limit the number of information-exchange partners to one: in the first step, each dimension is learned from a random particle; in the second step, all dimensions are learned from a random particle or from the optimal particle. We redefine the update formulas as follows:
P_i(j) = P_i(j) ⊗ [ rand ⊕ ( P_t(j) ⊗ P_i(j) ) ]
P_i = P_i ⊗ [ Rand ⊕ ( P_i ⊗ BP ) ],  if ε ≤ 0.5
P_i = P_i ⊗ [ Rand ⊕ ( P_i ⊗ P_t ) ],  if ε > 0.5
where ⊕ and ⊗ represent the logical AND and XOR operations, respectively, and Rand represents a random vector or matrix with elements of 0 or 1. Since the position P of a particle is a matrix with elements of 0 or 1, the above three formulas keep it binary, which conforms exactly to the process of DAG structure learning. This is a common method for binarizing DAG learning methods [17].
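Under one consistent reading of these operators (XOR between binary positions yields the set of disagreeing bits, and ANDing with a random mask selects a subset of them), each update flips a random subset of exactly those bits where the particle disagrees with its guide (P_t or BP), so every output bit equals either the old bit or the guide's bit. A sketch of that reading (Python; names are ours):

```python
import random

def binary_update(p_i, guide, rng=random):
    """p_i XOR (rand AND (p_i XOR guide)): flip a random subset of the
    bits where particle p_i and the guide particle differ."""
    diff = [a ^ b for a, b in zip(p_i, guide)]    # bits where they disagree
    mask = [rng.randint(0, 1) & d for d in diff]  # random subset of those bits
    return [a ^ m for a, m in zip(p_i, mask)]
```

In DAG learning terms, each flipped bit adds or removes one directed edge that the guide structure has (or lacks), so the particle moves partway toward its guide while bits on which they already agree are left untouched.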

3.2.3. Knowledge Fusion

The hard constraints are given directly as proportions in our experiments and consist of two parts: the positive constraint rate p and the negative constraint rate q. New individuals generated in each generation must satisfy the hard constraints; otherwise, they are regarded as illegal and are not updated in that iteration. In chemotactic walks, we also apply hard constraints to forbid unreasonable moves. The soft constraints comprise a series of restricted search spaces (SS_1, SS_2, and SS_3) and the identified V-structures. In our experiments, SS_2 is used as the search space for the SFS algorithm; naturally, the search space excludes the negative edge constraints among the hard constraints. When learning the DAG, the initial population must be generated first. In our algorithm, the initial population is obtained by performing hill climbing within the narrower space SS_3, starting from the directed edges contained in the acquired knowledge. The pseudocode of the SFS algorithm is shown in Algorithm 5. In each generation, each particle successively undergoes the diffusion and update processes; when the stopping condition is reached, the algorithm stops and outputs the optimal structure.
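Checking a candidate structure against the hard constraints is a simple set test; a sketch (Python; the edge-set representation and function name are our assumptions):

```python
def satisfies_hard_constraints(edges, required, forbidden):
    """A candidate DAG (a set of (parent, child) pairs) is legal only if it
    contains every positively constrained edge and no forbidden edge."""
    return required <= edges and not (edges & forbidden)
```

Individuals failing this check would be treated as illegal and skipped for the current iteration, mirroring the rule described above.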
Algorithm 5: Stochastic Fractal Search
[Pseudocode presented as a figure in the original article.]

4. Experiments

In this section, we first evaluate the performance of the WLBRC method in mining soft-constraint knowledge: we compare it with six MB discovery algorithms to evaluate the search space it produces and with six algorithms that can identify V-structures to evaluate V-structure identification. The six MB discovery algorithms are IAMB, STMB, PCMB, BAMB, LCS-FS, and MMPC [23]. The six V-structure algorithms are GS, CMB, PCD-by-PCD, LCS-FS, PC-Stable, and F2SL_c [24]. Then, we evaluate the performance of the FS-SFS algorithm on different networks and datasets. Finally, the FS-SFS algorithm is compared with five known BN structure learning algorithms on different datasets: PC-Stable, GS, F2SL, MMHC, and BNC-PSO. All the algorithms are run in MATLAB R2020a, and all the following experiments are performed on an AMD 1.7 GHz CPU with 16 GB of RAM.

4.1. Datasets and Evaluation Metrics

To evaluate the performance of the FS-SFS algorithm, we selected six standard Bayesian networks from the BNLEARN repository (https://www.bnlearn.com/bnrepository/, accessed on 29 April 2025) and collected 1000, 3000, 5000, and 10,000 samples for each network. The six Bayesian networks are summarized in Table 1. To better test the proposed algorithm, we selected networks with relatively many nodes: in the BNLEARN repository, Alarm is a medium network; Hepar2 and Win95pts are large networks; and Munin, Andes, and Pigs are very large networks.
To evaluate the search performance of the SFS algorithm, we adopt the following indicators:
  • BIC: The BIC score of the output optimal network.
  • AE: The number of edges in the output optimal network that were incorrectly added.
  • DE: The number of edges in the output optimal network that were incorrectly deleted.
  • RE: The number of edges in the output optimal network that were incorrectly reversed.
  • SHD: The structural Hamming distance between the output structure and the original structure.
  • RT: The running time of the SFS algorithm.
  • F1: The evaluation index of graph accuracy; its calculation formula is F1 = 2 × Precision × Recall / ( Precision + Recall ).
Two main performance indicators were used to evaluate the proposed feature selection method: precision (P) and recall (R). Precision is the number of correct edges in the set divided by the total number of edges in the set, and recall is the number of correct edges in the set divided by the number of edges in the original set. Recall and precision often have an inverse relationship: increasing recall may reduce precision, and vice versa. The search space should follow the principle of recall priority, because high recall improves the completeness of the search space, which is crucial to the subsequent score search algorithm. V-structure orientation should follow the principle of precision priority, because high precision improves the search efficiency of the subsequent score search algorithm. To quantitatively evaluate each algorithm on the search space and on V-structure orientation, we adopt the more general form F β of the F1 score to express these different preferences for precision and recall, which is defined as
F β = (1 + β²) × P × R / (β² × P + R)
When β > 1, the recall rate has a greater impact, and when β < 1, the precision rate has a greater impact. In our experiments, β was set to 5 when comparing search spaces and to 0.2 when comparing V-structures.
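To illustrate how β shifts the preference, a direct implementation of the formula above (the example P/R values are illustrative, not from the paper's tables):

```python
def f_beta(precision, recall, beta):
    """General F-measure: beta > 1 favors recall, beta < 1 favors precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# The same (P, R) = (0.5, 1.0) pair scores very differently under the
# two settings used in the experiments:
f5 = f_beta(0.5, 1.0, 5)      # recall-priority score (search space)
f02 = f_beta(0.5, 1.0, 0.2)   # precision-priority score (V-structures)
```

Here the recall-weighted score is dominated by the perfect recall, while the precision-weighted score is pulled down toward the mediocre precision, matching the stated preferences for the two comparisons.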

4.2. Soft Constraint Knowledge Mining

From the previous section, we know that soft constraint knowledge contains a series of search spaces and sets of directed edges. We report the precision and recall rates in each search space and the set of directed edges in Table 2.
For search spaces, higher recall indicates better completeness, and higher precision improves search efficiency. Comparing SS1, SS2, and SS3, we find that the precision of SS1 is too low, which reduces search efficiency, whereas SS3 has the lowest recall, which harms the graph accuracy of the final output network. Therefore, we adopt SS2 as the search space. For SS2, the recall on the Alarm and Pigs networks fluctuates very little compared with that of SS1; on the other four networks, however, the recall decreases to varying degrees, which may cause the final output network to lose some edges. For SS3, the precision is relatively high, and the initial population obtained by hill climbing within it has a very high score.
For the V set, the precision indicates the accuracy of the identified direction. The higher the precision is, the more reliable the prior knowledge is. The recall indicates the sufficiency of soft constraint knowledge. The higher it is, the more abundant the soft constraint knowledge is. For the Alarm, Win95pts, Andes, and Pigs networks, the precision exceeds 90%, which indicates that the orientation of the V-structure is very reliable for these four networks. For the Hepar2 network and the Munin network, the recall is significantly lower than that of the other four networks, and the precision is also lower than that of the other four networks. This indicates that sufficient and reliable prior knowledge has not been mined on these two networks.
To further illustrate the performance of the proposed algorithm in soft constraint knowledge mining, we compared the results of six local structure learning algorithms in terms of the search space and V-structure identification. The adopted dataset sizes are 1000 and 10,000, the minimum and maximum sizes collected in our experiments.
Table 3 reports the performance comparison with six local MB discovery algorithms in terms of the search space. For the search space, we prioritize recall over precision; F5 is the harmonic value we set on the basis of this preference. As the sample size increases from 1000 to 10,000, the recall of all the algorithms improves significantly, except for LCS-FS, whose recall is also significantly lower than that of the other algorithms on most datasets. The reason is that LCS-FS adopts the FCBF method for approximate MB discovery; as explained in the previous section, this method is prone to incorrectly deleting important features, so its recall cannot be guaranteed.
The IAMB algorithm adopts a direct learning strategy, whereas the STMB, PCMB, and BAMB algorithms adopt a divide-and-conquer strategy. In theory, the direct strategy offers high computational efficiency but poor accuracy on high-dimensional data; our experimental results show that this is not absolute. The IAMB algorithm is significantly superior to the PCMB and BAMB algorithms in terms of time performance but loses to the STMB algorithm on the Win95pts and Andes networks. The precision of the STMB algorithm is significantly lower than that of the IAMB, PCMB, and BAMB algorithms. MMPC is currently a commonly used search-space construction algorithm in Bayesian network heuristic methods. It favors precision when establishing the search space, a preference shared by the other comparison algorithms. However, improving precision often reduces recall; among the comparison algorithms, STMB, which has the lowest precision, maintains a relatively high recall. Our aim here is not to expose the shortcomings of the other algorithms but to highlight that they are ill-suited for establishing the search space, which motivated us to develop a new local structure learning algorithm.
Table 4 reports the performance comparison with six algorithms in identifying V-structures. For V-structure identification, we prioritize precision over recall; F0.2 is the harmonic value we set according to this preference. The GS algorithm achieves high precision on the five networks other than Munin. However, its low precision on the Munin network and its low recall on all datasets indicate that its V-structure identification is weaker than that of the other algorithms. Both the LCS-FS and F2SL_c algorithms adopt the FCBF method, so the recall of their V-structures cannot be guaranteed; in particular, on the Hepar2 network, the recall is only 1.63% and does not increase when the sample size grows to 10,000. This motivated us to introduce the LBRC method to rank feature importance and avoid omitting important features. The time performance of the CMB and PCD-by-PCD algorithms is significantly worse than that of the other algorithms. The PC-Stable algorithm is a widely recognized constraint-based DAG learning method, but its precision on the Munin1000 dataset is only 5.51%.
To comprehensively illustrate the performance of the WLBRC algorithm in mining soft constraint knowledge, Table 5 reports the number of wins and losses of the proposed algorithm against the other algorithms on the search space and on V-structure identification. For the search space, the WLBRC algorithm outperforms the other algorithms in terms of time and recall and is only on par with MMPC on the composite indicator F5. For the V-structure, the WLBRC algorithm outperforms the other algorithms on the composite indicator F0.2 and loses only to the F2SL algorithm in terms of time and precision. However, the F2SL algorithm adopts the FCBF method; its performance on the search space is consistent with that of the LCS-FS algorithm, and its recall is lower than that of the WLBRC algorithm. Overall, the WLBRC algorithm sacrifices precision in the search space to ensure recall and sacrifices recall in V-structure identification to ensure precision. This preference leaves some indicators inferior to those of other algorithms, but we believe the proposed algorithm is better suited to soft constraint knowledge mining in DAG learning.

4.3. Learning BNs via the FS-SFS Algorithm

The parameters of the FS-SFS algorithm are few and simple. As shown in Table 6, these parameters can be set directly without repeated optimization. To evaluate the performance of the SFS algorithm, we report the experimental results under different Bayesian network structure training sets in Table 7. For each dataset, we report the average and standard deviation of each metric after 10 runs.
From the perspective of the BIC score, the standard deviations of the FS-SFS algorithm on the four datasets of the Alarm and Pigs networks are 0, which indicates that the algorithm stably learns a network structure with the same BIC score on these two networks. Interestingly, on Alarm1000, although the acquired networks have the same BIC score, the F1 score and RE fluctuate, which indicates that networks with different structures can share the same BIC score. For the other four networks, except on Munin3000, Munin5000, Munin10000, and Win95pts5000, the standard deviations of the BIC scores are all in the single digits, which is relatively small. An important reason for the BIC score fluctuation on the Munin network is that its complex structure prevents sufficient soft constraint knowledge from being mined from the data, thereby changing the final output structure. Overall, the SFS algorithm is very stable in the score search.
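For reference, the BIC fitness used here can be sketched as the standard decomposable BIC for complete discrete data: per-family log-likelihood from the counts N_ijk minus a (log N)/2 penalty per free parameter. This is a generic implementation under that assumption, not the authors' MATLAB code:

```python
import math
from collections import Counter

def bic_score(data, parents, card):
    """Decomposable BIC of a discrete BN.

    data: list of dicts mapping variable -> state
    parents: dict mapping variable -> list of parent variables
    card: dict mapping variable -> number of states
    """
    n = len(data)
    score = 0.0
    for v, pa in parents.items():
        nij = Counter()   # counts per parent configuration
        nijk = Counter()  # counts per (parent configuration, child state)
        for row in data:
            cfg = tuple(row[p] for p in pa)
            nij[cfg] += 1
            nijk[(cfg, row[v])] += 1
        # maximum-likelihood log-likelihood of this family
        ll = sum(c * math.log(c / nij[cfg]) for (cfg, _), c in nijk.items())
        q = 1
        for p in pa:
            q *= card[p]  # number of parent configurations
        score += ll - 0.5 * math.log(n) * q * (card[v] - 1)
    return score
```

Because the score decomposes over families, adding or reversing a single edge only requires rescoring the affected nodes, which is what makes score-based search over many candidate DAGs tractable.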
From the point of view of structural errors, the SHD tends to decrease with increasing sample size and fluctuates very little around the mean. For the Alarm and Pigs networks, the structural errors of the learned networks have almost no fluctuations. Among the other four networks, the average value of structural errors is relatively large and concentrated in DE and AE. By observing the experimental results in the previous section, we can determine that the reason is the low recall of the search space. With increasing sample size, the recall of the search space increases, and DE and AE decrease. This finding indicates that increasing the sample size is conducive to restoring the wrongly deleted edges in the learning network, and the addition of the correct edges also limits the entry of the wrong edges to a certain extent. Furthermore, we find that the standard deviations of the structural errors AE, DE, and RE are all very small relative to the mean. This indicates that the structural errors between the learned networks are very small; that is, the output results of the SFS algorithm have good stability in terms of graph accuracy. With increasing sample size, the F1 score of the learning network also tends to increase, which indicates that appropriately increasing the sample size is helpful for improving the graph accuracy of our algorithm.
From the point of view of running time, with increasing sample size, the running time did not surge sharply but increased slowly, which indicates that our algorithm is suitable for processing large-scale datasets. However, as the network scale increases, the running time of the algorithm increases significantly, which indicates that the time performance of our algorithm still needs to be further optimized.
To test the impact of parameter changes on the performance of the algorithm, we perturbed the three hyperparameters (p, q, nPop) up and down. Table 8 reports the performance after these changes on the Alarm and Hepar2 networks. The sample sizes were 1000 and 10,000, and each parameter setting was run only once. For the four datasets, changing nPop produces no obvious change in any performance index, which indicates that the algorithm is not sensitive to nPop. For p, all three performance indicators change significantly: reducing p increases the BIC score and SHD while reducing the F1 score, whereas increasing p increases the F1 score and reduces the BIC score and SHD. Theoretically, the higher the positive constraint rate p of expert knowledge is, the better the algorithm should perform; the inconsistency between the F1 score and the BIC score arises because the data are not faithful to the underlying network. For q, the three performance indicators change, but not markedly; the trend is that the smaller q is, the higher the BIC score. The reason is that perturbing q shrinks or expands the search space, and a change in the search space affects the output of the algorithm. This sensitivity to the search space motivated us to develop new local structure learning algorithms.

4.4. Comparison with Some Other Algorithms

To make the comparison fair, the FS-SFS algorithm integrates only soft constraint knowledge, with the hard constraints disabled (set to 0). Furthermore, for the BNC-PSO algorithm, which is also a meta-heuristic, we integrate the same soft constraint knowledge, so that it has the same initial population and search space. For the two constraint-based algorithms, PC-Stable and GS, we still calculate BIC scores for convenient comparison. Table 9 and Table 10 present the BIC scores and F1 scores, respectively, of the output structures of each algorithm on the different datasets, with the best result on each dataset shown in bold. Since the FS-SFS algorithm is a score-based algorithm, we report a two-sided Wilcoxon signed-rank test of the BIC scores in Table 9 to check whether the results are statistically reliable. The p-value of the test is marked with an asterisk after the BIC score of each comparison algorithm.
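The signed-rank test pairs the per-dataset BIC scores of FS-SFS with those of a comparison algorithm. A self-contained sketch using the normal approximation is shown below; this simplified version is only illustrative (libraries such as scipy.stats.wilcoxon additionally provide exact small-sample p-values):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank p-value via the normal approximation."""
    diffs = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(diffs)
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks over ties in |diff|
        j = i
        while j + 1 < n and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

A small p-value indicates that the paired score differences are systematically one-sided rather than random fluctuation, which is the sense in which the table's asterisked entries are "statistically reliable".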
As shown in Table 9, the BIC scores of the FS-SFS algorithm on all datasets are higher than those of the other algorithms, which indicates that the SFS algorithm has a strong global search ability in score search. For the constraint-based algorithms GS and PC-Stable, the BIC score of the output network is significantly lower than that of the algorithm based on score search, especially when the network scale is large, which makes the gap more obvious. Furthermore, the PC-Stable algorithm cannot calculate the score on five datasets. The four score-based algorithms are combinations of local search methods and global search methods. Among them, the F2SL algorithm and the MMHC algorithm adopt the same global search method (hill climbing method). The F2SL algorithm outperforms the MMHC algorithm on the Alarm and Munin networks but loses to the MMHC algorithm on the other four networks. This finding indicates that the local search methods adopted by both have low robustness and cannot guarantee good performance when facing different BNs. The F2SL algorithm and the MMHC algorithm both lose to the meta-heuristic BNC-PSO algorithm and the FS-SFS algorithm on all networks. This indicates that the meta-heuristic method may have better global search capabilities in terms of score search than the hill climbing method does. Since the BNC-PSO algorithm and the FS-SFS algorithm adopt the same local search method, we believe that the global search capability of the latter is greater than that of the former.
As shown in Table 10, the FS-SFS algorithm achieves the highest F1 score on 14 of the 24 datasets. Interestingly, the datasets on which the F1 score wins differ from those on which the BIC score wins; one important reason for a high BIC score with a low F1 score is that the data are not faithful to the underlying BN. Since the compared algorithms are either constraint-based or integrate local search methods, the F1 score of each algorithm improves as the sample size increases. Among the constraint-based algorithms, the F1 scores of the GS algorithm on the Munin and Pigs networks are significantly lower than those of the other algorithms, and its F1 scores on the other four networks are also lower than those of the PC-Stable algorithm; clearly, PC-Stable performs better than GS. However, when the sample size is insufficient, the performance of the PC-Stable algorithm is severely challenged: on the Munin1000 dataset, its F1 score is 0.1180, much lower than those of the score-based algorithms, whereas on the Munin10000 dataset it reaches 0.5380, surpassing the F2SL, MMHC, and BNC-PSO algorithms. This indicates that the performance of constraint-based algorithms is constrained by the sample size. Similarly, the four score-based algorithms integrate local search methods, so their performance is also affected by the sample size. Comparing their F1 scores at a sample size of 1000, the FS-SFS algorithm wins three times and the MMHC algorithm twice, which indicates that FS-SFS is better suited to small-sample datasets.
The above experimental results show that the FS-SFS algorithm under soft constraints has strong global search ability in the score search, and the graph accuracy of its learned networks is also superior to that of the other comparison algorithms. However, when the sample size is insufficient or the DAG is large, a high BIC score of the output structure often comes with a reduced F1 score. The hard constraints of expert knowledge can alleviate this problem, yielding high-scoring structures with high graph accuracy and thus strong robustness. In conclusion, both soft and hard constraints significantly influence the performance of the SFS algorithm, and the proposed feature-selection-based local structure learning algorithm is well suited to soft constraint knowledge mining in BN structure learning.

5. Conclusions and Future Research

In this paper, a new binary SFS algorithm is proposed to solve the BN structure learning problem. Moreover, a new local structure learning method based on feature selection and conditional independence testing is proposed, which achieves high recall in the search space and high precision in causal orientation. The soft constraints obtained via this local structure learning method and the hard constraints derived from expert knowledge are integrated into the SFS algorithm as prior knowledge, and the performance of the SFS algorithm under these soft/hard constraints is explored. The experimental results show that the proposed SFS algorithm has better global search ability and graph accuracy than the other algorithms. In the future, we will optimize the mining and application strategies of soft constraint knowledge, as well as the fractal strategy of the SFS algorithm, to improve its time performance.

Author Contributions

Conceptualization, Y.D.; methodology, Y.D.; software, Y.D.; validation, Y.D.; formal analysis, Y.D. and Z.W.; investigation, Y.D.; resources, Y.D.; data curation, Y.D. and Z.W.; writing—original draft preparation, Y.D.; writing—review and editing, Y.D.; visualization, Y.D.; supervision, X.G.; project administration, Y.D. and Z.W.; funding acquisition, X.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61573285), the Fundamental Research Funds for the Central Universities, China (No. G2022KY0602), and the key core technology research plan of Xi’an, China (No. 21RGZN0016).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The true networks of all datasets are known, and they are publicly available (http://www.bnlearn.com/bnrepository, accessed on 1 May 2025).

Acknowledgments

I have benefited from the presence of my supervisor and classmates. I am very grateful to my supervisor Xiaoguang Gao, who gave me encouragement, careful guidance, and helpful advice throughout the writing of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BIC: Bayesian information criterion
BN: Bayesian network
CI: Conditional independence
CMI: Conditional mutual information
DAG: Directed acyclic graph
FCBF: Fast correlation-based filter
FS: Feature selection
F2SL: Feature selection-based structure learning
MB: Markov blanket
MDL: Minimal description length
MI: Mutual information
MMHC: Max–min hill-climbing
PC: Parent–child
PN: Potential neighbor
PSO: Particle swarm optimization
SFS: Stochastic fractal search
SS: Search space
SU: Symmetric uncertainty

References

  1. Yang, J.; Jiang, L.F.; Xie, K.; Chen, Q.Q.; Wang, A.G. Lung nodule detection algorithm based on rank correlation causal structure learning. Expert Syst. Appl. 2023, 216, 119381. [Google Scholar] [CrossRef]
  2. McLachlan, S.; Dube, K.; Hitman, G.A.; Fenton, N.E.; Kyrimi, E. Bayesian networks in healthcare: Distribution by medical condition. Artif. Intell. Med. 2020, 107, 101912. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, Z.W.; Wang, Z.W.; He, S.W.; Gu, X.W.; Yan, Z.F. Fault detection and diagnosis of chillers using Bayesian network merged distance rejection and multi-source non-sensor information. Appl. Energy 2017, 188, 200–214. [Google Scholar] [CrossRef]
  4. Tien, I.; Kiureghian, A.D. Algorithms for Bayesian network modeling and reliability assessment of infrastructure systems. Reliab. Eng. Syst. Saf. 2016, 156, 134–147. [Google Scholar] [CrossRef]
  5. Koller, D.; Friedman, N. Probabilistic Graphical Models: Principles and Techniques; MIT Press: Cambridge, MA, USA, 2009. [Google Scholar]
  6. Spirtes, P.; Glymour, C.; Scheines, R. Causation, Prediction, and Search, 2nd ed.; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  7. Colombo, D.; Maathuis, M.H. Order-Independent Constraint-Based Causal Structure Learning. J. Mach. Learn. Res. 2014, 15, 3741–3782. [Google Scholar]
  8. Chickering, D.M. Optimal structure identification with greedy search. J. Mach. Learn. Res. 2003, 3, 507–554. [Google Scholar]
  9. de Campos, C.P.; Ji, Q. Efficient Structure Learning of Bayesian Networks using Constraints. J. Mach. Learn. Res. 2011, 12, 663–689. [Google Scholar]
  10. Yuan, C.; Malone, B. Learning Optimal Bayesian Networks: A Shortest Path Perspective. J. Artif. Intell. Res. 2013, 48, 23–65. [Google Scholar] [CrossRef]
  11. Cussens, J.; Järvisalo, M.; Korhonen, J.H.; Bartlett, M. Bayesian Network Structure Learning with Integer Programming: Polytopes, Facets and Complexity. J. Artif. Intell. Res. 2017, 58, 185–229. [Google Scholar] [CrossRef]
  12. Cooper, G.F.; Herskovits, E. A Bayesian method for the induction of probabilistic networks from data. Mach. Learn. 1992, 9, 309–347. [Google Scholar] [CrossRef]
  13. Lee, J.; Chung, W.Y.; Kim, E. Structure learning of Bayesian networks using dual genetic algorithm. IEICE Trans. Inf. Syst. 2008, 91, 32–43. [Google Scholar] [CrossRef]
  14. Cui, G.; Wong, M.L.; Lui, H.K. Machine learning for direct marketing response models: Bayesian networks with evolutionary programming. Manag. Sci. 2006, 52, 597–612. [Google Scholar] [CrossRef]
  15. Gámez, J.A.; Puerta, J.M. Searching for the best elimination sequence in Bayesian networks by using ant colony optimization. Pattern Recognit. Lett. 2002, 23, 261–277. [Google Scholar] [CrossRef]
  16. Askari, M.B.A.; Ahsaee, M.G. Bayesian network structure learning based on cuckoo search algorithm. In Proceedings of the 6th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS), Shahid Bahonar Univ Kerman, Kerman, Iran, 28 February–2 March 2018; pp. 127–130. [Google Scholar]
  17. Wang, J.Y.; Liu, S.Y. Novel binary encoding water cycle algorithm for solving Bayesian network structures learning problem. Knowl.-Based Syst. 2018, 150, 95–110. [Google Scholar] [CrossRef]
  18. Sun, B.D.; Zhou, Y.; Wang, J.J.; Zhang, W.M. A new PC-PSO algorithm for Bayesian network structure learning with structure priors. Expert Syst. Appl. 2021, 184, 11. [Google Scholar] [CrossRef]
  19. Gheisari, S.; Meybodi, M.R. BNC-PSO: Structure learning of Bayesian networks by Particle Swarm Optimization. Inf. Sci. 2016, 348, 272–289. [Google Scholar] [CrossRef]
  20. Ji, J.Z.; Wei, H.K.; Liu, C.N. An artificial bee colony algorithm for learning Bayesian networks. Soft Comput. 2013, 17, 983–994. [Google Scholar] [CrossRef]
  21. Yang, C.C.; Ji, J.Z.; Liu, J.M.; Liu, J.D.; Yin, B.C. Structural learning of Bayesian networks by bacterial foraging optimization. Int. J. Approx. Reason. 2016, 69, 147–167. [Google Scholar] [CrossRef]
  22. Wang, X.C.; Ren, H.J.; Guo, X.X. A novel discrete firefly algorithm for Bayesian network structure learning. Knowl.-Based Syst. 2022, 242, 10. [Google Scholar] [CrossRef]
  23. Tsamardinos, I.; Brown, L.E.; Aliferis, C.F. The max-min hill-climbing Bayesian network structure learning algorithm. Mach. Learn. 2006, 65, 31–78. [Google Scholar] [CrossRef]
  24. Yu, K.; Ling, Z.; Liu, L.; Li, P.; Wang, H.; Li, J. Feature Selection for Efficient Local-to-global Bayesian Network Structure Learning. ACM Trans. Knowl. Discov. Data 2024, 18, 37:1–37:27. [Google Scholar] [CrossRef]
  25. Margaritis, D.; Thrun, S. Bayesian network induction via local neighborhoods. Adv. Neural Inf. Process. Syst. 1999, 12, 505–511. [Google Scholar]
  26. Tsamardinos, I.; Aliferis, C. Towards Principled Feature Selection: Relevancy, Filters and Wrappers. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003. [Google Scholar]
  27. Tsamardinos, I.; Aliferis, C.F.; Statnikov, A. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 24–27 August 2003; pp. 673–678. [Google Scholar]
  28. Aliferis, C.F.; Statnikov, A.R.; Tsamardinos, I.; Mani, S.; Koutsoukos, X.D. Local Causal and Markov Blanket Induction for Causal Discovery and Feature Selection for Classification Part I: Algorithms and Empirical Evaluation. J. Mach. Learn. Res. 2010, 11, 171–234. [Google Scholar]
  29. Peña, J.M.; Nilsson, R.; Björkegren, J.; Tegnér, J. Towards scalable and data efficient learning of Markov boundaries. Int. J. Approx. Reason. 2007, 45, 211–232. [Google Scholar] [CrossRef]
  30. Gao, T.; Ji, Q. Efficient Markov Blanket Discovery and Its Application. IEEE Trans. Cybern. 2016, 47, 1169–1179. [Google Scholar] [CrossRef]
  31. Ling, Z.; Yu, K.; Wang, H.; Liu, L.; Ding, W.; Wu, X. BAMB: A Balanced Markov Blanket Discovery Approach to Feature Selection. ACM Trans. Intell. Syst. 2019, 10, 1–25. [Google Scholar] [CrossRef]
  32. Wang, H.; Ling, Z.; Yu, K.; Wu, X. Towards efficient and effective discovery of Markov blankets for feature selection. Inf. Sci. 2020, 509, 227–242. [Google Scholar] [CrossRef]
  33. Yin, J.; Zhou, Y.; Wang, C.; He, P.; Geng, Z. Partial orientation and local structural learning of causal networks for prediction. In Causation and Prediction Challenge; PMLR: Birmingham, UK, 2008. [Google Scholar]
  34. Koller, D.; Friedman, N. Local Causal Discovery of Direct Causes and Effects; MIT Press: Cambridge, MA, USA, 2015. [Google Scholar]
  35. Ling, Z.; Yu, K.; Wang, H.; Li, L.; Wu, X. Using Feature Selection for Local Causal Structure Learning. IEEE Trans. Emerg. Top. Comput. Intell. 2021, 5, 530–540. [Google Scholar] [CrossRef]
  36. Salimi, H. Stochastic Fractal Search: A powerful metaheuristic algorithm. Knowl.-Based Syst. 2015, 75, 1–18. [Google Scholar] [CrossRef]
  37. Nguyen, T.P.; Vo, D.N. A Novel Stochastic Fractal Search Algorithm for Optimal Allocation of Distributed Generators in Radial Distribution Systems. Appl. Soft Comput. 2018, 70, 773–796. [Google Scholar] [CrossRef]
  38. Khalilpourazari, S.; Reza Pasandide, S.H.; Akhavan Niaki, S.T. Optimization of multi-product economic production quantity model with partial backordering and physical constraints: SQP, SFS, SA, and WCA. Appl. Soft Comput. 2016, 49, 770–791. [Google Scholar] [CrossRef]
  39. Betka, A.; Terki, N.; Toumi, A.; Hamiane, M.; Ourchani, A. A new block matching algorithm based on stochastic fractal search. Appl. Intell. 2019, 49, 1146–1160. [Google Scholar] [CrossRef]
  40. Yang, Y.; Espin, C.G.S.; Al-Khafaji, M.O.; Kumar, A.; Velasco, N.; Abdulameer, S.F.; Alawadi, A.; Alam, M.M.; Dadabaev, U.A.U.; Mayorga, D. Development of a mathematical model for investigation of hollow-fiber membrane contactor for membrane distillation desalination. J. Mol. Liq. 2024, 404, 124907. [Google Scholar] [CrossRef]
  41. Kitson, N.K.; Constantinou, A.C.; Guo, Z.G.; Liu, Y.; Chobtham, K. A survey of Bayesian Network structure learning. Artif. Intell. Rev. 2023, 56, 8721–8814. [Google Scholar] [CrossRef]
  42. Zhang, Y.S.; Zhu, R.L.; Chen, Z.J.; Gao, J.; Xia, D. Evaluating and Selecting Features via Information Theoretic Lower Bounds of Feature Inner Correlations for High-Dimensional Data. Eur. J. Oper. Res. 2021, 290, 235–247. [Google Scholar] [CrossRef]
Table 1. Summary of networks.
Network | Nodes | Edges | Max. in-degree | Max. out-degree | Domain Range
alarm | 37 | 46 | 4 | 5 | 2–4
hepar2 | 70 | 123 | 6 | 17 | 2–4
win95pts | 76 | 112 | 7 | 10 | 2–2
munin | 189 | 282 | 3 | 15 | 1–21
andes | 223 | 338 | 6 | 12 | 2–2
pigs | 441 | 592 | 2 | 39 | 3–3
Table 2. Soft-constrained knowledge mining performance.
Network | Dataset | SS1 P | SS1 R | SS2 P | SS2 R | SS3 P | SS3 R | V P | V R
alarm | 1000 | 0.1891 | 0.9783 | 0.4889 | 0.9565 | 0.8837 | 0.8261 | 1 | 0.4348
alarm | 3000 | 0.1815 | 0.9783 | 0.5176 | 0.9565 | 0.9286 | 0.8478 | 1 | 0.5000
alarm | 5000 | 0.1758 | 0.9783 | 0.4945 | 0.9783 | 0.9535 | 0.8913 | 1 | 0.5652
alarm | 10,000 | 0.1673 | 0.9783 | 0.5056 | 0.9783 | 0.9762 | 0.8913 | 0.9630 | 0.5652
hepar2 | 1000 | 0.2702 | 0.6260 | 0.4205 | 0.6016 | 0.7353 | 0.4065 | 0.4483 | 0.1057
hepar2 | 3000 | 0.2648 | 0.7642 | 0.4518 | 0.7236 | 0.7952 | 0.5366 | 0.6667 | 0.1789
hepar2 | 5000 | 0.2512 | 0.8455 | 0.4612 | 0.8211 | 0.8370 | 0.6260 | 0.6667 | 0.2114
hepar2 | 10,000 | 0.2339 | 0.8862 | 0.4771 | 0.8455 | 0.8854 | 0.6911 | 0.7805 | 0.2602
win95pts | 1000 | 0.1951 | 0.9196 | 0.4914 | 0.7679 | 0.9038 | 0.4196 | 0.8636 | 0.1696
win95pts | 3000 | 0.1786 | 0.9375 | 0.4475 | 0.8750 | 0.8947 | 0.4554 | 0.9655 | 0.2500
win95pts | 5000 | 0.1723 | 0.9554 | 0.4410 | 0.9018 | 0.8986 | 0.5536 | 0.9783 | 0.4018
win95pts | 10,000 | 0.1775 | 0.9732 | 0.4483 | 0.9286 | 0.8971 | 0.5446 | 0.9333 | 0.3750
munin | 1000 | 0.0330 | 0.7985 | 0.0938 | 0.6850 | 0.3251 | 0.3370 | 0.6250 | 0.0733
munin | 3000 | 0.0294 | 0.8352 | 0.1320 | 0.7436 | 0.4977 | 0.4029 | 0.8246 | 0.1722
munin | 5000 | 0.0283 | 0.8571 | 0.1345 | 0.7766 | 0.6106 | 0.4652 | 0.8594 | 0.2015
munin | 10,000 | 0.0277 | 0.8645 | 0.1297 | 0.8132 | 0.6238 | 0.4615 | 0.8621 | 0.1832
andes | 1000 | 0.1375 | 0.7870 | 0.2135 | 0.7574 | 0.8990 | 0.5533 | 0.9466 | 0.3669
andes | 3000 | 0.1327 | 0.8846 | 0.2393 | 0.8609 | 0.9244 | 0.6509 | 0.9716 | 0.5059
andes | 5000 | 0.1274 | 0.9172 | 0.2402 | 0.8905 | 0.9070 | 0.6923 | 0.9834 | 0.5266
andes | 10,000 | 0.1240 | 0.9556 | 0.2484 | 0.9320 | 0.9267 | 0.7485 | 0.9859 | 0.6213
pigs | 1000 | 0.0430 | 1 | 0.1414 | 1 | 0.5157 | 1 | 0.9936 | 0.7889
pigs | 3000 | 0.0357 | 1 | 0.1376 | 1 | 0.6456 | 1 | 0.9981 | 0.9020
pigs | 5000 | 0.0340 | 1 | 0.1341 | 1 | 0.6884 | 1 | 0.9965 | 0.9493
pigs | 10,000 | 0.0314 | 1 | 0.1314 | 1 | 0.7522 | 1 | 1 | 0.9696
Table 3. The performance of different local structure learning algorithms in generating search spaces. ‘-’ indicates that no result has been output after running for one hour.
Network | Algorithm | F5 (1000) | P (1000) | R (1000) | RT (1000) | F5 (10,000) | P (10,000) | R (10,000) | RT (10,000)
alarm | IAMB | 0.9002 | 0.6667 | 0.9130 | 0.17 | 0.9400 | 0.6567 | 0.9565 | 1.27
alarm | STMB | 0.8554 | 0.2739 | 0.9348 | 0.51 | 0.8744 | 0.2394 | 0.9783 | 1.76
alarm | PCMB | 0.8199 | 0.6909 | 0.8261 | 1.53 | 0.9032 | 0.7119 | 0.9130 | 11.23
alarm | BAMB | 0.9179 | 0.6324 | 0.9348 | 0.63 | 0.9385 | 0.6377 | 0.9565 | 5.37
alarm | LCS_FS | 0.7794 | 0.7059 | 0.7826 | 0.19 | 0.7583 | 0.7000 | 0.7609 | 1.63
alarm | MMPC | 0.9278 | 0.7818 | 0.9348 | 0.36 | 0.9718 | 0.8333 | 0.9783 | 0.86
alarm | WLBRC | 0.9226 | 0.4889 | 0.9565 | 0.15 | 0.9443 | 0.5056 | 0.9783 | 0.78
hepar2 | IAMB | 0.5907 | 0.5290 | 0.5935 | 0.62 | 0.8031 | 0.4493 | 0.8293 | 6.59
hepar2 | STMB | 0.5885 | 0.4867 | 0.5935 | 0.62 | 0.8564 | 0.3763 | 0.9024 | 16.60
hepar2 | PCMB | 0.3498 | 0.8936 | 0.3415 | 2.27 | 0.6623 | 0.7714 | 0.6585 | 171.22
hepar2 | BAMB | 0.4951 | 0.7895 | 0.4878 | 0.65 | 0.8249 | 0.7286 | 0.8293 | 12.54
hepar2 | LCS_FS | 0.2179 | 0.9630 | 0.2114 | 0.27 | 0.2262 | 0.9643 | 0.2195 | 2.70
hepar2 | MMPC | 0.5473 | 0.6204 | 0.5447 | 0.76 | 0.8186 | 0.7594 | 0.8211 | 5.39
hepar2 | WLBRC | 0.5918 | 0.4205 | 0.6016 | 0.24 | 0.8211 | 0.4771 | 0.8455 | 1.66
win95pts | IAMB | 0.7018 | 0.4878 | 0.7143 | 0.77 | 0.8529 | 0.4541 | 0.8839 | 8.01
win95pts | STMB | 0.6636 | 0.1985 | 0.7321 | 0.82 | 0.7176 | 0.1258 | 0.8839 | 7.99
win95pts | PCMB | 0.4271 | 0.7705 | 0.4196 | 2.00 | 0.7070 | 0.5634 | 0.7143 | 45.40
win95pts | BAMB | 0.6646 | 0.5597 | 0.6696 | 1.93 | 0.8693 | 0.5236 | 0.8929 | 29.42
win95pts | LCS_FS | 0.5808 | 0.5909 | 0.5804 | 0.63 | 0.5697 | 0.5289 | 0.5714 | 5.99
win95pts | MMPC | 0.7054 | 0.7054 | 0.7054 | 0.79 | 0.8853 | 0.7299 | 0.8929 | 2.46
win95pts | WLBRC | 0.7516 | 0.4914 | 0.7679 | 0.43 | 0.8918 | 0.4483 | 0.9286 | 2.23
munin | IAMB | 0.4291 | 0.5714 | 0.4249 | 2.29 | 0.5897 | 0.5876 | 0.5897 | 14.63
munin | STMB | - | - | - | - | - | - | - | -
munin | PCMB | 0.1482 | 0.2073 | 0.1465 | 49.25 | 0.4556 | 0.4058 | 0.4579 | 1438.23
munin | BAMB | - | - | - | - | - | - | - | -
munin | LCS_FS | 0.5570 | 0.2082 | 0.5971 | 6.73 | 0.5277 | 0.2650 | 0.5495 | 65.05
munin | MMPC | 0.4591 | 0.0924 | 0.5458 | 11.45 | 0.6616 | 0.3569 | 0.6850 | 40.26
munin | WLBRC | 0.5514 | 0.0938 | 0.6850 | 7.09 | 0.6761 | 0.1297 | 0.8132 | 32.41
andes | IAMB | 0.7106 | 0.3009 | 0.7515 | 9.93 | 0.8336 | 0.2543 | 0.9172 | 146.06
andes | STMB | 0.6818 | 0.1902 | 0.7604 | 6.40 | 0.7757 | 0.1475 | 0.9349 | 64.78
andes | PCMB | 0.5934 | 0.6390 | 0.5917 | 26.97 | 0.7871 | 0.5757 | 0.7988 | 312.44
andes | BAMB | 0.6946 | 0.5629 | 0.7012 | 7.99 | 0.8630 | 0.5644 | 0.8817 | 89.52
andes | LCS_FS | 0.4267 | 0.7030 | 0.4201 | 3.64 | 0.4506 | 0.7282 | 0.4438 | 35.02
andes | MMPC | 0.7212 | 0.5441 | 0.7308 | 5.94 | 0.8892 | 0.6157 | 0.9053 | 17.03
andes | WLBRC | 0.6898 | 0.2135 | 0.7574 | 2.42 | 0.8428 | 0.2484 | 0.9320 | 12.03
pigs | IAMB | 0.9735 | 0.6327 | 0.9949 | 18.10 | 0.9381 | 0.3682 | 1 | 216.75
pigs | STMB | 0.5813 | 0.0507 | 1 | 110.24 | 0.4627 | 0.0321 | 1 | 1874.75
pigs | PCMB | - | - | - | - | - | - | - | -
pigs | BAMB | 0.9773 | 0.6238 | 1 | 212.06 | - | - | - | -
pigs | LCS_FS | 0.9661 | 0.6950 | 0.9814 | 22.07 | 0.9862 | 0.7336 | 1 | 228.73
pigs | MMPC | 0.9708 | 0.5611 | 1 | 27.82 | 0.9715 | 0.5670 | 1 | 314.17
pigs | WLBRC | 0.8107 | 0.1414 | 1 | 13.55 | 0.7973 | 0.1314 | 1 | 127.96
Table 4. The performance of different structure learning algorithms in identifying V-structures. ‘-’ indicates that no result has been output after running for one hour.
| Network | Algorithm | F_0.2 (1000) | P (1000) | R (1000) | RT (1000) | F_0.2 (10,000) | P (10,000) | R (10,000) | RT (10,000) |
|---|---|---|---|---|---|---|---|---|---|
| alarm | GS | 0.5417 | 1 | 0.0435 | 0.11 | 0.4860 | 0.5455 | 0.1304 | 0.75 |
|  | CMB | 0.7750 | 0.8000 | 0.4348 | 4.11 | 0.7164 | 0.7273 | 0.5217 | 13.71 |
|  | PCD-by-PCD | 0.8050 | 0.8333 | 0.4348 | 1.53 | 0.9501 | 0.9677 | 0.6522 | 8.91 |
|  | LCS_FS | 0.9257 | 0.9583 | 0.5000 | 1.15 | 0.8854 | 0.9167 | 0.4783 | 7.46 |
|  | PC-Stable | 0.9472 | 0.9667 | 0.6304 | 0.27 | 0.9527 | 0.9688 | 0.6739 | 0.77 |
|  | F2SL_c | 0.9524 | 1 | 0.4348 | 0.08 | 0.9327 | 1 | 0.3478 | 0.59 |
|  | WLBRC | 0.9524 | 1 | 0.4348 | 0.15 | 0.9376 | 0.9630 | 0.5652 | 0.68 |
| hepar2 | GS | 0.4830 | 0.7500 | 0.0488 | 0.40 | 0.7826 | 0.9474 | 0.1463 | 2.32 |
|  | CMB | 0.7929 | 0.9500 | 0.1545 | 7.03 | 0.6878 | 0.7292 | 0.2846 | 546.88 |
|  | PCD-by-PCD | 0.2527 | 0.2857 | 0.0650 | 1.77 | 0.3881 | 0.4130 | 0.1545 | 74.64 |
|  | LCS_FS | 0.3006 | 1 | 0.0163 | 0.79 | 0.3006 | 1 | 0.0163 | 7.02 |
|  | PC-Stable | 0.4388 | 0.4848 | 0.1301 | 0.27 | 0.6663 | 0.6857 | 0.3902 | 3.85 |
|  | F2SL_c | 0.3006 | 1 | 0.0163 | 0.17 | 0.3006 | 1 | 0.0163 | 1.46 |
|  | WLBRC | 0.3986 | 0.4483 | 0.1057 | 0.24 | 0.7247 | 0.7805 | 0.2602 | 1.55 |
| win95pts | GS | 0.7573 | 1 | 0.1071 | 0.45 | 0.9419 | 1 | 0.3839 | 5.41 |
|  | CMB | 0.6697 | 0.7179 | 0.2500 | 5.59 | 0.7706 | 0.7937 | 0.4464 | 105.28 |
|  | PCD-by-PCD | 0.5997 | 0.6800 | 0.1518 | 4.58 | 0.9228 | 0.9623 | 0.4554 | 26.09 |
|  | LCS_FS | 0.8197 | 0.8667 | 0.3482 | 5.73 | 0.7780 | 0.8113 | 0.3839 | 39.36 |
|  | PC-Stable | 0.6284 | 0.7273 | 0.1429 | 0.30 | 0.9094 | 0.9296 | 0.5893 | 1.93 |
|  | F2SL_c | 0.8324 | 0.8947 | 0.3036 | 0.24 | 0.8447 | 0.8913 | 0.3661 | 1.91 |
|  | WLBRC | 0.7462 | 0.8636 | 0.1696 | 0.43 | 0.8828 | 0.9333 | 0.3750 | 2.03 |
| munin | GS | 0 | 0 | 0 | 0.97 | 0.1099 | 0.2500 | 0.0073 | 5.92 |
|  | CMB | - | - | - | - | - | - | - | - |
|  | PCD-by-PCD | - | - | - | - | - | - | - | - |
|  | LCS_FS | 0.2887 | 0.2906 | 0.2491 | 160.83 | 0.4012 | 0.4129 | 0.2344 | 1715.66 |
|  | PC-Stable | 0.0551 | 0.0535 | 0.1941 | 197.85 | 0.5558 | 0.5733 | 0.3150 | 92.89 |
|  | F2SL_c | 0.5205 | 0.5679 | 0.1685 | 1.64 | 0.5980 | 0.6506 | 0.1978 | 14.50 |
|  | WLBRC | 0.4846 | 0.6250 | 0.0733 | 7.09 | 0.7545 | 0.8621 | 0.1832 | 31.28 |
| andes | GS | 0.8592 | 0.9462 | 0.2604 | 2.48 | 0.7335 | 0.7944 | 0.2515 | 17.11 |
|  | CMB | 0.8168 | 0.8494 | 0.4172 | 40.39 | 0.8120 | 0.8378 | 0.4586 | 3108.60 |
|  | PCD-by-PCD | 0.8195 | 0.8760 | 0.3136 | 30.18 | 0.9186 | 0.9468 | 0.5266 | 185.10 |
|  | LCS_FS | 0.8756 | 0.9670 | 0.2604 | 12.97 | 0.9093 | 0.9902 | 0.2988 | 95.72 |
|  | PC-Stable | 0.9300 | 0.9579 | 0.5385 | 2.24 | 0.9701 | 0.9809 | 0.7604 | 8.26 |
|  | F2SL_c | 0.8941 | 0.9889 | 0.2633 | 1.54 | 0.9021 | 0.9896 | 0.2811 | 14.22 |
|  | WLBRC | 0.8923 | 0.9466 | 0.3669 | 2.42 | 0.9642 | 0.9859 | 0.6213 | 11.48 |
| pigs | GS | 0.3835 | 0.8824 | 0.0253 | 2.07 | 0.4508 | 0.7353 | 0.0422 | 9.48 |
|  | CMB | - | - | - | - | - | - | - | - |
|  | PCD-by-PCD | - | - | - | - | - | - | - | - |
|  | LCS_FS | 0.9753 | 0.9759 | 0.9595 | 67.35 | 1 | 1 | 1 | 383.33 |
|  | PC-Stable | 0.7543 | 0.7542 | 0.7568 | 21.54 | 0.9984 | 0.9983 | 1 | 285.29 |
|  | F2SL_c | 0.9942 | 0.9964 | 0.9409 | 7.18 | 1 | 1 | 1 | 59.15 |
|  | WLBRC | 0.9838 | 0.9936 | 0.7889 | 13.55 | 0.9988 | 1 | 0.9696 | 68.72 |
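The F_5 and F_0.2 columns in Tables 3 and 4 are instances of the general F_β measure, which weights recall over precision for β > 1 (skeleton recovery, where missing a true edge is costly) and precision over recall for β < 1 (V-structure identification, where a spurious orientation is costly). A minimal sketch for reproducing the table entries from P and R — the function name `f_beta` is ours, not from the paper:

```python
def f_beta(beta: float, precision: float, recall: float) -> float:
    """General F-measure: beta > 1 weights recall, beta < 1 weights precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Cross-checks against Table 3 (alarm/IAMB, 1000 samples) and
# Table 4 (alarm/GS, 1000 samples):
print(round(f_beta(5, 0.6667, 0.9130), 4))   # 0.9002
print(round(f_beta(0.2, 1.0, 0.0435), 4))    # 0.5418 (table: 0.5417, P/R rounding)
```

Any small discrepancy against the tables comes from P and R themselves being rounded to four decimals.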
Table 5. Performance comparison of WLBRC algorithm with other algorithms (Win/Tie/Lose).
| Algorithm | F_5 | R | RT |
|---|---|---|---|
| IAMB | 9/0/3 | 11/1/0 | 10/0/2 |
| STMB | 11/0/1 | 6/3/3 | 12/0/0 |
| PCMB | 12/0/0 | 12/0/0 | 12/0/0 |
| BAMB | 8/0/4 | 11/1/0 | 12/0/0 |
| LCS_FS | 9/0/3 | 11/1/0 | 11/0/1 |
| MMPC | 6/0/6 | 9/3/0 | 12/0/0 |

| Algorithm | F_0.2 | P | RT |
|---|---|---|---|
| GS | 8/0/4 | 7/1/4 | 7/0/5 |
| CMB | 11/0/1 | 11/0/1 | 12/0/0 |
| PCD-by-PCD | 10/0/2 | 10/0/2 | 12/0/0 |
| LCS_FS | 10/0/2 | 6/1/5 | 12/0/0 |
| PC-Stable | 7/0/5 | 9/0/3 | 8/0/4 |
| F2SL_c | 6/1/5 | 3/2/7 | 1/0/11 |
Table 6. Parameters of the FS-SFS.
| Param. | Value | Description |
|---|---|---|
| p | 0.1 | Constraint rate for edges that are certain to exist |
| q | 0.5 | Constraint rate for nonexistent edges |
| nPop | 50 | Population size |
| m_w | 10 | Maximum number of walks in the same direction |
| μ | log(m) | Threshold for determining the highest BIC increase |
| L_max | min(N, 200) | Maximum number of non-improving iterations allowed |
| MaxIt | 2000 | Maximum number of iterations allowed |
Table 7. Performance of FS-SFS algorithm on different datasets.
| Network | Dataset | BIC | AE | DE | RE | SHD | RT | F1 |
|---|---|---|---|---|---|---|---|---|
| alarm | 1000 | −1.2490 × 10^4 ± 0 | 0 ± 0 | 5 ± 0 | 2.40 ± 0.55 | 7.40 ± 0.55 | 43.12 | 0.8874 ± 0.0119 |
|  | 3000 | −3.4811 × 10^4 ± 0 | 0 ± 0 | 1 ± 0 | 2 ± 0 | 3 ± 0 | 73.95 | 0.9451 ± 0 |
|  | 5000 | −5.6914 × 10^4 ± 0 | 0 ± 0 | 1 ± 0 | 1 ± 0 | 2 ± 0 | 72.88 | 0.9670 ± 0 |
|  | 10,000 | −1.1201 × 10^5 ± 0 | 1 ± 0 | 1 ± 0 | 2 ± 0 | 4 ± 0 | 110.39 | 0.9348 ± 0 |
| hepar2 | 1000 | −3.4052 × 10^4 ± 5.55 | 4.40 ± 0.52 | 63.60 ± 0.52 | 2.20 ± 0.79 | 70.20 ± 0.79 | 144.21 | 0.6124 ± 0.0067 |
|  | 3000 | −1.0017 × 10^5 ± 0 | 3 ± 0 | 51 ± 0 | 2 ± 0 | 56 ± 0 | 190.77 | 0.7071 ± 0 |
|  | 5000 | −1.6603 × 10^5 ± 2.75 | 2 ± 0 | 41.60 ± 0.52 | 2 ± 0 | 45.60 ± 0.52 | 286.76 | 0.7694 ± 0.0031 |
|  | 10,000 | −3.3134 × 10^5 ± 2.17 | 1 ± 0 | 30 ± 0 | 4.60 ± 0.52 | 35.60 ± 0.52 | 446.02 | 0.8147 ± 0.0048 |
| win95pts | 1000 | −1.0749 × 10^4 ± 0 | 10 ± 0 | 33 ± 0 | 3 ± 0 | 46 ± 0 | 253.55 | 0.7562 ± 0 |
|  | 3000 | −2.9637 × 10^4 ± 6.42 | 4.60 ± 0.52 | 23.60 ± 0.52 | 2.60 ± 0.52 | 30.80 ± 1.55 | 335.52 | 0.8371 ± 0.0101 |
|  | 5000 | −4.7576 × 10^4 ± 89.14 | 5.20 ± 0.42 | 18.20 ± 0.42 | 3.40 ± 0.52 | 26.80 ± 0.79 | 556.21 | 0.8569 ± 0.0049 |
|  | 10,000 | −9.3658 × 10^4 ± 0 | 3 ± 0 | 10 ± 0 | 2.64 ± 0.49 | 15.64 ± 0.49 | 755.73 | 0.9158 ± 0.0045 |
| munin | 1000 | −5.6785 × 10^4 ± 6.90 | 31 ± 2.11 | 143 ± 2.11 | 9 ± 0 | 183 ± 4.22 | 3383.80 | 0.5576 ± 0.0097 |
|  | 3000 | −1.4629 × 10^5 ± 39.79 | 32 ± 1.05 | 122 ± 0 | 14.50 ± 0.53 | 168.50 ± 0.53 | 4388.82 | 0.5987 ± 0.0009 |
|  | 5000 | −2.2804 × 10^5 ± 340.72 | 32 ± 2.11 | 102 ± 2.11 | 14 ± 0 | 148 ± 4.22 | 5614.34 | 0.6597 ± 0.0089 |
|  | 10,000 | −4.2387 × 10^5 ± 533.26 | 32.90 ± 2.18 | 87.90 ± 2.18 | 6.50 ± 1.35 | 127.30 ± 3.68 | 8676.68 | 0.7275 ± 0.0070 |
| andes | 1000 | −9.5761 × 10^4 ± 3.23 | 47 ± 1.05 | 82.50 ± 1.58 | 2 ± 0 | 131.50 ± 2.64 | 2915.82 | 0.7916 ± 0.0043 |
|  | 3000 | −2.8210 × 10^5 ± 3.32 | 31 ± 1.05 | 51.50 ± 1.58 | 2.50 ± 1.58 | 85 ± 4.22 | 3349.89 | 0.8665 ± 0.0090 |
|  | 5000 | −4.6757 × 10^5 ± 1.03 | 19.50 ± 0.52 | 43.50 ± 0.52 | 4.50 ± 0.52 | 67.50 ± 0.52 | 3868.22 | 0.8896 ± 0 |
|  | 10,000 | −9.3286 × 10^5 ± 0.22 | 17.80 ± 0.79 | 25.40 ± 0.84 | 1.20 ± 0.42 | 44.40 ± 1.72 | 5512.32 | 0.9318 ± 0.0029 |
| pigs | 1000 | −3.4827 × 10^5 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 16,843.53 | 1 ± 0 |
|  | 3000 | −1.0120 × 10^6 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 17,692.17 | 1 ± 0 |
|  | 5000 | −1.6760 × 10^6 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 18,782.75 | 1 ± 0 |
|  | 10,000 | −3.3268 × 10^6 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 0 ± 0 | 21,748.28 | 1 ± 0 |
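In Table 7, the structural Hamming distance decomposes as SHD = AE + DE + RE: edges added, deleted, and reversed in the learned DAG relative to the ground truth (every row above satisfies this identity, e.g. alarm/1000: 0 + 5 + 2.40 = 7.40). A minimal sketch of this accounting over directed edge sets — the representation and function name are ours, not the paper's implementation:

```python
def shd_parts(true_edges, learned_edges):
    """Return (AE, DE, RE) so that SHD = AE + DE + RE, given two DAGs
    as collections of directed edges (u, v) meaning u -> v."""
    t, l = set(true_edges), set(learned_edges)
    re_ = {(u, v) for (u, v) in l if (v, u) in t}              # reversed edges
    ae = len(l - t - re_)                                      # spurious additions
    de = len({(u, v) for (u, v) in t - l if (v, u) not in l})  # pure deletions
    return ae, de, len(re_)

# Toy check. Truth: A->B, B->C.  Learned: A->B, C->B (reversed), A->C (added).
ae, de, re_ = shd_parts([("A", "B"), ("B", "C")],
                        [("A", "B"), ("C", "B"), ("A", "C")])
print(ae, de, re_, ae + de + re_)   # 1 0 1 2
```

Counting a reversal as one error (rather than one deletion plus one addition) is the convention the AE/DE/RE columns above imply.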
Table 8. The performance impact after parameter changes.
| Network | Param. | BIC (1000) | SHD (1000) | F1 (1000) | BIC (10,000) | SHD (10,000) | F1 (10,000) |
|---|---|---|---|---|---|---|---|
| alarm | original | −1.2490 × 10^4 | 7.40 | 0.8874 | −1.1201 × 10^5 | 4 | 0.9348 |
|  | p = 0 | −1.2347 × 10^4 | 8 | 0.8864 | −1.1200 × 10^5 | 5 | 0.9333 |
|  | p = 0.2 | −1.2421 × 10^4 | 8 | 0.8764 | −1.1210 × 10^5 | 4 | 0.9247 |
|  | q = 0.4 | −1.2394 × 10^4 | 8 | 0.8636 | −1.1201 × 10^5 | 4 | 0.9348 |
|  | q = 0.6 | −1.2490 × 10^4 | 7 | 0.8966 | −1.1293 × 10^5 | 4 | 0.9438 |
|  | nPop = 40 | −1.2490 × 10^4 | 7 | 0.8966 | −1.1201 × 10^5 | 4 | 0.9348 |
|  | nPop = 60 | −1.2490 × 10^4 | 7 | 0.8966 | −1.1201 × 10^5 | 4 | 0.9348 |
| hepar2 | original | −3.4052 × 10^4 | 70.20 | 0.6124 | −3.3134 × 10^5 | 35.60 | 0.8147 |
|  | p = 0 | −3.4001 × 10^4 | 79 | 0.5355 | −3.3125 × 10^5 | 43 | 0.7642 |
|  | p = 0.2 | −3.4112 × 10^4 | 65 | 0.6387 | −3.3155 × 10^5 | 38 | 0.7963 |
|  | q = 0.4 | −3.4045 × 10^4 | 70 | 0.6096 | −3.3135 × 10^5 | 36 | 0.8111 |
|  | q = 0.6 | −3.4057 × 10^4 | 69 | 0.6237 | −3.3140 × 10^5 | 36 | 0.8186 |
|  | nPop = 40 | −3.4046 × 10^4 | 71 | 0.6064 | −3.3134 × 10^5 | 35 | 0.8203 |
|  | nPop = 60 | −3.3046 × 10^4 | 71 | 0.6064 | −3.3135 × 10^5 | 36 | 0.8111 |
Table 9. BIC scores of the algorithms for different datasets. Bold denotes the BIC score that was the best found amongst all methods. ‘*’ represents a significant difference (p < 0.05), while ‘**’ represents a very significant difference (p < 0.01).
| Sample | Network | PC-Stable | GS | F2SL | MMHC | BNC-PSO | FS-SFS |
|---|---|---|---|---|---|---|---|
| 1000 | alarm | −1.2466 × 10^4 ** | −1.7902 × 10^4 ** | −1.2475 × 10^4 ** | −1.2561 × 10^4 ** | −1.2367 × 10^4 ** | **−1.2339 × 10^4** |
|  | hepar2 | −3.5318 × 10^4 ** | −3.4834 × 10^4 ** | −3.4261 × 10^4 ** | −3.4048 × 10^4 ** | −3.3994 × 10^4 * | **−3.3984 × 10^4** |
|  | win95pts | −1.3271 × 10^4 ** | −1.5375 × 10^4 ** | −1.0987 × 10^4 ** | −1.0917 × 10^4 ** | −1.0534 × 10^4 ** | **−1.0487 × 10^4** |
|  | munin | - | −9.8582 × 10^4 ** | −7.3424 × 10^4 ** | −8.7319 × 10^4 ** | −5.8794 × 10^4 ** | **−5.5655 × 10^4** |
|  | andes | −9.9080 × 10^4 ** | −1.0625 × 10^5 ** | −1.0074 × 10^5 ** | −9.8045 × 10^4 ** | −9.6344 × 10^4 ** | **−9.5664 × 10^4** |
|  | pigs | - | −4.5682 × 10^5 ** | −3.5210 × 10^5 ** | −3.4827 × 10^5 | −3.5032 × 10^5 * | **−3.4827 × 10^5** |
| 3000 | alarm | −3.5083 × 10^4 ** | −5.3065 × 10^4 ** | −3.5693 × 10^4 ** | −3.6877 × 10^4 ** | −3.4722 × 10^4 * | **−3.4715 × 10^4** |
|  | hepar2 | −1.2180 × 10^5 ** | −1.0207 × 10^5 ** | −1.0124 × 10^5 ** | −1.0030 × 10^5 ** | −1.0007 × 10^5 ** | **−1.0000 × 10^5** |
|  | win95pts | −3.5202 × 10^4 ** | −4.4373 × 10^4 ** | −3.1103 × 10^4 ** | −3.0680 × 10^4 ** | −2.9436 × 10^4 ** | **−2.9352 × 10^4** |
|  | munin | - | −2.9331 × 10^5 ** | −1.7190 × 10^5 ** | −2.0244 × 10^5 ** | −1.4934 × 10^5 ** | **−1.4228 × 10^5** |
|  | andes | −2.9066 × 10^5 ** | −3.1683 × 10^5 ** | −2.9970 × 10^5 ** | −2.8752 × 10^5 ** | −2.8284 × 10^5 ** | **−2.8225 × 10^5** |
|  | pigs | **−1.0120 × 10^6** | −1.3594 × 10^6 ** | −1.0122 × 10^6 ** | **−1.0120 × 10^6** | −1.0155 × 10^6 ** | **−1.0120 × 10^6** |
| 5000 | alarm | −5.7240 × 10^4 ** | −8.8094 × 10^4 ** | −5.8871 × 10^4 ** | −5.9328 × 10^4 ** | −5.7181 × 10^4 ** | **−5.6858 × 10^4** |
|  | hepar2 | −1.7224 × 10^5 ** | −1.7118 × 10^5 ** | −1.6849 × 10^5 ** | −1.6635 × 10^5 ** | −1.6592 × 10^5 * | **−1.6591 × 10^5** |
|  | win95pts | −5.6061 × 10^4 ** | −7.0745 × 10^4 ** | −5.0993 × 10^4 ** | −4.8629 × 10^4 ** | −4.7595 × 10^4 ** | **−4.7424 × 10^4** |
|  | munin | - | −4.9315 × 10^5 ** | −2.6622 × 10^5 ** | −3.1552 × 10^5 ** | −2.3419 × 10^5 ** | **−2.2742 × 10^5** |
|  | andes | −4.8078 × 10^5 ** | −5.3222 × 10^5 ** | −4.9561 × 10^5 ** | −4.7403 × 10^5 ** | −4.6939 × 10^5 ** | **−4.6756 × 10^5** |
|  | pigs | −1.6766 × 10^6 ** | −2.2608 × 10^6 ** | −1.6768 × 10^6 ** | −1.6764 × 10^6 * | −1.6768 × 10^6 * | **−1.6760 × 10^6** |
| 10,000 | alarm | −1.1383 × 10^5 ** | −1.7479 × 10^5 ** | −1.1623 × 10^5 ** | −1.1489 × 10^5 ** | −1.1200 × 10^5 * | **−1.1197 × 10^5** |
|  | hepar2 | −3.3786 × 10^5 ** | −3.4119 × 10^5 ** | −3.3729 × 10^5 ** | −3.3182 × 10^5 ** | −3.3125 × 10^5 * | **−3.3125 × 10^5** |
|  | win95pts | −1.0726 × 10^5 ** | −1.3627 × 10^5 ** | −1.0393 × 10^5 ** | −9.8666 × 10^4 ** | −9.3616 × 10^4 ** | **−9.3566 × 10^4** |
|  | munin | - | −9.8632 × 10^5 ** | −5.0656 × 10^5 ** | −9.7306 × 10^5 ** | −4.5065 × 10^5 ** | **−4.2976 × 10^5** |
|  | andes | −9.5765 × 10^5 ** | −1.0700 × 10^6 ** | −1.0036 × 10^6 ** | −9.5184 × 10^5 ** | −9.3407 × 10^5 ** | **−9.3264 × 10^5** |
|  | pigs | −3.3270 × 10^6 ** | −4.5233 × 10^6 ** | **−3.3268 × 10^6** | **−3.3268 × 10^6** | −3.3268 × 10^6 | **−3.3268 × 10^6** |
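The BIC scores compared in Tables 7–9 follow the standard decomposable form for discrete Bayesian networks: the maximized log-likelihood minus a penalty of (log N)/2 per free parameter, so higher (less negative) is better. A minimal sketch assuming a plain-Python record layout; the function name `bic_score` and the use of observed (rather than all possible) parent configurations in the penalty are our choices, not the paper's implementation:

```python
import math
from collections import Counter

def bic_score(data, parents):
    """BIC = max log-likelihood - (log N / 2) * free parameters, summed per node.
    `data`: list of {variable: value} records; `parents`: {variable: [its parents]}
    describing the candidate DAG."""
    n = len(data)
    score = 0.0
    for var, pa in parents.items():
        joint = Counter((tuple(r[p] for p in pa), r[var]) for r in data)
        marg = Counter(tuple(r[p] for p in pa) for r in data)
        # Log-likelihood: sum over (parent config, value) cells of c * log(c / c_parent)
        score += sum(c * math.log(c / marg[cfg]) for (cfg, _), c in joint.items())
        r_i = len({r[var] for r in data})   # number of states of var
        q_i = len(marg)                     # observed parent configurations
        score -= 0.5 * math.log(n) * (r_i - 1) * q_i
    return score

# Toy check: B copies A perfectly over 4 records.
data = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
print(bic_score(data, {"A": [], "B": ["A"]}))   # -7*ln(2), about -4.8520
```

Decomposability is what makes the score usable inside a stochastic search: a move that changes one node's parent set only requires rescoring that node's term.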
Table 10. F1 scores of the algorithms for different datasets. Bold denotes the F1 score that was the best found amongst all methods.
| Sample | Network | PC-Stable | GS | F2SL | MMHC | BNC-PSO | FS-SFS |
|---|---|---|---|---|---|---|---|
| 1000 | alarm | **0.8810** | 0.4516 | 0.8434 | 0.7912 | 0.8667 | 0.8764 |
|  | hepar2 | 0.2927 | 0.3529 | 0.2953 | **0.5464** | 0.5385 | 0.4839 |
|  | win95pts | 0.4654 | 0.4459 | 0.5297 | 0.5455 | **0.6146** | 0.5437 |
|  | munin | 0.1180 | 0.0414 | 0.3129 | 0.2722 | 0.2850 | **0.3664** |
|  | andes | 0.7179 | 0.5217 | 0.5820 | 0.5742 | 0.7064 | **0.7364** |
|  | pigs | 0.7555 | 0.0707 | 0.9735 | 0.9975 | 0.9486 | **1** |
| 3000 | alarm | **0.9545** | 0.4179 | 0.8537 | 0.7333 | 0.8764 | 0.8989 |
|  | hepar2 | 0.3094 | 0.3827 | 0.3067 | 0.5670 | 0.6154 | **0.6392** |
|  | win95pts | 0.6932 | 0.5375 | 0.6349 | 0.6884 | **0.7273** | 0.6986 |
|  | munin | 0.2500 | 0.0621 | 0.3000 | 0.2659 | 0.3991 | **0.4724** |
|  | andes | 0.8068 | 0.5594 | 0.5667 | 0.6941 | 0.8066 | **0.8200** |
|  | pigs | **1** | 0.0943 | 0.9983 | **1** | 0.9713 | **1** |
| 5000 | alarm | **0.9545** | 0.4412 | 0.8571 | 0.7957 | 0.9333 | 0.9111 |
|  | hepar2 | 0.4255 | 0.4294 | 0.2267 | 0.5941 | **0.7122** | 0.7059 |
|  | win95pts | **0.7701** | 0.5697 | 0.6526 | 0.7544 | 0.7558 | 0.7373 |
|  | munin | 0.3769 | 0.0403 | 0.3581 | 0.3211 | 0.4915 | **0.5265** |
|  | andes | 0.8485 | 0.5217 | 0.5882 | 0.7797 | 0.8223 | **0.8723** |
|  | pigs | 0.9983 | 0.1178 | 0.9983 | 0.9958 | 0.9941 | **1** |
| 10,000 | alarm | 0.9318 | 0.4478 | 0.8434 | 0.8602 | **0.9556** | 0.9011 |
|  | hepar2 | 0.5572 | 0.4881 | 0.2267 | 0.7204 | **0.7793** | 0.7700 |
|  | win95pts | 0.8061 | 0.6550 | 0.6237 | 0.6756 | 0.8778 | **0.8818** |
|  | munin | 0.5380 | 0.0604 | 0.3113 | 0.3784 | 0.5031 | **0.5756** |
|  | andes | 0.8736 | 0.5021 | 0.3286 | 0.7101 | 0.8703 | **0.9107** |
|  | pigs | 0.9992 | 0.1031 | **1** | 0.9992 | **1** | **1** |

Share and Cite

MDPI and ACS Style

Dang, Y.; Gao, X.; Wang, Z. Stochastic Fractal Search for Bayesian Network Structure Learning Under Soft/Hard Constraints. Fractal Fract. 2025, 9, 394. https://doi.org/10.3390/fractalfract9060394
