Article

Dynamics of Fuzzy-Rough Cognitive Networks

by
István Á. Harmati
Department of Mathematics and Computational Sciences, Széchenyi István University, 9026 Győr, Hungary
Symmetry 2021, 13(5), 881; https://doi.org/10.3390/sym13050881
Submission received: 1 April 2021 / Revised: 2 May 2021 / Accepted: 11 May 2021 / Published: 15 May 2021
(This article belongs to the Special Issue Computational Intelligence and Soft Computing: Recent Applications)

Abstract

Fuzzy-rough cognitive networks (FRCNs) are interpretable recurrent neural networks, primarily designed for solving classification problems. Their structure is simple and transparent, while their performance is comparable to that of well-known black-box classifiers. Although there are many applications of fuzzy cognitive maps and, recently, of FRCNs, only a very limited number of studies discuss the theoretical issues of these models. In this paper, we examine the behaviour of FRCNs viewing them as discrete dynamical systems. It will be shown that their mathematical properties highly depend on the size of the network, i.e., there are structural differences between the long-term behaviour of FRCN models of different sizes, which may influence the performance of these modelling tools.

1. Introduction

Artificial Intelligence (AI) models and methods are part of our lives. However, most AI techniques are black boxes in the sense that they do not explain how and why they arrived at a specific conclusion. Explainable AI tries to overcome this situation by developing models with interpretable semantics and transparency [1,2]. Fuzzy Cognitive Maps (FCMs) are one of the earliest explainable AI models, introduced by B. Kosko [3]. FCMs are recurrent neural networks employing weighted causal relations between the model's concepts. Due to their modelling ability and interpretability, these models have a wide range of applications [4,5].
Although FCMs have an enormous number of applications, only a few studies are devoted to the analytical, rather than empirical, discussion of their behaviour. Boutalis et al. [6] examined the existence and uniqueness of fixed points of FCMs. Lee and Kwon studied the stability of FCMs using the Lyapunov method [7]. Knight et al. [8] analyzed FCMs with linear and sigmoid transfer functions. In [9], the authors generalized the findings of [6] to FCMs with an arbitrary sigmoid function. All of these studies arrived at the conclusion (although in different forms) that when the parameter of the sigmoid threshold function is small enough, the FCM converges to a unique fixed point, regardless of the initial activation values.
The hybridisation of rough set theory and fuzzy set theory [10,11] provides a not only promising, but fruitful combination of different methods of handling and modelling uncertainty [12,13].
The application areas encompass a wide variety of sciences, so only a few of them are mentioned here, without any claim of completeness. Fuzzy rough sets and fuzzy rough neural networks have been applied to feature selection problems [14], and evolutionary fuzzy rough neural networks have been developed for stock prediction [15]. Fuzzy rough set models are used in multi-criteria decision-making in [16]. Classification tasks have been solved by fuzzy rough granular neural networks in [17]. The combination of unsupervised convolutional neural networks and fuzzy-rough C-means was used effectively for clustering large-scale image datasets in [18]. The environmental impact of a renewable energy system was estimated using fuzzy rough sets in [19]. The interval-valued fuzzy-rough based Delphi method was applied for evaluating the siting criteria of offshore wind farms in [20]. Another current direction is the fusion of neutrosophic theory and rough set theory [21]. An example of its application is the emission-based prioritization of bridge maintenance projects [22]. Nevertheless, from the perspective of this paper, fuzzy rough granular networks [23] are the most exciting applications of the synergy of fuzzy and rough theories.
Granular Computing [24] uses information granules, such as classes, clusters, subsets, etc., just as humans do. Granular Neural Networks (GNNs) [25] create a synergy between the celebrated neural networks and granular computing [26]. Rough Cognitive Networks (RCNs) are GNNs, introduced by Nápoles et al. [27], combining the abstract semantics of the three-way decision model with the neural reasoning mechanism of Fuzzy Cognitive Maps to address numerical decision-making problems. The information space is discretized (granulated) using Rough Set Theory [28,29], which has many other interesting applications [30,31,32,33]. According to simulation results, the RCN was capable of outperforming standard classifiers. On the other hand, learning the similarity threshold parameter had a significant computational cost.
Rough Cognitive Ensembles (RCEs) were proposed to overcome this computational burden [34]. An RCE employs a collection of Rough Cognitive Networks as base classifiers, each operating at a different granularity level. This allows suppressing the requirement of learning a similarity threshold. Nevertheless, this model is still very sensitive to the similarity threshold upon which the rough information granules are built.
Fuzzy-Rough Cognitive Networks (FRCNs) were introduced by Nápoles et al. [35]. The main feature of FRCNs is that the crisp information granules are replaced with fuzzy-rough granules. Based on simulation results, FRCNs show performance comparable to the best black-box classifiers.
Vanloffelt et al. [36] studied the contributions of building blocks to the FRCNs' performance via empirical simulations with several different network topologies. They concluded that the connections between positive neurons might not be necessary to maintain the performance of FRCNs. The theoretical study by Concepción et al. [37] discussed the contribution of negative and boundary neurons. Moreover, they arrived at the conclusion that negative neurons have no impact on the decision, and that the ranking between positive neurons remains invariant during the whole reasoning process.
Besides the results presented in [37], this paper was motivated by the fact that only a few studies discuss the behaviour of cognitive networks from a strict mathematical point of view. Nevertheless, such studies may provide us with information about what we can or cannot achieve with these models. Analyzing the behaviour and contribution of the building blocks unveils the exact role of the components of the complex structure: which part is crucial, which one is unnecessary, etc. In this paper, we neither develop nor implement another new fuzzy-rough model. Instead, we analyze the behaviour of FRCNs, which are comparable in performance to the best black-box classifiers. Because of their proven competitiveness [35], there is no need for further model verification and validation.
In the current paper, the dynamical behaviour of fuzzy-rough cognitive networks is examined. The main contributions are the following: first, we show that stable positive neurons have at most two different activation values for any initial activation vector. Then we show that a certain point with equal coordinates (called the trivial fixed point) is always a fixed point, although not always a fixed point attractor. Furthermore, a condition for the existence of a unique, globally attractive fixed point is also stated. A complete analysis of the dynamics of positive neurons for two and three decision classes is provided. Finally, we show that for a higher number of classes, the occurrence of limit cycles is a necessity and the vast majority of initial activation values lead to oscillation. The rest of the paper is organized as follows. In Section 2, we recall the construction of fuzzy-rough cognitive networks and overview the existing results about their behaviour. In Section 3, a summary of the mathematical background necessary for further investigation of the dynamics of FRCNs is provided, including contraction mappings and elements of bifurcation theory. Section 4 presents general results about the dynamics of positive neurons, a condition for a unique fixed point attractor and a refinement of some findings of [37]. Section 5 introduces size-specific results for FRCNs, providing a complete description for the case of two and three decision classes and pointing out that over a specific size, oscillating behaviour will naturally be present. Section 6 discusses the relation of the behaviour of positive neurons to the final decision class of FRCNs. The paper ends with a short conclusion in Section 7.

2. Fuzzy-Rough Cognitive Networks

In this section, we briefly summarize the basic notions of fuzzy-rough cognitive networks and the findings about their dynamical properties reported in the literature. It is based on the works [35,36,37].

2.1. Construction of FRCNs

Building up FRCNs includes the following three steps: information space granulation, network construction and finally, network exploitation.
Information space granulation means dividing the available information into granules. Let $U$ denote the universe of discourse and let $X = \{X_1, \ldots, X_N\}$, where $X_c \subseteq U$ is the subset containing all objects assigned (labelled) to decision class $D_c$. The membership degree of $x \in U$ to $X_c$ is computed in a binary way:

$$\mu_{X_c}(x) = \begin{cases} 1, & x \in X_c \\ 0, & x \notin X_c. \end{cases}$$
The membership function $\mu_P(y,x)$ is the next component, using the similarity degree between two instances $x$ and $y$:

$$\mu_P(y,x) = \mu_{X_c}(x)\,\varphi(x,y) = \mu_{X_c}(x)\,\big(1 - \delta(x,y)\big),$$

where $\mu_P : U \times U \to [0,1]$ is the membership degree of $y$ to $X_c$, given that $x$ belongs to $X_c$ (in this sense, it is a conditional membership degree). It is composed of the previously defined binary membership function $\mu_{X_c}(x)$ and the similarity degree $\varphi(x,y)$. The latter is based on a normalized distance measure $\delta(x,y)$. The membership functions of the lower and the upper approximation of any fuzzy set $X_c$ are, respectively:
$$\mu_{\underline{P}(X_c)}(x) = \min\Big\{ \mu_{X_c}(x),\ \inf_{y \in U} I\big(\mu_P(y,x), \mu_{X_c}(x)\big) \Big\},$$

$$\mu_{\overline{P}(X_c)}(x) = \max\Big\{ \mu_{X_c}(x),\ \sup_{y \in U} T\big(\mu_P(y,x), \mu_{X_c}(x)\big) \Big\}.$$
Here $I$ denotes an implication function and $T$ denotes a conjunction function. As a next step, we calculate the membership functions associated with the positive, negative and boundary regions:
$$\mu_{POS(X_c)}(x) = \mu_{\underline{P}(X_c)}(x),$$
$$\mu_{NEG(X_c)}(x) = 1 - \mu_{\overline{P}(X_c)}(x),$$
$$\mu_{BND(X_c)}(x) = \mu_{\overline{P}(X_c)}(x) - \mu_{\underline{P}(X_c)}(x).$$
After determining the membership functions of the decision classes, we construct a network using four types of neurons:
  • $D = \{D_1, \ldots, D_N\}$ is the set of decision neurons,
  • $P = \{P_1, \ldots, P_N\}$ is the set of positive neurons,
  • $N = \{N_1, \ldots, N_N\}$ is the set of negative neurons,
  • $B = \{B_1, \ldots, B_N\}$ is the set of boundary neurons.
The recurrent neural network has $N$ output neurons (decision neurons). The number of input neurons is between $2N$ and $3N$, depending on the number of non-empty boundary regions. The 'wiring' between the neurons is based on the following steps (see Algorithm 1). Positive, negative and boundary neurons influence themselves with intensity 1. Each positive neuron influences the decision neuron related to it with intensity 1; moreover, these neurons act on the other positive and decision neurons with intensity $-1$. Finally, if two decision classes share non-empty boundary regions, then each boundary neuron influences both decision neurons with intensity 0.5. As is clear from this setting, the weights of an FRCN are not determined by learning methods; they are based on the semantic relations between the information granules.
Algorithm 1: The construction procedure of the fuzzy-rough cognitive network.
The computation of the initial activation values ($A_i(0)$) of the neurons is based on the similarity degree between the new object $y$ and all $x \in U$, and on the membership degree of every $x$ to the positive, negative and boundary regions. The initial activation value of the decision neurons is zero.
Positive neurons ($P_i$): $\quad A_i(0) = \dfrac{\sum_{x \in U} T\big(\varphi(x,y),\, \mu_{POS(X_c)}(x)\big)}{\sum_{x \in U} \mu_{POS(X_c)}(x)}$

Negative neurons ($N_i$): $\quad A_i(0) = \dfrac{\sum_{x \in U} T\big(\varphi(x,y),\, \mu_{NEG(X_c)}(x)\big)}{\sum_{x \in U} \mu_{NEG(X_c)}(x)}$

Boundary neurons ($B_i$): $\quad A_i(0) = \dfrac{\sum_{x \in U} T\big(\varphi(x,y),\, \mu_{BND(X_c)}(x)\big)}{\sum_{x \in U} \mu_{BND(X_c)}(x)}$
After determining the initial values, the next step is network exploitation, by the iteration rule
$$A_i(t+1) = f\left( \sum_{j=1}^{N} w_{ij} A_j(t) \right),$$

or, in matrix-vector form, $A(t+1) = f(WA(t))$, where $f$ is applied coordinate-wise. Here $w_{ij}$ is the weight of the connection from neuron $C_j$ to neuron $C_i$. The function $f(x) = \frac{1}{1 + e^{-\lambda x}}$ keeps the values in the $(0,1)$ range. For high values of $\lambda$, $f(x)$ tends to behave like the step function, while for very small values of $\lambda$ the activation values converge to a unique fixed point attractor, regardless of the initial activation values, producing the same output for every input. The choice $\lambda = 5$ is widely used in FCM applications, and FRCNs employ this parameter value, too. The iteration stops when
  • the activation values converge to a fixed point attractor (in practice, this means that the consecutive values no longer show appreciable change), or
  • a predefined maximal number of iterations is reached.
Finally, the label of the decision neuron with the highest activation value is assigned to the classified object. Figure 1 shows the FRCN's structure for a binary classification problem.
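To make the exploitation phase concrete, the following sketch simulates the reasoning rule $A(t+1) = f(WA(t))$ for a toy binary problem. It is only an illustration, not the reference implementation of [35]: the weight matrix (including the assumed $-1$ connection from each negative neuron to its decision neuron), the initial activation values and the stopping tolerance are all made up for demonstration.

```python
import numpy as np

def sigmoid(x, lam=5.0):
    """Threshold function f(x) = 1 / (1 + exp(-lambda * x))."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def exploit(W, A0, lam=5.0, max_iter=100, tol=1e-6):
    """Iterate A(t+1) = f(W A(t)) until the change becomes negligible
    or the maximal number of iterations is reached."""
    A = np.asarray(A0, dtype=float)
    for _ in range(max_iter):
        A_next = sigmoid(W @ A, lam)
        if np.max(np.abs(A_next - A)) < tol:
            return A_next
        A = A_next
    return A

# Toy binary problem (N = 2); neuron order: D1, D2, B12, N1, N2, P1, P2.
# Row i lists the weights of the connections arriving at neuron i.
W = np.array([
    [0, 0, 0.5, -1,  0,  1, -1],   # D1 <- B12 (0.5), N1 (-1, assumed), P1 (+1), P2 (-1)
    [0, 0, 0.5,  0, -1, -1,  1],   # D2 <- B12 (0.5), N2 (-1, assumed), P1 (-1), P2 (+1)
    [0, 0, 1.0,  0,  0,  0,  0],   # B12 self-loop
    [0, 0, 0,  1.0,  0,  0,  0],   # N1 self-loop
    [0, 0, 0,    0, 1.0, 0,  0],   # N2 self-loop
    [0, 0, 0,    0,  0, 1.0, -1],  # P1 <- P1 (+1), P2 (-1)
    [0, 0, 0,    0,  0, -1, 1.0],  # P2 <- P1 (-1), P2 (+1)
])

A0 = [0.0, 0.0, 0.4, 0.7, 0.3, 0.8, 0.2]   # decision neurons start at zero
A = exploit(W, A0)
print("predicted class:", "D1" if A[0] > A[1] else "D2")
```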

2.2. Preliminary Results on the Dynamics of FRCNs

In this subsection, we briefly summarize the main contributions of [37] regarding the dynamics of fuzzy-rough cognitive networks. For detailed explanations and proofs, see [37].
  • Negative and boundary neurons always converge to a unique fixed value. This value depends on parameter λ , but the convergence and uniqueness are independent of λ .
  • The ranking between positive neurons remains invariant during the recurrent reasoning. As we will see in Section 5, although this statement is absolutely true in the strict mathematical sense, from the practical point of view it has, in some cases, very limited applicability.
  • In an FRCN with $N$ decision classes, there will always be $N-1$ positive neurons with activation values less than or equal to $1/2$ (after at least one iteration step).
  • In an FRCN there will be at most one neuron with activation value higher than or equal to $1/2$ (after at least one iteration step).
Consider the updating rule $A_i(t+1) = f\left( \sum_{j=1}^{N} w_{ij} A_j(t) \right)$ (see Equation (11)), where $f$ is the sigmoid function with parameter $\lambda$. Assuming that the positive neurons reach stable states, for every $i$ it holds that $P_i(t+1) = P_i(t) = f\left( \sum_{j=1}^{N} w_{ij} P_j(t) \right)$. Recall that each positive neuron is influenced by the other positive neurons with weight $-1$, while it influences itself with weight $1$ (i.e., if $i = j$, then $w_{ij} = 1$, otherwise $w_{ij} = -1$). Consequently, we have
$$P_i(t) = P_i(t+1) = \frac{1}{1 + e^{-\lambda \left( P_i(t) - \sum_{j=1, j \neq i}^{N} P_j(t) \right)}} = \frac{1}{1 + e^{-\lambda \left( 2P_i(t) - \sum_{j=1}^{N} P_j(t) \right)}}.$$
Now let us further investigate the equation $P_i(t) = \frac{1}{1 + e^{-\lambda \left( 2P_i(t) - \sum_{j=1}^{N} P_j(t) \right)}}$. Expressing $\sum_{j=1}^{N} P_j(t)$, we get the following equation:
$$\sum_{j=1}^{N} P_j(t) = \frac{1}{\lambda} \ln \frac{1 - P_i(t)}{P_i(t)} + 2 P_i(t),$$
on the left hand side we find the sum of all activation values, which depends on the value of P i ( t ) (recall that P i ( t ) is the stabilized activation value of any positive neuron). The sum of the activation values is a function of P i ( t ) , of course. Define the following real function:
$$s(x) = \frac{1}{\lambda} \ln \frac{1-x}{x} + 2x.$$
From the behaviour of s ( x ) , the authors derived some properties of the positive neurons:
  • If $\lambda \le 2$, then $s(x)$ is monotonically decreasing, thus a specific value can be produced by only a single input value $x$. It means that if $\lambda \le 2$ and the positive neurons are stable (converge to a fixed point), then they all have the same activation value.
  • If λ > 2 , then s ( x ) produces the same value for at most three different input values in ( 0 , 1 ) . It means that if the vector of positive neurons converges to a fixed point, then this vector has at most three different coordinate values.
In Section 4 and Section 5, we refine these statements and introduce some additional results regarding the dynamics of FRCNs.
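These properties of $s(x)$ are easy to probe numerically. The following sketch (with an assumed grid resolution and tolerance, used purely for illustration) counts how many disjoint preimages a given value of $s$ has on $(0,1)$:

```python
import numpy as np

def s(x, lam):
    """s(x) = (1/lambda) * ln((1 - x)/x) + 2x, cf. Equation (14)."""
    return np.log((1.0 - x) / x) / lam + 2.0 * x

def count_preimages(target, lam, n=200_000, tol=1e-3):
    """Count the disjoint runs of grid points in (0,1) where s(x) is tol-close to target."""
    xs = np.linspace(1e-4, 1.0 - 1e-4, n)
    hit = np.abs(s(xs, lam) - target) < tol
    return int(hit[0] + np.sum(hit[1:] & ~hit[:-1]))

print(count_preimages(1.0, lam=5.0))   # 3: for lambda > 2 a value can have three preimages
print(count_preimages(1.0, lam=1.5))   # 1: for lambda <= 2, s is strictly decreasing
```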

3. Mathematical Background

The updating rule of FRCNs suggests handling them as discrete dynamical systems. In this section, we briefly summarize the most important notions and methods used in the forthcoming sections.

3.1. Contraction Mapping

In the network exploitation phase, we apply the iteration rule of Equation (11) again and again until the activation values stabilize (or the number of iterations reaches the predefined maximum). If the activation values stabilize (i.e., arrive at an equilibrium state), then the difference between the outputs of two consecutive iteration steps becomes smaller and smaller. In other words, the iteration contracts a subset of the state space (or the whole state space) into a single point. The following definition provides a strict mathematical description of this property.
Definition 1
(see [38], p. 220). Let ( X , d ) be a metric space, with metric d. If φ maps X into X and if there is a number c < 1 such that
$$d\big(\varphi(x), \varphi(y)\big) \le c\, d(x,y)$$
for all x , y X , then φ is said to be a contraction of X into X.
If the iteration reaches an equilibrium state, then this state will not change by applying the updating rule. Mathematically speaking, it is a fixed point of the mapping generating the iteration. The following famous theorem establishes connection between the contraction mapping and a unique fixed point.
Theorem 1
(Contraction mapping theorem or Banach’s fixed point theorem, see [38], pp. 220–221). If X is a complete metric space, and if φ is a contraction of X into X, then there exists one and only one x X such that φ ( x ) = x .
The proof of this theorem is constructive (see [38], p. 221) and offers a straightforward way to find this unique fixed point. We only have to pick an arbitrary element of $X$ and apply the mapping $\varphi$ again and again. The limit of the sequence $x_n = \varphi(x_{n-1})$ will be the unique fixed point of $\varphi$: $\lim_{n \to \infty} x_n = x^*$ and $\varphi(x^*) = x^*$.
An arbitrary fixed point $x^*$ is said to be asymptotically stable if, starting the iteration close enough to $x^*$, the limit will be $x^*$. Moreover, the fixed point $x^*$ is said to be globally asymptotically stable if, starting the iteration from any element of the state space, the limit will be $x^*$. Based on Theorem 1 and its constructive proof, it is clear that the unique fixed point of a contraction is globally asymptotically stable.
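The constructive proof translates directly into a simple iteration scheme. The sketch below is a generic illustration with an arbitrarily chosen example map (not one taken from the FRCN model): $\varphi(x) = \cos(x)/2$ is a contraction of $\mathbb{R}$ into $\mathbb{R}$ with constant $c = 1/2$, so the iteration reaches the same fixed point from every starting value.

```python
import numpy as np

def iterate_to_fixed_point(phi, x0, tol=1e-10, max_iter=1000):
    """Apply x_{n+1} = phi(x_n) until two consecutive iterates are tol-close."""
    x = x0
    for _ in range(max_iter):
        x_next = phi(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# phi(x) = cos(x)/2 is a contraction of R into R with c = 1/2, since |phi'(x)| <= 1/2.
phi = lambda x: np.cos(x) / 2.0
for x0 in (-10.0, 0.0, 7.5):
    print(iterate_to_fixed_point(phi, x0))   # the same limit from every starting point
```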

3.2. Elements of Bifurcation Theory

The dynamics of a discrete dynamical system may change if its parameters are varied. A qualitative change of the dynamical behaviour, e.g., a transition from a unique stable fixed point to multiple fixed points or to oscillation, is called a bifurcation, and the corresponding critical parameter value is called a bifurcation point.
The detailed description of the dynamics of FRCNs requires the application of elements of bifurcation theory. Here, only the most important notions are listed. For more details, see [39,40].
Consider a discrete-time dynamical system depending on only one parameter $\lambda$ ($G : \mathbb{R}^n \to \mathbb{R}^n$):
$$x_{k+1} = G(x_k, \lambda), \qquad x_k \in \mathbb{R}^n,\ \lambda \in \mathbb{R},$$
where the function $G$ is smooth with respect to $x$ and $\lambda$. Let us assume that $x_0$ is a fixed point of the mapping with parameter $\lambda_0$. The local stability (resistance against small perturbations) of the fixed point depends on the eigenvalues of the Jacobian evaluated at $x_0$ ($J_G(x_0)$). If the Jacobian has no eigenvalues on the unit circle, $x_0$ is said to be hyperbolic. Hyperbolic fixed points can be categorized according to the eigenvalues of $J_G(x_0)$:
  • If all of the eigenvalues lie inside the unit circle (i.e., the absolute value of every eigenvalue is less than one), then it is an asymptotically stable fixed point. In other words, this fixed point attracts the space in every direction in its (sometimes very small) neighborhood. Consequently, its basin of attraction is an $n$-dimensional subset of the $n$-dimensional space.
  • If there are some eigenvalues (at least one) with absolute value greater than one, and there are some (at least one) eigenvalues with absolute value less than one, then it is a saddle point. It means that this fixed point may attract points of the space in some, but not every direction in its neighborhood. Consequently, the dimension of its basin of attraction is less than n.
  • If all of the eigenvalues have absolute value greater than one, then it is an unstable (repeller) fixed point.
In FCM theory and, similarly, in FRCN terminology, a stable fixed point is a fixed point that can be the limit of the iteration. In this sense, both stable and saddle fixed points are considered stable in the FCM (FRCN) sense. Nevertheless, their dynamical behaviour can be very different. We will see in Section 5 that different types of fixed points have significantly different sizes of basins of attraction. In other words, some fixed points are less important than others.
The simplest ways in which hyperbolicity can be violated are the following:
  • A simple positive eigenvalue crosses the unit circle at $\alpha = 1$. The bifurcation associated with this scenario is called a saddle-node bifurcation. It is the birth of new fixed points.
  • A simple negative eigenvalue crosses the unit circle at $\alpha = -1$. This causes a period-doubling bifurcation, the birth of limit cycles.
  • A pair of complex conjugate eigenvalues reaches the unit circle at $\alpha_{1,2} = e^{\pm i\phi}$, $\phi \in (0, \pi)$. This is the so-called Neimark-Sacker bifurcation, which causes the occurrence of a closed invariant curve.
It will be shown in Section 5 that, depending on the size of the FRCN, different types of bifurcations determine the main dynamics of the system.

4. General Results on the Dynamics of Positive Neurons

In this section, we introduce some general results about the dynamics of positive neurons. Size specific results are presented in Section 5.
We start with the refinement of a result from [37]. Further investigation of the function $s(x)$ (see Equation (14)) provides more information about the possible fixed points of the positive neurons (see Figure 2). It has been shown that there is at most one positive neuron with activation value higher than $1/2$. If $s(x) > 1$, then a specific value may be produced by at most three different values of $x$. However, two of these values are higher than $1/2$, thus only one of them can appear in the activation vector. It means that one coordinate of the activation vector is greater than $1/2$ and the remaining ones are less than $1/2$ and have equal values.
Consider now the case when $s(x) < 1$. Observe that the graph of $s(x)$ is symmetrical about the point $(1/2, 1)$ (this can be easily verified analytically). Let us choose a specific value $\gamma = s(x)$ such that the horizontal line $y = \gamma$ has three intersection points with the graph. Denote the first coordinates of these points by $x_1$, $x_2$, $x_3$ ($x_1 < x_2 < x_3$). Using the symmetry of the function, we can conclude that $x_1 > 1 - x_3$. Consequently, $x_1 + x_3 > 1$. Since the sum of the activation values is less than 1 ($s(x) < 1$), it follows that there can be at most two different values ($x_1$ and $x_2$) in the activation vector for any given $s(x)$.
Summarizing this short argument, if the activation vector of positive neurons converges to a fixed point, then it may have at most two different coordinate values.
The iteration rule for updating the neurons' activation values has the form $A(t+1) = f(WA(t))$, where the sigmoid is applied coordinate-wise and $W$ is the connection matrix of the network. Based on the construction of the network (see Algorithm 1), it has the following block form:
$$W = \begin{pmatrix} O & W_B & -I & W_D \\ O & I & O & O \\ O & O & I & O \\ O & O & O & W_P \end{pmatrix},$$
where the row and column blocks correspond to the neuron sets $D$, $B$, $N$ and $P$ (in this order), and $O$ and $I$ denote the zero matrix and the identity matrix, respectively. $W_B$ describes the connections from boundary to decision neurons (it contains 0s and 0.5s, whose positions depend on the non-empty boundary regions), $W_P$ describes the connections between positive neurons (if $i = j$, then $w_{ij} = 1$, else $w_{ij} = -1$), and $W_D = W_P$ contains the connections from positive neurons to decision neurons.
Because of the upper block-triangular structure, instead of dealing with the whole matrix, we can work with the blocks. It has been proved in [37] that the activation values of the negative and boundary neurons converge to the same unique value, which depends on $\lambda$ but is independent of the initial activation values. Positive neurons influence themselves, each other and the decision neurons, but do not receive input from the other sets of neurons. Their activation values are propagated to the decision neurons. In the long run, when the neurons reach a stable state (or the iteration is stopped by reaching the maximal number of iterations), the propagated value is their stable (or final) state. In the following, we examine the long-term behaviour of positive neurons.
Lemma 1.
For every λ > 0 and every number of decision classes N, there always exists a fixed point of the positive neurons, whose coordinates are the same. Nevertheless, this fixed point is not always a fixed point attractor.
Proof. 
Consider the fixed point equation for every $1 \le j \le N$:
$$P = f(W_P P)$$
$$\begin{pmatrix} P_1 \\ \vdots \\ P_N \end{pmatrix} = f\left( W_P \begin{pmatrix} P_1 \\ \vdots \\ P_N \end{pmatrix} \right)$$
In coordinate-wise form:
$$P_j = \frac{1}{1 + e^{-\lambda \left( P_j - \sum_{i=1, i \neq j}^{N} P_i \right)}} = \frac{1}{1 + e^{-\lambda \left( 2P_j - \sum_{i=1}^{N} P_i \right)}}$$
If $P_j = x$ for every $1 \le j \le N$, then it simplifies to the following equation:
$$x = \frac{1}{1 + e^{-\lambda(2-N)x}} = \frac{1}{1 + e^{\lambda(N-2)x}}$$
We show that there always exists a unique solution to this equation. Let us introduce the function
$$g(x) = x - \frac{1}{1 + e^{\lambda(N-2)x}}.$$
The function $g(x)$ is continuous and differentiable; moreover, $g(0) = -0.5 < 0$ and $g(1) = 1 - \frac{1}{1 + e^{\lambda(N-2)}} > 0$, thus it has at least one zero in $(0,1)$. According to Rolle's theorem, between two zeros of a differentiable function its derivative has a zero. The derivative is
$$g'(x) = 1 + \frac{\lambda (N-2)\, e^{\lambda(N-2)x}}{\left( 1 + e^{\lambda(N-2)x} \right)^2},$$
which is always positive. It means that there is exactly one zero of g ( x ) in ( 0 , 1 ) . Consequently, we have shown that for any given λ > 0 and N, there is exactly one fixed point of the positive neurons with equal coordinates. There may be other fixed points, but their coordinates are not all the same. □
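The unique zero of $g(x)$, i.e., the common coordinate of the trivial fixed point, can be located numerically, for instance by bisection, since $g(0) < 0 < g(1)$ and $g$ is strictly increasing. The sketch below uses illustrative values of $\lambda$ and $N$:

```python
import math

def g(x, lam, N):
    """g(x) = x - 1/(1 + exp(lambda * (N - 2) * x)); its unique zero in (0,1)
    is the common coordinate of the trivial fixed point."""
    return x - 1.0 / (1.0 + math.exp(lam * (N - 2) * x))

def trivial_fixed_point(lam, N, tol=1e-12):
    """Bisection on (0,1): g(0) = -0.5 < 0, g(1) > 0 and g is strictly increasing."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid, lam, N) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(trivial_fixed_point(lam=5.0, N=2))   # 0.5 for N = 2
print(trivial_fixed_point(lam=5.0, N=3))   # ~0.2355, cf. Section 5.2
```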
The following lemma plays a crucial role in the proof of Theorem 2 and in the examination of the Jacobian of the iteration mapping.
Lemma 2.
Let W P be an N × N matrix with the following entries:
$$w_{ij} = \begin{cases} 1 & \text{if } i = j \\ -1 & \text{if } i \neq j \end{cases}$$
Then the eigenvalues of $W_P$ are $2-N$ (with multiplicity one) and $2$ (with multiplicity $N-1$).
Proof. 
Basic linear algebra: $W_P = 2I - J$, where $J$ is the all-ones matrix, whose eigenvalues are $N$ (with multiplicity one) and $0$ (with multiplicity $N-1$). □
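A quick numerical check of Lemma 2 (the value of $N$ below is arbitrary):

```python
import numpy as np

N = 6
W_P = 2 * np.eye(N) - np.ones((N, N))    # diagonal entries 1, off-diagonal entries -1
print(np.sort(np.linalg.eigvalsh(W_P)))  # [2 - N, 2, 2, ..., 2]
```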
Theorem 2.
Consider a fuzzy cognitive map (recurrent neural network) with sigmoid transfer function $f(x) = 1/(1 + e^{-\lambda x})$ and with weight matrix $W_P$ whose entries are
$$w_{ij} = \begin{cases} 1 & \text{if } i = j \\ -1 & \text{if } i \neq j \end{cases}$$
If
$$\lambda < \frac{4}{\max\{2,\ |2-N|\}},$$
then it has exactly one fixed point. Moreover, this fixed point is a global attractor, i.e., the iteration starting from any initial activation vector ends at this point.
Proof. 
We are going to show that if the condition of the theorem is fulfilled, then the mapping $P \mapsto f(W_P P)$ is a contraction; thus, according to Banach's theorem, it has exactly one fixed point and this fixed point is globally asymptotically stable, i.e., iterations starting from any initial vector arrive at this fixed point. Let us choose two different vectors, $P$ and $P'$. Then
$$\begin{aligned}
\left\| f(W_P P) - f(W_P P') \right\|_2 &= \left( \sum_{i=1}^{N} \left( f\Big( \sum_{j=1}^{N} w_{ij} p_j \Big) - f\Big( \sum_{j=1}^{N} w_{ij} p'_j \Big) \right)^2 \right)^{1/2} \\
&\le \left( \sum_{i=1}^{N} \left( \frac{\lambda}{4} \right)^2 \left( \sum_{j=1}^{N} w_{ij} p_j - \sum_{j=1}^{N} w_{ij} p'_j \right)^2 \right)^{1/2} = \frac{\lambda}{4} \left( \sum_{i=1}^{N} \left( \sum_{j=1}^{N} w_{ij} (p_j - p'_j) \right)^2 \right)^{1/2} \\
&= \frac{\lambda}{4} \left\| W_P (P - P') \right\|_2 = \frac{\lambda}{4} \frac{\left\| W_P (P - P') \right\|_2}{\left\| P - P' \right\|_2} \left\| P - P' \right\|_2 \\
&\le \frac{\lambda}{4} \left\| W_P \right\|_2 \left\| P - P' \right\|_2.
\end{aligned}$$
Here the first inequality comes from the fact that the derivative of the sigmoid function $f(x)$ is less than or equal to $\lambda/4$, so $f(x)$ is Lipschitzian with constant $\lambda/4$, while the second inequality comes from the definition of the induced matrix norm. Since $W_P$ is a real, symmetric matrix, its spectral norm ($\|\cdot\|_2$) equals the maximal absolute value of its eigenvalues. By Lemma 2, $\|W_P\|_2 = \max\{2,\ |2-N|\}$. According to the definition of a contraction (Equation (15)), if the coefficient of $\|P - P'\|_2$ is less than one, then the mapping is a contraction, and by Theorem 1 it has exactly one fixed point, which is globally asymptotically stable. The inequality in the theorem follows by a simple rearrangement:
$$\frac{\lambda}{4} \|W_P\|_2 = \frac{\lambda}{4} \max\{2,\ |2-N|\} < 1 \iff \lambda < \frac{4}{\max\{2,\ |2-N|\}}. \qquad \square$$
An immediate corollary of Theorem 2 and Lemma 1 is that if there is a unique globally attracting fixed point, then its coordinates are equal. We will refer to the fixed point with equal coordinates as the trivial fixed point. The whole complex behaviour of positive neurons (and, in this way, of fuzzy-rough cognitive networks) evolves from this trivial fixed point via bifurcations (see the flowchart in Figure 3 for the way to the first bifurcation). In Section 5, we show that FRCNs of different sizes (different numbers of decision classes $N$) may show significantly different qualitative behaviour.
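Theorem 2 can also be checked empirically: for $\lambda$ below the bound, iterations started from different random activation vectors all end at the same point with equal coordinates. The following sketch is only a numerical sanity check with assumed values of $N$, the random seed and the iteration budget:

```python
import numpy as np

N = 5
W_P = 2 * np.eye(N) - np.ones((N, N))
lam = 0.9 * 4.0 / max(2, abs(2 - N))      # 10% below the bound of Theorem 2

rng = np.random.default_rng(0)
limits = []
for _ in range(5):
    P = rng.random(N)                     # random initial activation vector
    for _ in range(2000):
        P = 1.0 / (1.0 + np.exp(-lam * (W_P @ P)))
    limits.append(P)

print(np.round(np.array(limits), 6))      # identical rows with equal coordinates
```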

5. Dynamics of Positive Neurons

First, we provide the Jacobian at the trivial fixed point. In general (except for the case $N = 2$), this fixed point is a function of $\lambda$ and $N$. Let us denote the common coordinate of the trivial fixed point by $p^*$, and the fixed point itself by $P^*$. Then the $(i,j)$ entry of the Jacobian of the mapping $f(W_P P)$ at this point, using the fact that $f(W_P P^*) = P^*$ and that for the sigmoid function $f' = \lambda f (1 - f)$, is
$$\frac{\partial f_i}{\partial p_j}\bigg|_{P = P^*} = \lambda\, f(W_P P^*)_i \big( 1 - f(W_P P^*)_i \big)\, w_{ij} = \lambda\, p^* (1 - p^*)\, w_{ij}.$$
The whole Jacobian matrix evaluated at the trivial fixed point is the following:
$$J_{P^*} = \lambda\, p^* (1 - p^*)\, W_P.$$
Its eigenvalues are $\lambda p^*(1-p^*)$ times the eigenvalues of $W_P$, i.e., $(2-N)\lambda p^*(1-p^*)$ and $2\lambda p^*(1-p^*)$. As the value of $\lambda$ increases, at a certain point the absolute value of the eigenvalue with the highest modulus reaches one, the trivial fixed point loses its global stability and a bifurcation occurs. The type of this bifurcation has a great effect on the further evolution and dynamics of the system. Based on the eigenvalues of $W_P$, we see that a Neimark-Sacker bifurcation does not occur here, but saddle-node and period-doubling bifurcations do play an important role.
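Combining the bisection for the trivial fixed point (Lemma 1) with these eigenvalues, the critical parameter value can be located numerically. The sketch below (scan resolution and the tested values of $N$ are assumptions) reports the smallest $\lambda$ at which the spectral radius of the Jacobian reaches one:

```python
import math

def trivial_fp(lam, N, tol=1e-12):
    """Unique solution of x = 1/(1 + exp(lambda * (N - 2) * x)) in (0, 1), by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - 1.0 / (1.0 + math.exp(lam * (N - 2) * mid)) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def spectral_radius(lam, N):
    """Largest |eigenvalue| of the Jacobian at the trivial fixed point."""
    p = trivial_fp(lam, N)
    return max(2, abs(2 - N)) * lam * p * (1.0 - p)

for N in (2, 3, 4, 5):
    lam = 0.01
    while spectral_radius(lam, N) < 1.0:   # scan until stability is lost
        lam += 0.01
    print(N, round(lam, 2))                # ~2.0 for N = 2, ~2.29 for N = 3 (cf. Section 5)
```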

5.1. N = 2

Consider first the case when we have only two decision classes. The relations between the positive neurons can be seen in Figure 1. The weight matrix describing the connections is the following (subscript P refers to positive):
$$W_P = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
It is easy to check that the point $[0.5,\ 0.5]^T$ is always a fixed point of the mapping $f(W_P P) = P$, since
$$\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
and $f(0) = 0.5$. According to Theorem 2, if $\lambda < 2$, then it is the only fixed point; moreover, it is globally asymptotically stable, i.e., starting from any initial activation vector, the iteration will converge to this fixed point. The Jacobian of the mapping at this fixed point is
$$\lambda \cdot 0.5\,(1 - 0.5) \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} = \frac{\lambda}{4} \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
and its eigenvalues are $0$ and $\lambda/2$. When the eigenvalue $\lambda/2$ reaches $1$ (at $\lambda = 2$), a bifurcation occurs, giving birth to two new fixed points. In the following, we show that for every $\lambda > 2$ there are exactly three fixed points; moreover, these fixed points have the following coordinates: $[0.5,\ 0.5]^T$, $[x^*,\ 1-x^*]^T$ and $[1-x^*,\ x^*]^T$, where $x^*$ is a fixed point of a one-dimensional mapping described below.
Let us assume that $(x_1, x_2)^T$ is a fixed point of the mapping; then
$$\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = f\left( \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right) = \begin{pmatrix} f(x_1 - x_2) \\ f(x_2 - x_1) \end{pmatrix}$$
Since $f$ is the sigmoid function, we have $f(-x) = 1 - f(x)$; consequently,
$$x_1 = f(x_1 - x_2),$$
$$x_2 = f(x_2 - x_1) = f(-(x_1 - x_2)) = 1 - f(x_1 - x_2) = 1 - x_1.$$
So the coordinates of a fixed point are $(x_1,\ 1 - x_1)$. The first equation leads to the following fixed point equation:
$$x_1 = f(2x_1 - 1).$$
It means that the fixed points of the positive neurons can be determined by solving Equation (39). From the graphical point of view, it is easy to see that if $\lambda \le 2$, then it has exactly one solution ($x_1 = 0.5$), but if $\lambda > 2$, then there are three different solutions: $0.5$, $x^*$ and $1 - x^*$ (see Figure 4).
From the analytical viewpoint, we have to solve the equation
$$x = \frac{1}{1 + e^{-\lambda(2x - 1)}}.$$
Applying the inverse of $f(x)$ and rearranging the terms:
$$\frac{1}{\lambda} \ln \frac{1-x}{x} + 2x = 1.$$
As was pointed out in [37], if $\lambda > 2$, then the left-hand side has a local minimum at $\frac{1}{2} - \sqrt{\frac{\lambda - 2}{4\lambda}}$ with value less than one, and a local maximum at $\frac{1}{2} + \sqrt{\frac{\lambda - 2}{4\lambda}}$ with value greater than one. If $\lambda \le 2$, then the function is strictly monotonically decreasing. Using the continuity of the function, we conclude that there are exactly three solutions for every $\lambda > 2$ and a unique solution if $\lambda \le 2$ (see Figure 5).
For λ = 5 , the fixed points are ( 0.5 , 0.5 ) , ( 0.0072 , 0.9928 ) and ( 0.9928 , 0.0072 ) .
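These values are easy to reproduce: iterating Equation (39) from any starting value above $1/2$ converges to the upper fixed point $x^*$, and $1 - x^*$ gives the lower one (a small sketch; the starting value and the number of iterations are arbitrary):

```python
import math

lam = 5.0
f = lambda x: 1.0 / (1.0 + math.exp(-lam * (2.0 * x - 1.0)))

x = 0.9                       # any starting value above 1/2 works
for _ in range(200):
    x = f(x)                  # Equation (39): x <- f(2x - 1)
print(round(x, 4), round(1.0 - x, 4))   # 0.9928 and 0.0072; the third fixed point is 0.5
```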
Let us examine the basins of attraction of the three different fixed points, i.e., $\lambda > 2$ and the fixed points are $[0.5,\ 0.5]^T$, $[x^*,\ 1-x^*]^T$ and $[1-x^*,\ x^*]^T$, with $x^* > 1/2$. Consider a point $(x_1, x_2)^T$ as the initial activation vector (see Figure 6).
  • If x 1 = x 2 , then the iteration leads to the fixed point ( 0.5 , 0.5 ) , since
    $$f\left( \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \right) = \begin{pmatrix} f(x_1 - x_2) \\ f(x_2 - x_1) \end{pmatrix} = \begin{pmatrix} f(0) \\ f(0) \end{pmatrix} = \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}.$$
  • If $x_1 > x_2$, then $f(x_1 - x_2) > f(0) > f(x_2 - x_1)$, so this ordering remains invariant during the iteration process. Moreover, after the first iteration step it reduces to a one-dimensional iteration with initial value $x = f(x_1 - x_2) > 1/2$ and updating equation $x \leftarrow f(2x - 1)$. If $x > 1/2$, then the fixed point $x^*$ attracts this one-dimensional iteration. Consequently, the original two-dimensional iteration converges to $(x^*,\ 1-x^*)^T$.
  • Similarly, if $x_1 < x_2$, then the iteration ends in $(1-x^*,\ x^*)^T$.
The size of a basin of attraction can be measured by the number of its points; in the strict mathematical sense this is infinite, of course. On the other hand, the basin of the fixed point $(0.5, 0.5)$ is a one-dimensional object (a line segment), while the basins of $(x^*, 1-x^*)$ and $(1-x^*, x^*)$ are two-dimensional sets (triangles), so they are 'much bigger' sets.
In applications, we always work with a (sometimes large, but) finite number of points, based on the required and available precision. Let us define the level of granularity as the length of the subintervals obtained when we divide the unit interval into $n$ equal parts. Then the division points are $0, 1/n, 2/n, \ldots, 1$, so we have $n+1$ points. The basin of the fixed point $(0.5, 0.5)$ contains $n+1$ grid points, while the basins of the two other fixed points contain $n(n+1)/2$ points each. By increasing the number of division points, the proportions of the basins tend to zero and $1/2$, as expected:
$$\lim_{n \to \infty} \frac{n+1}{(n+1)^2} = 0,$$
$$\lim_{n \to \infty} \frac{n(n+1)/2}{(n+1)^2} = \frac{1}{2}.$$
In a certain sense, this means that the fixed points $(x^*, 1-x^*)^T$ and $(1-x^*, x^*)^T$ are more important than the fixed point $(0.5, 0.5)^T$, since many more initial activation values lead to these points.
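The proportions above can be reproduced by a direct simulation that classifies each grid point according to the fixed point its iteration reaches. The grid resolution, iteration budget and tolerance in the sketch below are assumptions:

```python
import numpy as np

lam, n = 5.0, 50
W_P = np.array([[1.0, -1.0], [-1.0, 1.0]])
f = lambda z: 1.0 / (1.0 + np.exp(-lam * z))

counts = {"(0.5, 0.5)": 0, "(x*, 1-x*)": 0, "(1-x*, x*)": 0}
for i in range(n + 1):
    for j in range(n + 1):
        P = np.array([i / n, j / n])
        for _ in range(100):
            P = f(W_P @ P)
        if abs(P[0] - 0.5) < 1e-3:
            counts["(0.5, 0.5)"] += 1
        elif P[0] > 0.5:
            counts["(x*, 1-x*)"] += 1
        else:
            counts["(1-x*, x*)"] += 1

print(counts)   # expected: n+1 = 51 and n(n+1)/2 = 1275 for each off-diagonal basin
```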

5.2. N = 3

The structure of the connections between positive neurons can be seen in Figure 7. In this case, the eigenvalues of $W_P$ are $-1$ (with multiplicity one) and $2$ (with multiplicity two). The fixed point with equal coordinates loses its global asymptotic stability when the absolute value of its larger eigenvalue reaches one. Since the positive eigenvalue has the higher absolute value, this bifurcation results in new fixed points. Nevertheless, this eigenvalue has multiplicity two, so it is not a simple bifurcation, i.e., not only a single pair of new fixed points arises, but several new fixed points. The trivial fixed point becomes a saddle point, i.e., it attracts points in a certain direction, but repels them in other directions. If we further increase the value of the parameter $\lambda$, then the absolute value of the negative eigenvalue reaches one and the trivial fixed point suffers a bifurcation again. Since the eigenvalue is $-1$, this is a period-doubling bifurcation, giving birth to a two-period limit cycle.
We show that there are three types of fixed points:
  • The trivial fixed point with equal coordinates ($FP_0$);
  • Fixed points with one high and two low values ($FP_1$);
  • Fixed points with one low and two medium coordinates ($FP_2$).
The existence of $FP_0$ is clear, as was shown by Lemma 1. As was pointed out in Section 4, the non-trivial fixed points have two different coordinate values. Let us denote these values by $x$, $x$ and $y$. Then the fixed point equation is
$$\begin{pmatrix} x \\ x \\ y \end{pmatrix} = f\left( \begin{pmatrix} 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix} \begin{pmatrix} x \\ x \\ y \end{pmatrix} \right) = \begin{pmatrix} f(-y) \\ f(-y) \\ f(y - 2x) \end{pmatrix},$$
which simplifies to the following system of equations:
$$x = f(-y)$$
$$y = f(y - 2x)$$
By substituting x, we have
$$y = f(y - 2x) = f\big( y - 2f(-y) \big) = \frac{1}{1 + e^{-\lambda \left( y - 2f(-y) \right)}} = \frac{1}{1 + e^{-\lambda \left( y - \frac{2}{1 + e^{\lambda y}} \right)}}.$$
It is again a fixed point equation, whose number of solutions depends on the value of parameter λ (see Figure 8):
  • if $\lambda \le 2.2857$ (rounded), then there is exactly one solution; it refers to the trivial fixed point ($FP_0$);
  • if $\lambda > 2.2857$, then there are three different solutions: one refers to $FP_0$, one to the $FP_1$s and one to the $FP_2$s.
Using the values of $y$, we can determine the values of $x$. Furthermore, if $y$ is high, then, based on the equation $x = f(-y) = 1 - f(y)$, we may conclude that $x$ is low. Similarly, if $y$ is low, then $x$ is medium. Finally, there are seven fixed points:
  • the fixed point with equal coordinates ($FP_0$);
  • three fixed points with one high and two low values ($FP_1$);
  • three fixed points with one low and two medium values ($FP_2$s).
For $\lambda = 5$, these fixed points are (rounded to four decimals)
  • $(0.2355,\ 0.2355,\ 0.2355)$ ($FP_0$),
  • $(0.9926,\ 0.0069,\ 0.0069)$ and its permutations ($FP_1$),
  • $(0.4904,\ 0.4904,\ 0.0076)$ and its permutations ($FP_2$).
Initial activation values lead to these fixed points as follows:
  • if $P_1(0) = P_2(0) = P_3(0)$, then the iteration converges to $FP_0$;
  • if $P_1(0) > P_2(0) \ge P_3(0)$, then the iteration converges to $FP_1$;
  • if $P_1(0) = P_2(0) > P_3(0)$, then the iteration converges to $FP_2$.
The ranking between positive neurons is preserved, in the sense that if $P_x(0) \ge P_y(0)$, then $P_x \ge P_y$. Since the number of possible outcomes is very limited (only three cases, up to permutations), some differences in the initial activation values will be magnified; for example, if the initial activation vector is $(0.3,\ 0.2,\ 0.1)$, then the iteration converges to $(0.9926,\ 0.0069,\ 0.0069)$. On the other hand, some large differences will be hidden: the initial activation vector $(1,\ 0.95,\ 0)$ leads again to $(0.9926,\ 0.0069,\ 0.0069)$.
A limit cycle occurs when the negative eigenvalue of the Jacobian computed at the trivial fixed point reaches $-1$ (at about $\lambda = 5.8695$). Similarly to the trivial fixed point, the elements of the limit cycle have equal coordinates. Let us denote these points by $(x_1, x_1, x_1)$ and $(x_2, x_2, x_2)$. The members of a two-period limit cycle are fixed points of the twice-iterated function:
$$\begin{pmatrix} x_1 \\ x_1 \\ x_1 \end{pmatrix} = f\left( W_P \begin{pmatrix} x_2 \\ x_2 \\ x_2 \end{pmatrix} \right) \quad \text{and} \quad \begin{pmatrix} x_2 \\ x_2 \\ x_2 \end{pmatrix} = f\left( W_P \begin{pmatrix} x_1 \\ x_1 \\ x_1 \end{pmatrix} \right)$$
In coordinate-wise form this provides the following system of equations:
$$x_1 = f(-x_2)$$
$$x_2 = f(-x_1)$$
From which we have
$$x_2 = f(-x_1) = f\big( -f(-x_2) \big) = \frac{1}{1 + e^{\lambda f(-x_2)}} = \frac{1}{1 + e^{\frac{\lambda}{1 + e^{\lambda x_2}}}}$$
If $\lambda \le 5.8695$, then it has a unique solution, which refers to the trivial fixed point, since this point is a fixed point of the twice-iterated function, too. If $\lambda > 5.8695$, then there are two other solutions, a low and a medium one; these are the coordinates of the two-period limit cycle. For example, for $\lambda = 7$, these points are $0.0563$ and $0.4027$.
In the general case, basins of attraction of dynamical systems are difficult to determine, and sometimes the task is analytically not feasible [41,42]; it is enough to mention the famous graph of Newton's method's basins of attraction [43]. We examined the basins of attraction of the fixed points by putting an equally spaced grid on the set of possible initial values of the positive neurons $P_1$, $P_2$ and $P_3$, and applied the grid points as initial activation values. Table 1 shows the sizes of the basins of attraction for different granularities. The results are visualized in Figure 9 and Figure 10.
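The experiment behind Table 1 can be sketched as follows; the step size, the iteration budget and the thresholds used to recognize the fixed-point types are assumptions made for this illustration, not the exact settings used for the table:

```python
import numpy as np
from itertools import product

lam, step, N = 5.0, 0.1, 3
W_P = 2 * np.eye(N) - np.ones((N, N))
f = lambda z: 1.0 / (1.0 + np.exp(-lam * z))

grid = np.arange(0.0, 1.0 + 1e-9, step)          # granularity 0.1: 11 points per axis
counts = {"FP0": 0, "FP1": 0, "FP2": 0}
for P0 in product(grid, repeat=N):
    P = np.array(P0)
    for _ in range(300):
        P = f(W_P @ P)
    if np.sum(P > 0.9) == 1:                      # one high coordinate
        counts["FP1"] += 1
    elif np.sum((P > 0.4) & (P < 0.6)) == 2:      # two medium coordinates
        counts["FP2"] += 1
    else:                                         # all coordinates equal (~0.2355)
        counts["FP0"] += 1

print(counts)
```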

5.3. N 4

If the FRCN has $N = 4$ decision classes, then the eigenvalues of the Jacobian at the trivial fixed point have the same magnitude, but opposite signs. So the positive and the negative eigenvalue reach one (in absolute value) at the same value of the parameter $\lambda$ (see Figure 11), causing the appearance of new fixed points and a limit cycle simultaneously. The trivial fixed point is no longer an attractor, but fixed points with the patterns one high, three low and two medium, two low values do exist.
If $N \ge 5$, then the absolute value of the negative eigenvalue of the Jacobian evaluated at the trivial fixed point is higher than the positive one; consequently, first a period-doubling bifurcation occurs and the trivial fixed point loses its attractiveness. We should note that the occurrence of fixed points of types $FP_1$ and $FP_2$ is not linked to the other (positive) eigenvalue, since they occur earlier, at a smaller value of $\lambda$.
For the general case ($N$ decision classes), there exist two types of fixed points with the following patterns: one high, $N-1$ low values and two medium, $N-2$ low values. The fixed point equations for the one high ($x_1$), $N-1$ low ($x_2$) pattern are
$$x_1 = f\big( x_1 - (N-1)x_2 \big)$$
$$x_2 = f\big( -x_1 - (N-3)x_2 \big),$$
which leads to the following one-dimensional fixed point problem:
$$x_1 = (N-1)\, f\left( \frac{4 - 2N}{N-1}\, x_1 + \frac{N-3}{N-1}\, f^{-1}(x_1) \right) + f^{-1}(x_1).$$
The pattern with two medium ($x_1$) and $N-2$ low ($x_2$) values leads to the following equations:
$$x_1 = f\big( -(N-2)x_2 \big)$$
$$x_2 = f\big( -2x_1 - (N-4)x_2 \big),$$
from which we get a one-dimensional fixed point problem:
$$x_2 = -\frac{1}{N-4}\, f^{-1}(x_2) - \frac{2}{N-4}\, f\big( -(N-2)x_2 \big).$$
Nevertheless, these fixed points are less important for multiple decision classes.
Finally, we provide a geometrical explanation of the structure of the fixed points. Consider two fixed points of type $FP_1$, i.e., they have one high and $N-1$ low coordinates. Their basins of attraction are separated by a set whose points belong to neither of them, but lie on an $(N-1)$-dimensional hyperplane 'between' them. Without loss of generality, we may assume that one fixed point is $P_1 = (\alpha, \beta, \ldots, \beta)$ and the other one is $P_2 = (\beta, \alpha, \beta, \ldots, \beta)$. Because of symmetry, the hyperplane is perpendicular to the line connecting $P_1$ and $P_2$, i.e., its normal vector is parallel to $P_1 - P_2 = (\alpha - \beta, \beta - \alpha, 0, \ldots, 0)$. Additionally, the hyperplane crosses the line at the midpoint of $P_1$ and $P_2$, which has coordinates $\left(\frac{\alpha+\beta}{2}, \frac{\alpha+\beta}{2}, \beta, \ldots, \beta\right)$. Consequently, the equation of the separating hyperplane is $x_1 = x_2$. The separating set is a subset of this plane with the additional constraint $x_i < x_1$ for every $i \notin \{1, 2\}$. Consequently, a fixed point of type $FP_2$ has two medium coordinates with equal values ($x_1 = x_2$) and $N-2$ equal, but low, coordinates. Since there are $N$ fixed points of type $FP_1$, there are $\binom{N}{2}$ fixed points of type $FP_2$.
Simulation results show that for $N \ge 4$, limit cycles tend to steal the show. The limit cycle oscillates between two activation vectors with equal coordinates (the equality of the coordinates is an immediate consequence of symmetry). Let us denote these points by $(x_1, \ldots, x_1)$ and $(x_2, \ldots, x_2)$. The members of a two-period limit cycle are fixed points of the twice-iterated function:
$$\begin{pmatrix} x_1 \\ \vdots \\ x_1 \end{pmatrix} = f\left( W_P \begin{pmatrix} x_2 \\ \vdots \\ x_2 \end{pmatrix} \right) \quad \text{and} \quad \begin{pmatrix} x_2 \\ \vdots \\ x_2 \end{pmatrix} = f\left( W_P \begin{pmatrix} x_1 \\ \vdots \\ x_1 \end{pmatrix} \right)$$
In coordinate-wise form this gives the following system of equations:
$$x_1 = f\big( -(N-2)x_2 \big)$$
$$x_2 = f\big( -(N-2)x_1 \big)$$
From which we have
$$x_2 = f\big( -(N-2)x_1 \big) = f\Big( -(N-2)\, f\big( -(N-2)x_2 \big) \Big) = \frac{1}{1 + e^{\frac{\lambda(N-2)}{1 + e^{\lambda(N-2)x_2}}}}.$$
For the $\lambda$ values generally applied in fuzzy cognitive maps, and for $\lambda = 5$ used in FRCNs, this fixed point equation has three solutions: one refers to the trivial fixed point, which is no longer a fixed point attractor, and the other two (a low and a medium one) are the coordinates of the elements of the limit cycle (see Figure 12).
Simulation results show that by increasing the number of decision classes, more and more initial values arrive at a limit cycle. We put an equally spaced $N$-dimensional grid on the set $[0,1]^N$ with step sizes $0.5$, $0.25$, $0.2$ and $0.1$, then applied the grid points as initial activation values of the positive neurons. Since any particular real-life dataset is finally turned into initial activation values for the FRCN model, these grid points can be viewed as representations of possible datasets, up to the predefined precision (i.e., step size). The iteration stopped when convergence or a limit cycle was detected, or when the predefined maximal number of steps was reached. As we can observe in Table 2, most of the initial values finally arrive at a limit cycle.
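The experiment summarized in Table 2 can be sketched in a few lines; the step size, the iteration budget and the period-two detection tolerance below are assumptions made for this illustration:

```python
import numpy as np
from itertools import product

def classify(P0, W_P, lam=5.0, max_iter=500, tol=1e-8):
    """Return 'fixed point', 'limit cycle' (period two) or 'undecided' for a start P0."""
    f = lambda z: 1.0 / (1.0 + np.exp(-lam * z))
    prev = np.array(P0, dtype=float)
    P = f(W_P @ prev)
    for _ in range(max_iter):
        P_next = f(W_P @ P)
        if np.max(np.abs(P_next - P)) < tol:
            return "fixed point"
        if np.max(np.abs(P_next - prev)) < tol:   # the state repeats with period two
            return "limit cycle"
        prev, P = P, P_next
    return "undecided"

N, step = 4, 0.25
W_P = 2 * np.eye(N) - np.ones((N, N))
grid = np.arange(0.0, 1.0 + 1e-9, step)
outcomes = [classify(P0, W_P) for P0 in product(grid, repeat=N)]
print({o: outcomes.count(o) for o in sorted(set(outcomes))})
```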

6. Relation to Decision

It was proved previously that the values of the negative and boundary neurons converge to the same value (approximately $0.9930$ for $\lambda = 5$), and the dynamical behaviour of the positive neurons was analyzed in the preceding sections. Now we examine their effect on the decision neurons and, in this way, on the final decision.
Decision neurons only receive inputs; they do not influence each other, nor other types of neurons. As a consequence, the sigmoid transfer function $f(x)$ only transforms their values into the $(0,1)$ interval, but does not change the order of the values (with respect to the ordering relation '≤'), since $f(x)$ is strictly monotonically increasing. Before analyzing the implications of the results of the previous sections, we briefly summarize the conclusion of [37]: assuming that the activation values of the positive neurons reach a stable state, they concluded that negative neurons have no influence on FRCNs' performance, but the ranking of the positive neurons' activation values and the number of boundary neurons connected to each decision neuron have a high impact. Based on the previous sections, below we add some more insights to this result.
If the positive neurons reach a stable state (fixed point), then this stable state has either the pattern of one high and $N-1$ low values ($FP_1$) or two medium and $N-2$ low values ($FP_2$); the trivial fixed point with equal coordinates ($FP_0$) plays a role only for 2 and 3 decision classes. These values are unique and completely determined by the parameter $\lambda$ and the number of decision classes $N$. It means that the number of possible final states is very limited. This fact was mentioned in the case of $N = 3$ decision classes, but it is valid for every $N \ge 3$. Namely, small differences between the initial activation values can be magnified by the exploitation phase. Almost equal initial activation values with a proper maximum lead to the pattern of one high and $N-1$ low values, resulting in something like a winner-takes-all rule. Although the runner-up has an only slightly smaller initial value, after reaching the stable state it needs the same number of boundary connections to overcome the winner as the one with a very low initial value.
If the maximal number of iterations is reached without convergence (i.e., the activation value vector oscillates in a limit cycle), then the iteration is stopped and the last activation vector is taken. It has either the $(low, \ldots, low)$ or the $(medium, \ldots, medium)$ pattern with equal coordinates. In this case, the positive neurons have absolutely no effect on the final decision. The classification goes to the neuron with the highest number of boundary connections, regardless of the small or large differences between the initial activation values of the positive neurons.

7. Conclusions and Future Work

The behaviour of fuzzy-rough cognitive networks was studied by applying the theory of discrete dynamical systems and their bifurcations. The dynamics of negative and boundary neurons was fully discussed in the literature, so we focused on the behaviour of positive neurons. It was pointed out that the number of fixed points is very limited and that their coordinate values follow a specific pattern ($FP_0$, $FP_1$, $FP_2$). Additionally, it was proved that when the number of decision classes is greater than three, limit cycles unavoidably occur, rendering the recurrent reasoning inconclusive. Simulations show that the proportion of initial activation values leading to limit cycles increases with the number of decision classes, and the vast majority of scenarios lead to oscillation. In this case, the decision relies totally on the number of boundary neurons connected to each decision neuron, regardless of the initial activation values of the positive neurons.
The method applied in this paper may be followed in the analysis of other FCM-like models. As we have seen, if the parameter of the sigmoid threshold function is small enough, then an FCM has one and only one fixed point, which is globally asymptotically stable. If we increase the value of the parameter, then a fixed-point bifurcation occurs, causing an entirely different dynamical behaviour. If the weight matrix has a nice structure, as in the case of positive neurons, then there is a chance to find the unique fixed point in a simple form, or as the limit of a lower-dimensional iteration, and to determine the parameter value at the bifurcation point. Similarly, based on the eigenvalues of the Jacobian evaluated at this fixed point, we can determine the type of the bifurcation. Nevertheless, general FCMs have no well-structured weight matrices, since the weights are usually determined by human experts or learning methods. This imposes some limitations on the generalization of the applied method. Theoretically, we can find the unique fixed point and the bifurcation point, but this task is much more difficult for a general weight matrix.
Another exciting and important research direction is the possible generalization of the results to the extensions of fuzzy cognitive maps. Some well-known extensions are fuzzy grey cognitive maps (FGCMs) [44], interval-valued fuzzy cognitive maps (IVFCMs) [45], intuitionistic fuzzy cognitive maps (IFCMs) [46,47], temporal IFCMs [48], the combination of fuzzy grey and intuitionistic FCMs [49], and interval-valued intuitionistic fuzzy cognitive maps (IVIFCMs) [50]. This future work probably requires a deep mathematical inspection of interval-valued dynamical systems and may lead to several new theoretical and practical results on interval-valued cognitive networks as well.

Funding

This research was supported by National Research, Development and Innovation Office (NKFIH) K124055.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Samek, W.; Müller, K.R. Towards explainable artificial intelligence. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning; Springer: Berlin, Germany, 2019; pp. 5–22. [Google Scholar]
  2. Došilović, F.K.; Brčić, M.; Hlupić, N. Explainable artificial intelligence: A survey. In Proceedings of the 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Opatija, Croatia, 21–25 May 2018; pp. 0210–0215. [Google Scholar]
  3. Kosko, B. Fuzzy cognitive maps. Int. J. Man Mach. Stud. 1986, 24, 65–75. [Google Scholar] [CrossRef]
  4. Papageorgiou, E.I.; Salmeron, J.L. A review of fuzzy cognitive maps research during the last decade. IEEE Trans. Fuzzy Syst. 2012, 21, 66–79. [Google Scholar] [CrossRef]
  5. Szwed, P. Classification and feature transformation with Fuzzy Cognitive Maps. Appl. Soft Comput. 2021, 107271. [Google Scholar] [CrossRef]
  6. Boutalis, Y.; Kottas, T.L.; Christodoulou, M. Adaptive Estimation of Fuzzy Cognitive Maps With Proven Stability and Parameter Convergence. IEEE Trans. Fuzzy Syst. 2009, 17, 874–889. [Google Scholar] [CrossRef]
  7. Lee, I.K.; Kwon, S.H. Design of sigmoid activation functions for fuzzy cognitive maps via Lyapunov stability analysis. IEICE Trans. Inf. Syst. 2010, 93, 2883–2886. [Google Scholar] [CrossRef] [Green Version]
  8. Knight, C.J.; Lloyd, D.J.; Penn, A.S. Linear and sigmoidal fuzzy cognitive maps: An analysis of fixed points. Appl. Soft Comput. 2014, 15, 193–202. [Google Scholar] [CrossRef]
9. Harmati, I.Á.; Hatwágner, M.F.; Kóczy, L.T. On the existence and uniqueness of fixed points of fuzzy cognitive maps. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems; Springer: Berlin, Germany, 2018; pp. 490–500.
10. Zadeh, L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1978, 1, 3–28.
11. Zadeh, L.A. Fuzzy sets and information granularity. Adv. Fuzzy Set Theory Appl. 1979, 11, 3–18.
12. Lingras, P.; Jensen, R. Survey of rough and fuzzy hybridization. In Proceedings of the 2007 IEEE International Fuzzy Systems Conference, London, UK, 23–26 July 2007; pp. 1–6.
13. Dubois, D.; Prade, H. Rough fuzzy sets and fuzzy rough sets. Int. J. Gen. Syst. 1990, 17, 191–209.
14. Ji, W.; Pang, Y.; Jia, X.; Wang, Z.; Hou, F.; Song, B.; Liu, M.; Wang, R. Fuzzy rough sets and fuzzy rough neural networks for feature selection: A review. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2021, 11, e1402.
15. Cao, B.; Zhao, J.; Lv, Z.; Gu, Y.; Yang, P.; Halgamuge, S.K. Multiobjective evolution of fuzzy rough neural network via distributed parallelism for stock prediction. IEEE Trans. Fuzzy Syst. 2020, 28, 939–952.
16. Zhang, K.; Zhan, J.; Wu, W.Z. Novel fuzzy rough set models and corresponding applications to multi-criteria decision-making. Fuzzy Sets Syst. 2020, 383, 92–126.
17. Pal, S.K.; Ray, S.S.; Ganivada, A. Classification using fuzzy rough granular neural networks. In Granular Neural Networks, Pattern Recognition and Bioinformatics; Springer: Berlin, Germany, 2017; pp. 39–76.
18. Riaz, S.; Arshad, A.; Jiao, L. Fuzzy rough C-mean based unsupervised CNN clustering for large-scale image data. Appl. Sci. 2018, 8, 1869.
19. Li, C.; Wang, N.; Zhang, H.; Liu, Q.; Chai, Y.; Shen, X.; Yang, Z.; Yang, Y. Environmental impact evaluation of distributed renewable energy system based on life cycle assessment and fuzzy rough sets. Energies 2019, 12, 4214.
20. Deveci, M.; Özcan, E.; John, R.; Covrig, C.F.; Pamucar, D. A study on offshore wind farm siting criteria using a novel interval-valued fuzzy-rough based Delphi method. J. Environ. Manag. 2020, 270, 110916.
21. Zhang, C.; Li, D.; Kang, X.; Song, D.; Sangaiah, A.K.; Broumi, S. Neutrosophic fusion of rough set theory: An overview. Comput. Ind. 2020, 115, 103117.
22. Gokasar, I.; Deveci, M.; Kalan, O. CO2 emission based prioritization of bridge maintenance projects using neutrosophic fuzzy sets based decision making approach. Res. Transp. Econ. 2021, 101029.
23. Ganivada, A.; Dutta, S.; Pal, S.K. Fuzzy rough granular neural networks, fuzzy granules, and classification. Theor. Comput. Sci. 2011, 412, 5834–5853.
24. Pedrycz, W. Granular Computing: An Emerging Paradigm; Springer Science & Business Media: Berlin, Germany, 2001; Volume 70.
25. Pedrycz, W.; Vukovich, G. Granular neural networks. Neurocomputing 2001, 36, 205–224.
26. Falcon, R.; Nápoles, G.; Bello, R.; Vanhoof, K. Granular cognitive maps: A review. Granul. Comput. 2019, 4, 451–467.
27. Nápoles, G.; Grau, I.; Papageorgiou, E.; Bello, R.; Vanhoof, K. Rough cognitive networks. Knowl.-Based Syst. 2016, 91, 46–61.
28. Pawlak, Z. Rough sets. Int. J. Comput. Inf. Sci. 1982, 11, 341–356.
29. Pawlak, Z. Vagueness and uncertainty: A rough set perspective. Comput. Intell. 1995, 11, 227–232.
30. Pawlak, Z. Rough set theory and its applications. J. Telecommun. Inf. Technol. 2002, 7–10.
31. Skowron, A.; Dutta, S. Rough sets: Past, present, and future. Nat. Comput. 2018, 17, 855–876.
32. Li, X.; Luo, C. An intelligent stock trading decision support system based on rough cognitive reasoning. Expert Syst. Appl. 2020, 160, 113763.
33. Zhang, H.Y.; Yang, S.Y. Three-way group decisions with interval-valued decision-theoretic rough sets based on aggregating inclusion measures. Int. J. Approx. Reason. 2019, 110, 31–45.
34. Nápoles, G.; Falcon, R.; Papageorgiou, E.; Bello, R.; Vanhoof, K. Rough cognitive ensembles. Int. J. Approx. Reason. 2017, 85, 79–96.
35. Nápoles, G.; Mosquera, C.; Falcon, R.; Grau, I.; Bello, R.; Vanhoof, K. Fuzzy-rough cognitive networks. Neural Netw. 2018, 97, 19–27.
36. Vanloffelt, M.; Nápoles, G.; Vanhoof, K. Fuzzy-rough cognitive networks: Building blocks and their contribution to performance. In Proceedings of the 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA), Boca Raton, FL, USA, 16–19 December 2019; pp. 922–928.
37. Concepción, L.; Nápoles, G.; Grau, I.; Pedrycz, W. Fuzzy-rough cognitive networks: Theoretical analysis and simpler models. IEEE Trans. Cybern. 2020.
38. Rudin, W. Principles of Mathematical Analysis, 3rd ed.; McGraw-Hill: New York, NY, USA, 1976.
39. Kuznetsov, Y.A. Elements of Applied Bifurcation Theory; Springer Science & Business Media: Berlin, Germany, 2013; Volume 112.
40. Wiggins, S. Introduction to Applied Nonlinear Dynamical Systems and Chaos; Springer Science & Business Media: Berlin, Germany, 2003; Volume 2.
41. Schultz, P.; Menck, P.J.; Heitzig, J.; Kurths, J. Potentials and limits to basin stability estimation. New J. Phys. 2017, 19, 023005.
42. Giesl, P. On the determination of the basin of attraction of discrete dynamical systems. J. Differ. Equations Appl. 2007, 13, 523–546.
43. Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090.
44. Salmeron, J.L. Modelling grey uncertainty with fuzzy grey cognitive maps. Expert Syst. Appl. 2010, 37, 7581–7588.
45. Hajek, P.; Prochazka, O. Interval-valued fuzzy cognitive maps for supporting business decisions. In Proceedings of the 2016 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Vancouver, BC, Canada, 24–29 July 2016; pp. 531–536.
46. Papageorgiou, E.I.; Iakovidis, D.K. Intuitionistic fuzzy cognitive maps. IEEE Trans. Fuzzy Syst. 2012, 21, 342–354.
47. Hadjistoykov, P.; Atanassov, K. Remark on intuitionistic fuzzy cognitive maps. Notes Intuit. Fuzzy Sets 2013, 19, 1–6.
48. Hadjistoykov, P.P.; Atanassov, K.T. On temporal intuitionistic fuzzy cognitive maps. Comptes Rendus L’Acad. Bulg. Des Sci. Sci. Math. Nat. 2014, 67, 1233–1240.
49. Hajek, P.; Froelich, W.; Prochazka, O. Intuitionistic fuzzy grey cognitive maps for forecasting interval-valued time series. Neurocomputing 2020, 400, 173–185.
50. Hajek, P.; Prochazka, O. Interval-valued intuitionistic fuzzy cognitive maps for supplier selection. In International Conference on Intelligent Decision Technologies; Springer: Cham, Switzerland, 2017; pp. 207–217.
Figure 1. The structure of a fuzzy-rough cognitive network for binary classification.
Figure 2. The graph of s(x) = (1/λ) ln((1 − x)/x) + 2x, with λ = 5. Observe that the curve is symmetrical about the point (1/2, 1).
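The symmetry noted in the caption of Figure 2 can be verified directly from the formula given there (a short check, using nothing beyond the expression above):

\[
s(1 - x) = \frac{1}{\lambda}\ln\frac{x}{1 - x} + 2(1 - x)
         = -\left(\frac{1}{\lambda}\ln\frac{1 - x}{x} + 2x\right) + 2
         = 2 - s(x),
\]

so the graph is invariant under the point reflection (x, y) ↦ (1 − x, 2 − y), i.e., it is symmetric about the point (1/2, 1).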
Figure 3. Flowchart of the route to the first bifurcation. The trivial fixed point loses its global stability when an eigenvalue of the Jacobian (evaluated at the trivial fixed point) reaches +1 or −1, and the first bifurcation occurs. As the parameter λ increases, the absolute values of the eigenvalues increase. Owing to the symmetry of the weight matrix, complex eigenvalues do not have to be considered.
Figure 4. The graph of the function f(2x − 1). It has exactly one fixed point if λ ≤ 2 and three fixed points if λ > 2. One of them is unstable, corresponding to the trivial fixed point of the positive neurons; the other two fixed points are stable.
Figure 5. Possible values of the positive neurons as a function of the parameter λ, for N = 2 decision classes. The green line denotes the value of the trivial fixed point, while the orange and blue lines denote the high and low values of the other two fixed points. Observe the pitchfork bifurcation at λ = 2.
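As a rough numerical illustration of Figures 4 and 5, the sketch below locates the fixed points of the one-dimensional map g(x) = f(2x − 1) for a few values of λ and classifies their stability from |g′(x*)|. It assumes the standard logistic transfer function f(x) = 1/(1 + e^(−λx)) with steepness λ, which is consistent with the pitchfork bifurcation at λ = 2 described above; the helper names and grid settings are illustrative only.

```python
import numpy as np

def sigmoid(x, lam):
    """Logistic transfer function with steepness lam (assumed form of f)."""
    return 1.0 / (1.0 + np.exp(-lam * x))

def g(x, lam):
    """One-dimensional map of Figure 4: g(x) = f(2x - 1)."""
    return sigmoid(2.0 * x - 1.0, lam)

def fixed_points(lam, n_grid=2000):
    """Fixed points of g in (0, 1), located by sign changes of g(x) - x and bisection."""
    xs = np.linspace(0.001, 0.999, n_grid)  # grid chosen so that x = 1/2 is not a grid point
    h = g(xs, lam) - xs
    roots = []
    for i in np.where(h[:-1] * h[1:] < 0)[0]:
        a, b = xs[i], xs[i + 1]
        for _ in range(60):  # bisection refinement
            m = 0.5 * (a + b)
            if (g(a, lam) - a) * (g(m, lam) - m) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
    return roots

for lam in (1.0, 3.0, 5.0):
    for x_star in fixed_points(lam):
        fx = g(x_star, lam)
        deriv = 2.0 * lam * fx * (1.0 - fx)  # g'(x) = 2*lam*f(2x-1)*(1 - f(2x-1))
        kind = "stable" if abs(deriv) < 1.0 else "unstable"
        print(f"lambda = {lam:.1f}   x* = {x_star:.4f}   |g'(x*)| = {deriv:.3f}   {kind}")
```

For λ = 5 this yields the unstable fixed point at x* = 1/2 and two stable fixed points at approximately 0.007 and 0.993, in agreement with the three branches of Figure 5.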
Figure 6. Basins of attraction of the fixed points for N = 2 decision classes and λ = 5 in the (P1, P2) plane. Fixed-point attractors are denoted by large black dots.
Figure 7. Topology of the connections between the positive neurons in the case of N = 3 decision classes. Self-connections have weight 1; all other connections have weight −1.
Figure 8. Possible values of the positive neurons as a function of the parameter λ, for N = 3 decision classes.
Figure 9. Basins of attraction of all the fixed points of the positive neurons for λ = 5 and N = 3 in the (P1, P2, P3) space. The fixed points are (0.2355, 0.2355, 0.2355), (0.4904, 0.4904, 0.0076) and its permutations, and (0.9926, 0.0069, 0.0069) and its permutations. The fixed points are denoted by large dots.
Figure 10. Basins of attraction of the trivial fixed point (0.2355, 0.2355, 0.2355) and of the fixed points with two medium and one low value, (0.4904, 0.4904, 0.0076) and its permutations, in the (P1, P2, P3) space. The fixed points are denoted by large dots.
Figure 11. Eigenvalues of the Jacobian evaluated at the trivial fixed point. Orange: the positive eigenvalue; blue: the absolute value of the negative eigenvalue. For N = 4 the curves are slightly shifted for better visibility.
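The two eigenvalue branches plotted in Figure 11 admit a short closed-form sketch, under the assumption that the positive-neuron subsystem is updated as P_i(t+1) = f(P_i(t) − Σ_{j≠i} P_j(t)) with the logistic sigmoid f of steepness λ (an assumption consistent with the topology in Figure 7 and with the fixed points quoted in Figure 9, not a statement taken verbatim from the text above). Writing W = 2I − J, where J is the all-ones matrix, the trivial fixed point has all coordinates equal to x* with x* = f((2 − N)x*), and the Jacobian there is

\[
J_{\mathrm{ac}}\big(x^{*}\mathbf{1}\big) = f'\big((2-N)x^{*}\big)\,W = \lambda x^{*}(1-x^{*})\,W .
\]

Since the eigenvalues of W are 2 (with multiplicity N − 1) and 2 − N, the Jacobian has the positive eigenvalue 2λx*(1 − x*) and the eigenvalue (2 − N)λx*(1 − x*), which is negative for N ≥ 3; both are real, in line with the remark in the caption of Figure 3.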
Figure 12. Fixed points of the double-iterated function (Equation (62)), for N = 6 and λ = 5. At the middle fixed point the value of the derivative is greater than one, so it is a repelling fixed point. The other two fixed points, with low and medium values, are stable.
Table 1. Number of points in the basin of attraction as a percentage of the total number of points, for N = 3 decision classes and λ = 5. FP0 refers to the fixed point with equal coordinates, FP1 refers to fixed points with one high and two low values, and FP2 refers to fixed points with two medium and one low value.
Number of classes   Granularity   FP0     FP1     FP2     LC   Total number of points
N = 3               0.5           11.11   55.56   33.33   0    3^3
                    0.25           4.00   72.00   24.00   0    5^3
                    0.2            2.78   76.39   20.83   0    6^3
                    0.1            0.83   86.78   12.39   0    11^3
                    0.05           0.23   92.97    6.80   0    21^3
                    0.01           0.01   98.52    1.47   0    101^3
Table 2. Number of points in the basin of attraction as a percentage of the total number of points, for different numbers of classes (N) and levels of granularity, λ = 5. FP1 refers to fixed points with one high and N − 1 low values, FP2 refers to fixed points with two medium and N − 2 low values, and LC stands for limit cycle.
Number of classes   Granularity   FP0   FP1     FP2     LC      Total number of points
N = 4               0.5           0     39.51   14.81   45.68   3^4
                    0.25          0     48.64    9.60   41.76   5^4
                    0.2           0     45.68    6.02   48.30   6^4
                    0.1           0     48.77    3.36   47.87   11^4
                    0.05          0     50.34    1.41   48.25   21^4
N = 5               0.5           0     24.69    8.23   67.08   3^5
                    0.25          0     27.52    0.96   71.52   5^5
                    0.2           0     23.21    2.06   74.73   6^5
                    0.1           0     17.34    0.50   82.16   11^5
                    0.05          0     16.24    0.14   83.62   21^5
N = 6               0.5           0     13.99    4.12   81.89   3^6
                    0.25          0      7.22    0.29   92.49   5^6
                    0.2           0      7.52    0.13   92.35   6^6
                    0.1           0      4.75    0.04   95.21   11^6
N = 7               0.5           0      7.36    1.92   90.72   3^7
                    0.25          0      2.72    0.08   97.20   5^7
                    0.2           0      1.90    0.03   98.07   6^7
                    0.1           0      1.13    0.00   98.87   11^7
N = 8               0.5           0      3.66    0.00   96.34   3^8
                    0.25          0      0.95    0.00   99.05   5^8
                    0.2           0      0.60    0.00   99.40   6^8
                    0.1           0      0.03    0.00   99.97   11^8
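Basin percentages of the kind reported in Tables 1 and 2 can be estimated in spirit by a brute-force sweep: place the initial states of the positive neurons on a grid with the given granularity, iterate the network from every grid point, and classify the limit behaviour as a fixed point or a limit cycle. The sketch below assumes the update rule P_i(t+1) = f(P_i(t) − Σ_{j≠i} P_j(t)) with the logistic sigmoid f (consistent with Figure 7 and the fixed points in Figure 9); the function names, iteration limit, and tolerance are illustrative and not the exact procedure behind the tables.

```python
import itertools
import numpy as np

def sigmoid(x, lam=5.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def step(p, lam=5.0):
    """One update of the positive neurons: self weight 1, all cross weights -1."""
    return sigmoid(2.0 * p - p.sum(), lam)  # p_i - sum_{j != i} p_j = 2*p_i - sum_j p_j

def classify(p0, lam=5.0, n_iter=2000, tol=1e-8):
    """Label the attractor reached from p0: a fixed point (by its sorted coordinates) or 'LC'."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        q = step(p, lam)
        if np.max(np.abs(q - p)) < tol:
            return ("FP", tuple(np.round(np.sort(q), 3)))
        p = q
    return ("LC", None)

def basin_percentages(n_classes, granularity, lam=5.0):
    """Share of grid points attracted to each fixed point / limit cycle, in percent."""
    grid = np.linspace(0.0, 1.0, int(round(1.0 / granularity)) + 1)
    counts, total = {}, 0
    for p0 in itertools.product(grid, repeat=n_classes):
        label = classify(p0, lam)
        counts[label] = counts.get(label, 0) + 1
        total += 1
    return {k: 100.0 * v / total for k, v in counts.items()}

# Small example (cf. the first row of Table 1): N = 3 classes, granularity 0.5, 3^3 = 27 points
print(basin_percentages(3, 0.5))
```

For N = 3 and granularity 0.5 the grid has 3^3 = 27 initial points, matching the total number of points in the first row of Table 1.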