1. Searching for Symmetry in the Solution of Complex Problems
What is the role of symmetry in the seemingly distant topic of solving complex applied problems by the approaches offered by Soft Computing and Computational Intelligence? At first sight, it may be hard to give a direct answer. Nevertheless, there is a very important aspect, which I try to explain in this short introductory study, that forms a bridge between the two concepts.
When solving complex problems, setting up models for complex systems, and developing algorithms for search and optimization in such models and systems, it must be considered that such problems are intractable from the mathematical point of view. For the concept of intractability, see, e.g., the classic textbook [
1]. This means that for problems of a given type, it is impossible to give an exact or optimal solution once the size of the problem (i.e., the number of components) exceeds a usually quite low value. Researchers unfamiliar with the theory of computational complexity may reply that it is only a matter of computer speed and capacity, but this is absolutely not true. It is easy to show that the “Galactic Computer”, a hypothetical computer consisting of all atoms of the Galaxy and operating at the speed of light, would not be able to solve, in the exact mathematical sense, even some problems of everyday life. It may then be surprising that such problems are often tackled rather efficiently by human experts, operators, or simple technical solutions. Is there any contradiction? Of course there is none; it must simply be realised that most complex problems do not need a truly exact solution, but a “good enough” one, one that satisfies the expectations of the problem setter. Such complex problems I will call, in a general sense, “engineering problems”, even though they often come from management, economics, the social sciences, and the like. These problems have a common feature, namely, they reflect real phenomena where the number of components is very high (this is why they are a priori intractable), and/or they contain elements that must be considered non-deterministic from the point of view of the problem solver; moreover, they often contain uncertainty in the formulation of the problem itself on the side of the problem setter. Let us assume there is an imaginary scale, where in one pan we put the expectations of the problem setter and in the other the resources offered by the problem solver. How can this scale be brought into balance; in other words, how can this approach be made symmetric? Of course, the next question is: symmetric, but in what sense? Can highly complex and mathematically intractable problems somehow be weighed? Can solutions be weighed? Definitely not in the ordinary sense, but some measure must be found that connects the two sides and that may serve as the unit which helps balance the scale. This unit or measure is referred to in the literature as cost. Naturally, cost here is not a financial matter; it comprises the amount of resources used, such as capacity and speed, the loss of accuracy of the solution, and possibly other factors.
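To make the “Galactic Computer” argument concrete, here is a back-of-the-envelope check. The figures used (roughly 10^69 atoms in the Galaxy, a Planck-rate of 10^43 operations per second per atom, and about 4 × 10^17 seconds as the age of the Universe) are my own rough, deliberately generous assumptions, used purely for illustration:

```python
import math

# Back-of-the-envelope check of the "Galactic Computer" argument.
# All constants below are assumed orders of magnitude, for illustration only.
ATOMS_IN_GALAXY = 1e69      # assumed number of atoms acting as processors
OPS_PER_ATOM_PER_S = 1e43   # extremely generous rate (of the order of 1/Planck time)
AGE_OF_UNIVERSE_S = 4e17    # about 13.8 billion years

capacity = ATOMS_IN_GALAXY * OPS_PER_ATOM_PER_S * AGE_OF_UNIVERSE_S  # ~4e129 operations

# Distinct tours of a symmetric 100-city travelling salesman instance: (100 - 1)! / 2
tours = math.factorial(99) // 2   # ~4.7e157

print(f"operations ever performable: ~{capacity:.1e}")
print(f"tours to enumerate:          ~{float(tours):.1e}")
print("brute force is hopeless:", tours > capacity)
```

Even under these absurdly generous assumptions, brute-force enumeration of the tours of a mere 100-city travelling salesman instance is out of reach, which is exactly the sense in which such “everyday” combinatorial problems are intractable.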
A study of this issue, considered from a specific point of view and applying rule-based Fuzzy Systems for modelling, was given in [
2]. This approach is definitely one of the key components of Computational Intelligence, and it was in fact the initial sub-discipline of Soft Computing, both proposed by Zadeh (cf. [
3]). There exists, however, an analytic mathematical approach that studies whether an optimal solution can be found for setting up a fuzzy rule-based model, where the cost is a combined function of the required resources (space and time complexity) and the efficiency of the method (say, the accuracy), both merged into a single “total time needed” formula. Such an optimal solution may be considered an ideally symmetric solution. In our paper [
4], we found that for certain special cases, this optimum could be found exactly, while for some more complex cases, the existence of such an optimum could still be proved [
4].
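Purely as an illustration of what such a merged cost might look like (this schematic form and its symbols are my own assumptions, not the actual objective analysed in [4]), one may imagine minimising

$$ C(n) = \alpha\, R(n) + \beta\, E(n), \qquad n^{*} = \arg\min_{n} C(n), $$

where n is the size of the rule base, R(n) the space and time resources it consumes, E(n) the accuracy loss it incurs, and α, β the weights with which the designer balances the two pans of the scale; the balanced, “symmetric” model is the size n* at which neither pan outweighs the other.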
Nowadays, meta-heuristic methods are widely applied, and in this Special Issue, we also published a paper on the successful application of a certain novel meta-heuristic to a rather difficult problem class. Meta-heuristic approaches probably offer the best solution when NP-hard or other highly complex problems have to be solved, and nowadays, evolutionary, memetic, and other population-based approaches have at times produced truly marvellous results. This Special Issue presents a large variety of such papers, as I will show in the second part of this Editorial. These meta-heuristics form another very important component of Computational Intelligence. In our experience, memetic algorithms in particular produce excellent results. The concept originates from [
5], and a more recent overview was published in [
6]. The main idea is that an evolutionary or population-based algorithm may be used as a “wrapper”, that is, as the global search or optimisation technique, while the local search is conducted by some more traditional mathematical method, such as gradient-based search. The main point is that different algorithms are applied for the global and the local search, thus speeding up the algorithm and improving its accuracy: this affects the total cost in the computational-resources pan of the scale. Various evolutionary algorithms often have rather different costs in this sense. An earlier comparison of some widely applied techniques was given in [
7], another one can be found in [
8].
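As a purely illustrative sketch of this wrapper idea (not any of the algorithms cited above; the toy objective, population size, and step sizes are my own assumptions), the following few lines combine a simple evolutionary loop with a handful of gradient-descent steps acting as the local method:

```python
import random

def f(x):                       # toy objective to minimise
    return (x - 3.0) ** 2 + 2.0

def grad_f(x):                  # its derivative, used by the local method
    return 2.0 * (x - 3.0)

def local_search(x, steps=5, lr=0.1):
    # plain gradient descent as the "traditional" local refinement
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def memetic(pop_size=10, generations=20):
    pop = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # global step: keep the better half, create mutated offspring from it
        pop.sort(key=f)
        parents = pop[: pop_size // 2]
        children = [random.choice(parents) + random.gauss(0.0, 1.0)
                    for _ in range(pop_size - len(parents))]
        # local step: refine every new individual before re-inserting it
        pop = parents + [local_search(c) for c in children]
    return min(pop, key=f)

print(memetic())   # should end up close to the optimum x = 3
```

The division of labour is exactly the point made above: the population handles exploration, the cheap local method handles exploitation, and the designer can shift cost between the two pans by tuning how much work each side does.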
The mathematically intractable flow shop scheduling problem is definitely one of those rather sophisticated and complex problems where a feasible solution needs a good meta-heuristic. In our paper, we proposed the new, modified Discrete Bacterial Memetic Evolutionary Algorithm (DBMEA), in which, in a stricter sense, both the global and the local search are conducted by meta-heuristics, the latter by Simulated Annealing. The results obtained this way proved better than those of any other approach to this problem. Here, it can be nicely pointed out that the costs must be weighed properly: the accuracy error of the optimisation in one pan and the need for resources, especially the running time of the optimisation meta-heuristic, in the other must be brought to equilibrium, this way generating a symmetry in the solution. The exact position of the symmetrical (balanced) solution can, however, be calibrated by the designer of the solution, so that it fits the application context of the concrete problem, considering the available resources and the expected quality of the quasi-optimum found. Thus, the asymmetric roles played by the problem to be solved and by the model/algorithm used for its solution must be balanced, and, in that way, the whole problem–solution complex brought into a symmetrical configuration. It is worth mentioning that a very recently published article in the same journal tackles a similar type of highly complex problem and proposes a rather different meta-heuristic for its solution, with some promising results [
9]. There is a certain symmetry in the problem solution itself, but the general concept of targeting symmetry of costs in the optimal solution is applicable here, too. Finally, one more closely related paper may be mentioned here [
10], where a logistics-type path planning problem is tackled, although it is obvious that the solution method is easily applicable in other related fields, very likely in VLSI design, among others. Here, the well-known Greedy Algorithm is proposed for path optimisation. Let me refer to our earlier paper where the same algorithm is applied, although in a more complex embedding [
11]. In this paper, the problem of symmetry and balance clearly occurs twice, hierarchically: at the level of the costs of resources and accuracy loss mentioned above, and at the level of balancing the adaptive scheduler costs (runtime overhead) against the overall optimisation costs of the basic problem at hand.
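To illustrate the kind of role Simulated Annealing can play as the local improvement step in such a memetic scheme, here is a schematic permutation flow shop fragment; the toy processing times, cooling schedule, and neighbourhood are my own assumptions, and this is not the DBMEA implementation of our paper:

```python
import math
import random

def makespan(perm, proc_times):
    """Completion time of the last job on the last machine (permutation flow shop)."""
    machines = len(proc_times[0])
    finish = [0.0] * machines
    for job in perm:
        for m in range(machines):
            start = max(finish[m], finish[m - 1]) if m > 0 else finish[m]
            finish[m] = start + proc_times[job][m]
    return finish[-1]

def sa_local_search(perm, proc_times, t0=10.0, cooling=0.95, iters=500):
    # simulated annealing over job swaps, with geometric cooling
    best = cur = list(perm)
    t = t0
    for _ in range(iters):
        i, j = random.sample(range(len(cur)), 2)
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]          # swap two jobs
        delta = makespan(cand, proc_times) - makespan(cur, proc_times)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = cand                               # accept improving / some worsening moves
            if makespan(cur, proc_times) < makespan(best, proc_times):
                best = cur
        t *= cooling
    return best

proc = [[3, 2, 4], [1, 5, 2], [4, 1, 3], [2, 3, 1]]   # 4 jobs x 3 machines (made-up data)
print(sa_local_search(list(range(4)), proc))
```

In a full memetic algorithm, a routine like `sa_local_search` would be called on each individual produced by the global (e.g., bacterial evolutionary) operators; how many annealing iterations it is allowed directly moves cost between the accuracy pan and the running-time pan of the scale.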
It is possible to introduce other types of costs when analysing CI approaches in solving complex applied problems. Once Mamdani established his fuzzy model and control algorithm [
12], an amazing explosion of applications followed. The most striking success was first observable in Japan [
13], where an incredible number of successful commercial applications emerged within less than a decade, after a decade of stagnation of fuzzy applications following the first few attempts. What was the reason for this success? Japanese scientists agree that the transparency of fuzzy rule-based models, the possibility of directly tuning the parameters of fuzzy controllers based on expert domain knowledge, and the fact that complicated analytical calculations (e.g., the Laplace transform) are not necessary opened the door to the efficient control of highly non-linear and partly uncertain systems, even for engineers with a modest knowledge of control theory. Thus, transparency is one of the most important features of SC and CI approaches, and it, too, may weigh in one pan of the scale representing the symmetric approach from our point of view.
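As a small illustration of what this transparency means in practice (a minimal sketch with made-up rules, membership functions, and a simplified height defuzzification, not Mamdani's original controller), a heater rule base can be written almost verbatim from expert phrasing:

```python
# Two rules a domain expert might state for a heater, evaluated directly.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_power(temp_error):
    # Rule 1: IF error is "cold" THEN power is "high"  (high ~ 80%)
    # Rule 2: IF error is "ok"   THEN power is "low"   (low  ~ 10%)
    w_cold = tri(temp_error, 0.0, 5.0, 10.0)   # degrees below the setpoint
    w_ok   = tri(temp_error, -2.0, 0.0, 2.0)
    weights, outputs = [w_cold, w_ok], [80.0, 10.0]
    s = sum(weights)
    # weighted-average (height) defuzzification; 0 if no rule fires
    return sum(w * o for w, o in zip(weights, outputs)) / s if s > 0 else 0.0

print(heater_power(1.0))   # 30.0: mostly "ok" (low power) with a touch of "cold"
```

Every parameter here has a direct expert-readable meaning (where "cold" starts, what "high power" is), which is precisely why such controllers could be tuned without deep control-theoretic machinery.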
And last but not least, there is another pair of cost components that weighs heavily in the desire to establish symmetry in the intertwined system of the problem and solution complex. This pair consists of what we referred to as predictability and “universality”, more precisely, the general applicability of an approach, which is especially important when problems of a similar type (e.g., various NP-hard discrete problems, cf. [
1]) are in the focus of the solution. Sometimes, if there is a “guarantee” of obtaining a reasonably good solution for a particular concrete problem, with the expected time and space costs well estimated for arbitrary size, the applier is happier than with an algorithm that is sometimes more efficient but inapplicable for some topologies, problem sub-classes, or certain large sizes. This is rather typical of the meta-heuristics, which occur in considerable number in this Special Issue. Our own paper here, optimising job scheduling, came to life from the starting assumption that the Discrete Bacterial Memetic Evolutionary Algorithm has “universal” applicability. There are plenty of references in the paper showing evidence that this approach is quite well applicable to many different discrete problem groups. So, it was worthwhile to try, and we obtained good results, better than those of other authors so far!
Summarising the above thoughts: in one pan we collect the costs of space and time complexity (resource intensity), the overhead in the case of hierarchically built-up algorithms, and the lack of transparency, predictability, and general applicability, while in the other pan there are the expectations of the applier, the accuracy, the feasibility, and similar components. The solution provider then attempts to find a well-balanced, in other words symmetrical, solution.
2. Let Us Now Quickly Review the Contents of This Special Issue
The three main pillars of CI/SC are Fuzzy Systems (FS), Artificial Neural Networks/Connectionist Systems (NN), and Evolutionary/Population-Based Algorithms (EA), the last of which includes Swarm Intelligence as well.
Although recently the flow of papers published in the fuzzy field seems to have slowed a little, this Special Issue contains five papers, roughly one quarter, applying this by now well-established branch of non-conventional mathematics. Cruz-Aguilar et al. propose a method combining Failure Mode and Effect Analysis with a fuzzy approach for the non-invasive measurement of methane and carbon dioxide. A. Łyczkowska-Hanćkowiak applies trapezoidal fuzzy numbers in portfolio analysis. A connected topic is K. Piasecki et al.'s paper on present value evaluation by oriented fuzzy numbers. One of the Guest Editors, I. Harmati, discusses the dynamics of fuzzy-rough cognitive networks. Finally, M. Holčapek et al. deal with fuzzy interpolation using extensional fuzzy numbers.
The situation is similar with Artificial Neural Networks. The number of related papers is five, or even seven if the two connected to the EA pillar are counted as well. In a broader sense, the connectionist hybrid approach by Y. Zhao et al. on key performance indicator (KPI) anomaly detection applies a bi-directional long short-term memory network in place of a traditional feedforward NN. H. Achicanoy et al. discuss the generation of synthetic images by applying StyleGAN, which reliably attributes every generated image to a particular network. E. Jeczmionek et al. deal with layer pruning in Convolutional Neural Networks. P. Li et al. discuss text summarisation based on the Dynamic Memory Network. Z. Xiao et al. apply a special NN for image processing: lung segmentation. S. Zeybek et al. employ the population-based Bees Algorithm for training recurrent NNs. Finally, E. Kaya et al.'s work hybridises ANN and EA in their nonlinear system identification approach.
The last main group of papers deals with some version of the EA approach. In addition to the above-mentioned two hybrid NN and EA methods, eight further articles fall into this category. Z. H. Chin et al. apply a Genetic Algorithm (GA) for calculating Proof-of-Work blockchains. L. Wang et al. deal with Android malware detection, deploying a self-variant GA. A. H. G. Ruiz and co-authors also apply a GA, for energy saving in an air-conditioning system. S. Nantogma and co-authors propose the use of artificial immune-based algorithms for learning in air-defence systems. A. Agárdi et al., including the present Guest Editor, offer a hybrid Bacterial Evolutionary and Simulated Annealing memetic algorithm for the so far most efficient solution to the Job Scheduling Problem. Another hybrid approach is proposed by M. Zhang and co-authors: the combination of Butterfly and Particle Swarm Optimisation in the presence of high dimensionality. The paper by H. El Raoui et al. discusses the very important general problem of using meta-heuristics in problem solving. This topic reflects the thoughts in the first part of this Editorial.
Finally, a paper by K. K. Sharma et al. may be mentioned, which applies modified spectral clustering, an alternative machine learning technique, for the prediction of customer churn.
I am convinced the Reader will find a number of extremely interesting and thought-provoking ideas in this rich collection of articles.