Article

A New Method for Commercial-Scale Water Purification Selection Using Linguistic Neural Networks

1 Department of Mathematics, Abdul Wali Khan University Mardan, Mardan 23200, Pakistan
2 Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(13), 2972; https://doi.org/10.3390/math11132972
Submission received: 5 April 2023 / Revised: 16 May 2023 / Accepted: 31 May 2023 / Published: 3 July 2023
(This article belongs to the Special Issue Advances in Fuzzy Logic and Artificial Neural Networks)

Abstract

An artificial neural network (ANN) is a widely used tool in artificial intelligence (AI) and a deep learning model with a broad range of real-life applications. The interconnection of neurons and nodes facilitates the transmission of information through the network. An ANN has a feed-forward structure: the neurons are arranged in layers, each layer performs a particular calculation on the incoming data, and each layer's output is transmitted as input to the subsequent layer until the output layer generates the network's final result. A feed-forward neural network (FFNN) is a method for deriving an output from expert information. In this research, we expand upon the concept of fuzzy neural network systems and introduce feed-forward double-hierarchy linguistic neural network systems (FFDHLNNS) using Yager–Dombi aggregation operators. We also discuss the desirable properties of Yager–Dombi aggregation operators. Moreover, we describe double-hierarchy linguistic term sets (DHLTSs) and discuss the score function of DHLTSs and the distance between any two double-hierarchy linguistic term elements (DHLTEs). We discuss different approaches to choosing a novel water purification technique on a commercial scale, as well as the variables influencing these approaches, and apply a feed-forward double-hierarchy linguistic neural network (FFDHLNN) to select the best method for water purification. Moreover, we use the extended Technique for Order Preference by Similarity to Ideal Solution (extended TOPSIS) method and the grey relational analysis (GRA) method to verify our suggested approach. Remarkably, both approaches yield almost the same results as our proposed method. The proposed models were compared with other existing models of decision support systems, and the comparison demonstrated that the proposed models are feasible and valid decision support systems that are more reliable and accurate for the selection of large-scale water purification methods.

1. Introduction

1.1. A Brief Review of the Development of Neural Networks and Their Types

Classical statistical methods have been widely used in various industries for decades, particularly in fields such as quality control, experimental design, and process optimization. However, in recent years, neural networks (NNs) [1] have emerged as powerful tools for solving complex problems in various fields, including finance, engineering [2], medicine [3], and computer science [4]. NNs have gained popularity due to their ability to handle large and complex data sets, learn patterns and relationships in the data, and make accurate predictions or classifications. In comparison to classical statistical methods [5], NNs have the advantage of being able to model nonlinear relationships and capture complex interactions between variables. In the field of pattern recognition, NNs have been shown to outperform classical statistical methods, particularly when dealing with complex and high-dimensional data. In prediction and classification tasks, NNs have also been successful, achieving high accuracy rates in fields such as image [6,7] and speech recognition [8], natural language processing [9,10], and sentiment analysis [11,12]. Overall, while classical statistical methods remain useful in many applications, NNs have become a popular and powerful tool for solving complex problems in various fields. Regression [13] and NNs are often seen as competing model-building methods, as they can both be used for modeling and predicting outcomes based on input variables. However, while regression is a linear method that requires the assumption of linearity between the input and output variables, NNs are capable of modeling nonlinear relationships and do not require such assumptions.
The structure and operation of the brain served as the first inspiration for neural networks, which are created to resemble the behavior of organic neurons. As a result, they share some performance characteristics with human neural biology. Layers of linked nodes or neurons that are arranged into input, hidden, and output layers are used in NNs to replicate the behavior of the human brain. The input layer receives data and sends them to the hidden layers for processing, while the output layer produces the final result. In order to increase the model's accuracy, the hidden layers are in charge of observing patterns and correlations in the data and modifying the weights of the connections between neurons.
One of the key features of neural networks is their ability to learn from data and store that knowledge in the form of learned parameters. This allows them to make predictions [14,15] and classifications [16,17] based on previously seen examples and to generalize that knowledge to new, unseen examples. Neural networks are also capable of identifying patterns [18] in data, even in the presence of noise [19] or other sources of variability. This makes them useful for a variety of applications, including natural language processing, image and speech recognition, and predictive modeling [20,21]. In addition, neural networks are capable of taking past experiences into consideration and using that information to make inferences and judgments about new situations. This is known as "contextual learning," and it allows neural networks to adapt to changing conditions and make more accurate predictions over time.
Overall, NNs are a powerful tool for processing and analyzing complex data, and their performance characteristics make them well-suited to a wide range of tasks in fields such as machine learning [22], artificial intelligence [23,24], and cognitive science [25]. There are different kinds of NNs, each with its own unique architecture and characteristics. Among the most common types, we can find feed-forward neural networks [26], recurrent neural networks (RNNs) [27], convolutional neural networks (CNNs) [28], auto-encoder neural networks [29], generative adversarial networks [30], etc. In this paper, we discuss feed-forward neural networks in detail.

1.2. A Brief Review of Feed-Forward Neural Networks and Their Uses

The connections between the nodes in feed-forward neural networks (FFNNs) do not form cycles, and data move in only one direction, from the input layer to the output layer. This makes FFNNs a relatively simple type of neural network compared to others such as RNNs and CNNs. FFNNs have been used in numerous applications, such as natural language processing [10,11], image [7,8] and speech recognition [9], and financial forecasting [31]. They have been among the most successful learning algorithms and have been the basis for many other types of neural networks. In this work, FFNNs serve as a method for determining the output of expert information, a role comparable to that of techniques such as the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method [32] and the grey relational analysis (GRA) method [33], among others. FFNNs are machine learning models that learn to make predictions or classifications based on input data; they do not rely on explicit expert knowledge or rules but rather on patterns and relationships in the data themselves. TOPSIS and GRA, on the other hand, are decision-making methods used in multi-criteria decision analysis (MCDA) [34]. These methods involve comparing alternatives based on multiple criteria and weighing their importance to reach a decision, and they involve neither machine learning nor neural networks.

1.3. A Brief Review of Activation Function and Its Importance

Both the output layer and the hidden layers of an FFNN can use activation functions [35]. Activation functions are essential in neural networks because they introduce nonlinearity, which allows the network to simulate complicated interactions between input and output variables. The rectified linear unit (ReLU) [36] and the family of sigmoid functions (including the logistic sigmoid function [37], the hyperbolic tangent [38], and the arctangent function [38]) are popular activation functions used in FFNNs. ReLU is widely used in the hidden layers because of its simplicity and effectiveness in training deep networks: it outputs zero for negative input and increases linearly for positive input. The logistic sigmoid function, on the other hand, is commonly used in the output layer to transform the output of the network into a probability between 0 and 1. It has a smooth and differentiable curve, which makes it easy to compute the gradient during back-propagation, and since its output always lies between 0 and 1, it is appropriate for binary classification tasks. However, it is susceptible to the vanishing gradient problem, particularly in deep neural networks [39], which can result in slow training and poor performance. Overall, choosing the appropriate activation function for a neural network depends on the specific problem at hand and the structure of the network.
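For illustration, the following is a minimal Python sketch of the two activation functions discussed above; the function names are ours, not from the paper:

```python
import math

def relu(x: float) -> float:
    # Outputs zero for negative input and increases linearly for positive input.
    return max(0.0, x)

def logistic_sigmoid(x: float) -> float:
    # Smooth, differentiable squashing of any real input into (0, 1),
    # suitable for binary classification outputs.
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_gradient(x: float) -> float:
    # Gradient used during back-propagation; it vanishes for large |x|,
    # which is the source of the vanishing-gradient problem noted above.
    s = logistic_sigmoid(x)
    return s * (1.0 - s)

print(relu(-2.0), relu(3.0))            # 0.0 3.0
print(round(logistic_sigmoid(0.0), 2))  # 0.5
```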
The main objectives of this research are:
  • There are numerous aggregation operators, such as Einstein, Yager, and Dombi. In this research, we combine the Yager [40,41,42] and Dombi [43] aggregation operators to make a new aggregation operator called the Yager–Dombi aggregation operator and explain its desirable properties;
  • We expand the concept of a feed-forward neural network to incorporate a feed-forward double-hierarchy linguistic neural network using Yager–Dombi operators;
  • We develop a fuzzy neural network for the selection of water purification methods using a double-hierarchy linguistic neural network and use it for the selection of water purification methods;
  • We extend the Yager–Dombi operations to aggregate double-hierarchy fuzzy information.

1.4. Motivation behind the Study

According to the study analysis above, there has been no extensive usage of double-hierarchy linguistic term sets (DHLTSs) in the field of FFNNs or of the Yager–Dombi aggregation operator.
The primary objectives of this study are as follows:
  • To extend the concept of fuzzy neural networks to incorporate double-hierarchy linguistic neural networks;
  • To combine existing aggregation operators to create a new aggregation operator;
  • To develop a fuzzy neural network for the selection of water purification methods;
  • To extend the Yager–Dombi operations to aggregate double-hierarchy fuzzy information.

1.5. Contribution of the Study

In this paper, we combine two t-norms (Yager and Dombi t-norms) and apply them to linguistic neural networks, develop a new model of linguistic neural networks, and solve the selection problem of the water purification procedure. The contribution of this paper can be summarized as follows:
  • We develop new t-norms and their operations by using the Yager and Dombi t-norms and discuss their relationships;
  • The developed t-norms are further expanded to aggregation operators to develop a new set of double-hierarchy linguistic terms;
  • The proposed aggregation is necessary for artificial neural networks. Therefore, we integrate the proposed aggregation operators into the hidden layers of a linguistic neural network;
  • We develop a new approach to linguistic neural networks and linguistic decision models using linguistic neural networks;
  • The proposed linguistic decision model, based on a linguistic neural network, is applied to water purification procedure selection problems;
  • The proposed models are verified and compared with other models in Section 7 and Section 8 for validation.
The remaining sections of the paper are structured as follows: In Section 2, we discuss some fundamental definitions. In Section 3, we define the Yager–Dombi t-norm and t-conorm and discuss the DHLTYDWA, DHLTYDOWA, and DHLTYDHWA operators and their desirable properties. We also discuss the score function and the distance between any two DHLTSs. In Section 4, we discuss activation functions and feed-forward neural networks. In Section 5, we discuss the output of the feed-forward neural network using the Yager–Dombi operator. Section 6 illustrates the selection of the best method for water purification. In Section 7, we use the extended TOPSIS [44] and GRA [45] methods for verification. In Section 8, we compare the extended TOPSIS and GRA methods with our proposed method and rank the output using different Yager–Dombi aggregation operators. In Section 9, the authors conclude their study and elaborate on its value and development direction.

2. Fundamental Concept

This section discusses fuzzy sets, intuitionistic fuzzy sets, linguistic term sets, and double-hierarchy linguistic term sets.
If $Y$ is a non-empty set, then an object of the form (Zadeh [46])
$$Z = \{\, \langle y, U_Z(y) \rangle \mid y \in Y \,\}$$
is called a fuzzy set, where $U_Z(y) \in [0,1]$ denotes the degree of membership of $y$ in $Z$. An object of the form
$$Z = \{\, \langle y, U_Z(y), V_Z(y) \rangle \mid y \in Y \,\}$$
is called an intuitionistic fuzzy set, with $U_Z(y) \in [0,1]$ denoting the degree of membership and $V_Z(y) \in [0,1]$ representing the degree of non-membership of $y$ in $Z$. Additionally, the following condition is satisfied:
$$U_Z(y) + V_Z(y) \in [0,1].$$
Fuzzy sets are a mathematical framework for representing and manipulating uncertainty and imprecision in information. In contrast to traditional sets, which define membership in a binary, all-or-nothing way, fuzzy sets allow for partial membership, meaning that an element can belong to a set to a certain degree. The usefulness of fuzzy sets lies in their ability to model and reason about real-world problems that involve uncertainty, ambiguity, and vagueness. For example, in many domains such as decision-making, control systems, and artificial intelligence, it is often difficult or impossible to make precise distinctions between different categories or values. Fuzzy sets provide a flexible and intuitive way to represent and reason about such situations. They allow for a more nuanced and realistic representation of uncertainty and imprecision, which can lead to better decision-making and more robust system design. Several terms that define the variable in a natural language manner, such as “low”, “medium”, and “high”, compose a fuzzy linguistic term set. The degree to which a variable belongs to a given term is described by a fuzzy set that is assigned to each term. Let S be a non-empty set. Then, the linguistic term set is defined as:
$$S = \{\, s_\alpha \mid \alpha \in [-\beta, \beta] \,\}$$
Suppose $S = \{\, s_\alpha \mid \alpha \in [-\beta, \beta] \,\}$ is a first-hierarchy linguistic term set (FHLTS) and $O = \{\, o_\wp \mid \wp \in [-\varsigma, \varsigma] \,\}$ is a second-hierarchy linguistic term set (SHLTS); then, the double-hierarchy linguistic term set (DHLTS) is defined as:
$$S_O = \{\, s_\alpha \langle o_\wp \rangle \mid \alpha \in [-\beta, \beta],\ \wp \in [-\varsigma, \varsigma] \,\}$$
Gou et al. [47] suggested two transformation functions between the DHLT subscripts and their numerical scale in order to deal with DHLTSs more effectively. Let $S_O = \{ s_\alpha \langle o_\wp \rangle \}$ be a continuous DHLTS. The transformation functions $f$ and $f^{-1}$ between the numerical value $\Upsilon \in [0,1]$ and the subscript pair $[\alpha, \wp]$ of the DHLT $s_\alpha \langle o_\wp \rangle$ are given below:
$$f: [-\beta, \beta] \times [-\varsigma, \varsigma] \to [0,1], \qquad f(\alpha, \wp) = \frac{\wp + (\alpha + \beta)\varsigma}{2\beta\varsigma} = \Upsilon$$
$$f^{-1}: [0,1] \to [-\beta, \beta] \times [-\varsigma, \varsigma], \qquad f^{-1}(\Upsilon) = s_{[2\beta\Upsilon - \beta]} \big\langle o_{\varsigma(2\beta\Upsilon - \beta - [2\beta\Upsilon - \beta])} \big\rangle$$
where $[2\beta\Upsilon - \beta]$ denotes the integer part of the value $2\beta\Upsilon - \beta$.
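As a quick illustration, the transformation pair can be sketched in Python; this is a minimal sketch assuming first- and second-hierarchy bounds β and ς (named `beta` and `varsigma` here):

```python
import math

def f(alpha: int, p: int, beta: int, varsigma: int) -> float:
    # Map the DHLT subscript pair [alpha, p] to its numerical scale in [0, 1].
    return (p + (alpha + beta) * varsigma) / (2 * beta * varsigma)

def f_inverse(y: float, beta: int, varsigma: int):
    # Recover the subscript pair: the integer part of 2*beta*y - beta gives the
    # first-hierarchy index; the scaled remainder gives the second-hierarchy index.
    v = 2 * beta * y - beta
    alpha = math.floor(v)
    p = round(varsigma * (v - alpha))
    return alpha, p

print(f(1, 2, 4, 4))             # 0.6875
print(f_inverse(0.6875, 4, 4))   # (1, 2)
```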

3. Yager–Dombi Operators for DHLTSs

This section gives the basic concepts of Yager–Dombi t-norm, Yager–Dombi t-conorm, Yager–Dombi operators, and Yager–Dombi aggregation operators for DHLTSs.
If $G$ and $H$ are any two numbers, then the Yager–Dombi t-norm and Yager–Dombi t-conorm are given as follows:
$$T_{YD}(G,H) = \frac{1}{\left( 2 + \left[ \left( \frac{1-\min\left(1,(1-G)^{t}\right)}{\min\left(1,(1-G)^{t}\right)} \right)^{k} + \left( \frac{1-\min\left(1,(1-H)^{t}\right)}{\min\left(1,(1-H)^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
$$S_{YD}(G,H) = 1 - \frac{1}{\left( 1 + \left[ \left( \frac{\min\left(1,G^{t}\right)}{1-\min\left(1,G^{t}\right)} \right)^{k} + \left( \frac{\min\left(1,H^{t}\right)}{1-\min\left(1,H^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
where $k, t \geq 1$ and $G, H \in [0,1]$.
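Read literally, the formulas above (as reconstructed from the source) translate into the following Python sketch; this is our reading of the definition, with parameter names `k` and `t` as in the text, not a reference implementation:

```python
def yd_tnorm(g: float, h: float, k: float = 2.0, t: float = 2.0) -> float:
    # Yager-style terms min(1, (1-x)^t) plugged into a Dombi-style rational form.
    def term(x: float) -> float:
        m = min(1.0, (1.0 - x) ** t)
        return ((1.0 - m) / m) ** k if m > 0.0 else float("inf")
    inner = (term(g) + term(h)) ** (1.0 / t)
    return 1.0 / (2.0 + inner) ** (1.0 / k)

def yd_tconorm(g: float, h: float, k: float = 2.0, t: float = 2.0) -> float:
    # Dual form with min(1, x^t) terms; approaches 1 as either argument does.
    def term(x: float) -> float:
        m = min(1.0, x ** t)
        return (m / (1.0 - m)) ** k if m < 1.0 else float("inf")
    inner = (term(g) + term(h)) ** (1.0 / t)
    return 1.0 - 1.0 / (1.0 + inner) ** (1.0 / k)

print(round(yd_tnorm(0.6, 0.7), 4), round(yd_tconorm(0.6, 0.7), 4))
```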
To define operational rules based on the Yager–Dombi t-norms, let $S_1 = F(s_{\alpha_1}\langle o_{\wp_1}\rangle)$ and $S_2 = F(s_{\alpha_2}\langle o_{\wp_2}\rangle)$ be any two DHLTSs, and let $k, t \geq 1$ and $\lambda > 0$ be any real numbers. Then, we establish the Yager–Dombi operations for DHLTSs, which are given below:
  • $S_1 \oplus S_2 = 1 - \dfrac{1}{\left( 1 + \left[ \left( \frac{\min\left(1,S_1^{t}\right)}{1-\min\left(1,S_1^{t}\right)} \right)^{k} + \left( \frac{\min\left(1,S_2^{t}\right)}{1-\min\left(1,S_2^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$
  • $S_1 \otimes S_2 = \dfrac{1}{\left( 2 + \left[ \left( \frac{1-\min\left(1,(1-S_1)^{t}\right)}{\min\left(1,(1-S_1)^{t}\right)} \right)^{k} + \left( \frac{1-\min\left(1,(1-S_2)^{t}\right)}{\min\left(1,(1-S_2)^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$
  • $\lambda S_1 = 1 - \dfrac{1}{\left( 1 + \left[ \lambda \left( \frac{\min\left(1,S_1^{t}\right)}{1-\min\left(1,S_1^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$
  • $S_1^{\lambda} = \dfrac{1}{\left( 2 + \left[ \lambda \left( \frac{1-\min\left(1,(1-S_1)^{t}\right)}{\min\left(1,(1-S_1)^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$
If $S_1 = F(s_{\alpha_1}\langle o_{\wp_1}\rangle)$ and $S_2 = F(s_{\alpha_2}\langle o_{\wp_2}\rangle)$ are any two DHLTEs, then the distance between them is defined as:
$$d(S_1, S_2) = \left| F(s_{\alpha_1}\langle o_{\wp_1}\rangle) - F(s_{\alpha_2}\langle o_{\wp_2}\rangle) \right|$$
where $d(S_1, S_2) \in [0,1]$.
Suppose $S_i = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the double-hierarchy linguistic term Yager–Dombi weighted averaging (DHLTYDWA) operator is a mapping $Q^n \to Q$ such that:
$$\mathrm{DHLTYDWA}(S_1, S_2, \ldots, S_n) = \bigoplus_{i=1}^{n} W_i S_i$$
where $W = (W_1, W_2, \ldots, W_n)^{T}$ is the weight vector of $S_i\ (i = 1, 2, \ldots, n)$, satisfying $W_i > 0$ and $\sum_{i=1}^{n} W_i = 1$.
In the following theorem, we prove that the aggregated values obtained from different DHLTSs by using the proposed aggregation operators are again DHLTSs. This means that the aggregation operators are valid. Additionally, we provide validation for the proposed aggregation operators using Yager–Dombi t-norms.
Theorem 1.
Suppose $S_i = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the value aggregated from DHLTSs using the DHLTYDWA operator is also a DHLTS, that is:
$$\mathrm{DHLTYDWA}(S_1, S_2, \ldots, S_n) = \bigoplus_{i=1}^{n} W_i S_i = 1 - \frac{1}{\left( 1 + \left[ \sum_{i=1}^{n} W_i \left( \frac{\min\left(1,S_i^{t}\right)}{1-\min\left(1,S_i^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
where $W = (W_1, W_2, \ldots, W_n)^{T}$ is the weight vector of $S_i\ (i = 1, 2, \ldots, n)$, satisfying $W_i > 0$ and $\sum_{i=1}^{n} W_i = 1$.
Proof. 
To prove this theorem, we employ the mathematical induction approach.
For $n = 1$, we have:
$$\mathrm{DHLTYDWA}(S_1) = W_1 S_1 = 1 - \frac{1}{\left( 1 + \left[ W_1 \left( \frac{\min\left(1,S_1^{t}\right)}{1-\min\left(1,S_1^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
As a result, the statement holds true for $n = 1$. Assume that the result holds true for $n = h$:
$$\mathrm{DHLTYDWA}(S_1, \ldots, S_h) = \bigoplus_{i=1}^{h} W_i S_i = 1 - \frac{1}{\left( 1 + \left[ \sum_{i=1}^{h} W_i \left( \frac{\min\left(1,S_i^{t}\right)}{1-\min\left(1,S_i^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
Next, we show that the result holds for $n = h + 1$:
$$\mathrm{DHLTYDWA}(S_1, \ldots, S_{h+1}) = \bigoplus_{i=1}^{h+1} W_i S_i = \left( \bigoplus_{i=1}^{h} W_i S_i \right) \oplus W_{h+1} S_{h+1} = 1 - \frac{1}{\left( 1 + \left[ \sum_{i=1}^{h+1} W_i \left( \frac{\min\left(1,S_i^{t}\right)}{1-\min\left(1,S_i^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
Hence, the result is valid for $n = h + 1$, and the theorem holds for all $n$. Moreover, the DHLTYDWA operator satisfies the idempotency, boundedness, and monotonicity properties. □
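A numeric sketch of the DHLTYDWA formula of Theorem 1, as reconstructed above, may help fix ideas; here `values` are the numerical-scale values $F(\cdot)$ of the DHLTEs in $[0,1]$:

```python
def dhltydwa(values, weights, k: float = 1.0, t: float = 1.0) -> float:
    # Weighted Yager-Dombi aggregation of DHLTE numerical-scale values.
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    acc = 0.0
    for f, w in zip(values, weights):
        m = min(1.0, f ** t)
        acc += w * ((m / (1.0 - m)) ** k if m < 1.0 else float("inf"))
    return 1.0 - 1.0 / (1.0 + acc ** (1.0 / t)) ** (1.0 / k)

# Idempotency check for t = k = 1: equal inputs aggregate to themselves.
print(round(dhltydwa([0.4, 0.4, 0.4], [0.2, 0.5, 0.3]), 3))  # 0.4
```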
Suppose $S_i = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the double-hierarchy linguistic term Yager–Dombi ordered weighted averaging (DHLTYDOWA) operator is a mapping $Q^n \to Q$ such that:
$$\mathrm{DHLTYDOWA}(S_1, S_2, \ldots, S_n) = \bigoplus_{i=1}^{n} W_i S_{R(i)}$$
where $W = (W_1, W_2, \ldots, W_n)^{T}$ is the weight vector of $S_i\ (i = 1, 2, \ldots, n)$, satisfying $W_i > 0$ and $\sum_{i=1}^{n} W_i = 1$, and $(R(1), R(2), \ldots, R(n))$ is a permutation of $(1, 2, \ldots, n)$ such that $S_{R(i-1)} \geq S_{R(i)}$ for all $i = 2, \ldots, n$.
In the following theorem, we prove that the values aggregated from different DHLTSs using the proposed ordered weighted aggregation operator are again DHLTSs, which means that the operator is valid. Additionally, we provide validation for the proposed ordered weighted aggregation operator using Yager–Dombi t-norms.
Theorem 2.
Suppose $S_i = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the value aggregated from DHLTSs using the DHLTYDOWA operator is also a DHLTS, that is:
$$\mathrm{DHLTYDOWA}(S_1, S_2, \ldots, S_n) = \bigoplus_{i=1}^{n} W_i S_{R(i)} = 1 - \frac{1}{\left( 1 + \left[ \sum_{i=1}^{n} W_i \left( \frac{\min\left(1,S_{R(i)}^{t}\right)}{1-\min\left(1,S_{R(i)}^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
Proof. 
The proof of this result is similar to the proof of Theorem 1. Moreover, the DHLTYDOWA operator satisfies the idempotency, boundedness, and monotonicity properties. □

Suppose $S_i^{h} = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the double-hierarchy linguistic term Yager–Dombi hybrid weighted averaging (DHLTYDHWA) operator is a mapping $Q^n \to Q$ such that:
$$\mathrm{DHLTYDHWA}(S_1^{h}, S_2^{h}, \ldots, S_n^{h}) = \bigoplus_{i=1}^{n} W_i S_i^{h}$$
where $W = (W_1, W_2, \ldots, W_n)^{T}$ is the weight vector of $S_i^{h}\ (i = 1, 2, \ldots, n)$, satisfying $W_i > 0$ and $\sum_{i=1}^{n} W_i = 1$.
In the following theorem, we prove that the values aggregated from different DHLTSs using the proposed hybrid weighted aggregation operator are again DHLTSs, which means that the operator is valid. Additionally, we provide validation for the proposed hybrid weighted aggregation operator using Yager–Dombi t-norms.
Theorem 3.
Suppose $S_i^{h} = F(s_{\alpha_i}\langle o_{\wp_i}\rangle)\ (i = 1, 2, \ldots, n)$ is a collection of DHLTSs; then, the value aggregated from DHLTSs using the DHLTYDHWA operator is also a DHLTS, that is:
$$\mathrm{DHLTYDHWA}(S_1^{h}, S_2^{h}, \ldots, S_n^{h}) = \bigoplus_{i=1}^{n} W_i S_i^{h} = 1 - \frac{1}{\left( 1 + \left[ \sum_{i=1}^{n} W_i \left( \frac{\min\left(1,(S_i^{h})^{t}\right)}{1-\min\left(1,(S_i^{h})^{t}\right)} \right)^{k} \right]^{1/t} \right)^{1/k}}$$
Proof. 
The proof of this result is similar to the proof of Theorem 1. Moreover, the DHLTYDHWA operator satisfies the idempotency, boundedness, and monotonicity properties. □

4. Activation Functions and Neural Network Systems

An activation function [48] is a function that gives the output of a node. Additionally, it is known as the “transfer function.” It is used to determine if the output of a neural network is a yes or no response. Depending on the function, it can transfer the output values to a range between 0 and 1 or between −1 and 1, among others. An activation function can be either linear or nonlinear. The terms monotonic function and derivative are essential for understanding nonlinear functions. The range or curves of the nonlinear activation functions are the major factors used to categorize them. The sigmoid or logistic activation function that we use is defined as follows:
$$f(x) = \Upsilon = \frac{e^{x}}{e^{x} + 1}$$
where $x = \pm 1, \pm 2, \pm 3, \pm 4, \ldots$
The sigmoid activation function, with its range of 0 to 1, is commonly employed in decision-making because it reliably performs its task. Since it has the narrowest range and generates precise predictions, we employ this activation function whenever we need to determine an outcome. The function can take many different shapes, and as a consequence, we are able to determine the slope of the sigmoid curve between any two points. Although the derivative of the function is not monotonic, the function itself is. One drawback of the logistic sigmoid function is that it can cause a neural network to become stuck during training. Neural networks themselves were first developed in the 1950s. Programming everyday computers to function similarly to a network of brain cells is the process used to create ANNs. Artificial neural networks employ a complex mathematical algorithm to make sense of the data they are fed. A typical artificial neural network is made up of hundreds to millions of units, commonly referred to as artificial neurons, arranged in layers. The input layer receives a variety of outside data types, which the network then processes or comprehends. After leaving the input layer, the data pass through one or more hidden units; the hidden unit is in charge of transforming the incoming data into a format that the output unit can use. Most neural networks have complete interconnections between layers. These connections, similarly to the connections in the human brain, are weighted: the higher the number, the greater the impact one unit has on another. Every unit in the network gains knowledge as the data pass through it. The output units, placed on the other side of the network, deliver the data that have been received and processed. ANNs come in different types, but in this paper, we only discuss FFNNs in detail.
A feed-forward neural network (FFNN) is one in which the connections between the nodes form no cycles. The opposite of an FFNN is a feed-backward neural network (FBNN), in which certain routes cycle back. The feed-forward model is the most fundamental type of neural network, since incoming data are only ever processed in one direction: in a feed-backward neural network, some of the input data return from the hidden layers to the input layer, whereas in a feed-forward network, regardless of the number of hidden nodes the data pass through, they never flow backward and always move ahead. The network is built from layers of fundamental processing units: an input layer, one or more hidden layers, and an output layer. Each unit in one layer is linked to every unit in the next layer, and because these connections are not all made equally, each may have a different weight or strength. A feed-forward neural network computes:
$$\beta = f\left( \sum_{j=1}^{n} W_j a_{ij} \right)$$
where $f$ denotes the activation function, $a_{ij}$ is the input signal, $W_j$ is the criteria weight, and $\beta$ is the single output. The output of a feed-forward neural network is shown in Figure 1.
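A single output neuron of this kind reduces to a few lines of Python. This is a toy sketch of $\beta = f(\sum_j W_j a_{ij})$ with the logistic activation used later in the paper; the numbers are illustrative only:

```python
import math

def neuron_output(inputs, weights) -> float:
    # Weighted sum of the input signals, squashed by f(x) = e^x / (e^x + 1).
    z = sum(w * a for w, a in zip(weights, inputs))
    return math.exp(z) / (math.exp(z) + 1.0)

print(round(neuron_output([0.74, 0.33, 0.21], [0.5, 0.3, 0.2]), 4))
```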
As shown in Figure 2, neuro-fuzzy systems are a type of hybrid intelligent system that combines artificial neural networks with fuzzy logic and is effective at processing complicated data. Figure 2 depicts a fuzzy neuron, a processing element of a hybrid neural net, where $a_1, a_2$ are the input signals, $w_1, w_2$ are the corresponding weights of the input signals, and $\Lambda$ is the AND fuzzy neuron.
Figure 2. Simple neuro-fuzzy system.

5. The Output of Neural Networks Using Yager–Dombi Operators

In this section, we put forward a new extension of the fuzzy neural system and develop a fuzzy neural system based on DHLTSs. Let $P_j = \{p_1, p_2, \ldots, p_k\}$ be the set of attributes to be evaluated, and let $Q_i = \{q_1, q_2, \ldots, q_z\}$ be the discrete set of $z$ alternatives from which a selection must be made. Suppose $W = (W_1, W_2, \ldots, W_k)^{T}$ is the attribute weight vector, where the $W_j\ (j = 1, 2, \ldots, k)$ are real numbers such that $W_j > 0$ and $\sum_{j=1}^{k} W_j = 1$. Let there be $n$ experts, $DM_r\ (r = 1, 2, \ldots, n)$, who express their views on the $z$ alternatives with respect to the $k$ criteria in the context of DHLTSs. The data given by the $r$th expert appear in the form of a matrix:
$$g_r = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{z1} & a_{z2} & \cdots & a_{zk} \end{pmatrix}$$
To determine the output of the feed-forward fuzzy neural system, we go through different phases. In phase 1, we determine the criteria weight vector of each matrix provided by the experts corresponding to each input information or signal. In phase 2, we find the hidden layer using Equations (18) and (19). In phase 3, the activation function is used to calculate the output of the input data.
To determine the feed-forward fuzzy neural system’s output in a DHLTS environment, we suggest the following algorithm:
Phase 1: The following are the steps in Phase 1:
Step 1: Recognize the expert data or information presented in the form of a matrix $g_r = \{A_{ij}\}_{z \times k}$, where $A_{ij}\ (i = 1, 2, \ldots, z;\ j = 1, 2, \ldots, k)$ represents the $j$th criterion value of the $i$th alternative;
Step 2: Determine the criteria weight vector of each matrix provided by the experts corresponding to each input information or signal using the entropy measure method, which consists of the following steps:
  • To evaluate the entropy of the information provided by the experts in the form of matrices, we use the following equation:
$$n_j = \frac{1}{z} \sum_{i=1}^{z} \left[ \left( \sqrt{2}\, \cos\!\left( \frac{\pi \left( F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle) - F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle)^{c} \right)}{4} \right) - 1 \right) \cdot \frac{1}{\sqrt{2} - 1} \right]$$
where $j = 1, 2, \ldots, k$ and $i = 1, 2, \ldots, z$ index the criteria and the alternatives, respectively, $F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle)$ denotes the numerical value of the DHLTE, and $F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle)^{c}$ is the complement of the given DHLTE, determined as:
$$F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle)^{c} = F(s_{-\alpha_{ij}}\langle o_{-\wp_{ij}}\rangle) = 1 - F(s_{\alpha_{ij}}\langle o_{\wp_{ij}}\rangle)$$
  • We obtain the final criteria weight vector as follows:
$$W_j = \frac{n_j}{\sum_{j=1}^{k} n_j}$$
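Under the reconstruction above, the entropy-based weighting of Phase 1 can be sketched as follows; `matrix` holds the numerical-scale values $F(\cdot)$ in $[0,1]$, one row per alternative and one column per criterion, and the complement is taken as $1 - F$ (equivalent, under the transformation $f$, to negating both subscripts):

```python
import math

def entropy_weights(matrix):
    # n_j: cosine-based entropy of criterion j over all alternatives;
    # W_j = n_j / sum_j n_j gives the final criteria weight vector.
    z, k = len(matrix), len(matrix[0])
    n = []
    for j in range(k):
        e = 0.0
        for i in range(z):
            f = matrix[i][j]
            fc = 1.0 - f  # complement of the DHLTE value
            e += (math.sqrt(2) * math.cos(math.pi * (f - fc) / 4) - 1) / (math.sqrt(2) - 1)
        n.append(e / z)
    total = sum(n)
    return [nj / total for nj in n]

print(entropy_weights([[0.7, 0.3], [0.6, 0.4], [0.9, 0.5]]))
```

Note that this entropy equals 1 when $F = F^{c} = 0.5$ (maximal uncertainty) and 0 when $F \in \{0, 1\}$, which is the behavior an entropy measure should have.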
Phase 2: The steps in Phase 2 are listed below:
Step 1: Find the scalar product of the weight vector and the input signals using the Yager–Dombi aggregation operator as follows:
$$M = \bigoplus_{r=1}^{n} W_j a_j$$
where $W = (W_1, W_2, \ldots, W_k)$ and $r = 1, 2, \ldots, n$ denote the weight vector and the experts, respectively, and $a_j = (a_1, a_2, \ldots, a_j)$ denotes the input signals;
Step 2: Take the minimum weight and multiply it with $M$ using the Yager–Dombi averaging operator:
$$W_j = \min\{ g_1 w_j,\ g_2 w_j,\ g_3 w_j,\ \ldots,\ g_n w_j \}$$
Phase 3: The steps taken in Phase 3 are as follows:
Step 1: Apply the logistic activation function to the outcome of Step 2 in Phase 2 to obtain the final output $\beta$. The logistic activation function is described as follows:
$$f(x) = \Upsilon = \frac{e^{x}}{e^{x} + 1}$$
Step 2: Rank the possible outcomes for each β in descending order.
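Phases 2 and 3 together amount to a weighted combination of the hidden-layer signals followed by the logistic activation and a descending sort. The sketch below uses an ordinary weighted sum in place of the Yager–Dombi operator for brevity, and all numbers are illustrative:

```python
import math

def output_layer(hidden, weights):
    # hidden: alternative name -> hidden-layer signals; returns (name, beta)
    # pairs ranked in descending order of the activated output.
    scores = {}
    for name, signals in hidden.items():
        z = sum(w * s for w, s in zip(weights, signals))
        scores[name] = math.exp(z) / (math.exp(z) + 1.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

hidden = {"q1": [0.61, 0.43, 0.28], "q2": [0.72, 0.55, 0.31]}
print(output_layer(hidden, [0.4, 0.35, 0.25]))
```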

Output of Feed-Forward Double-Hierarchy Linguistic Term Neural Networks

An artificial neural network of this sort, known as a feed-forward double-hierarchy linguistic term neural network (FFDHLTNN), only allows information to travel in one direction: from the input layer, through one or more hidden layers, to the output layer. It is called feed-forward because data move forward through the network without looping back on themselves. In a typical FFDHLTNN architecture, each layer consists of a set of neurons or nodes that perform a simple mathematical operation on the inputs and pass the result to the next layer. Once the input layer has received the input data, the output layer produces the network's ultimate output. As demonstrated in Figure 3, the hidden layers execute intricate modifications on the input data to provide characteristics that are beneficial for the output layer.
Step 1: The decision-making matrix given by the decision-makers in the LTS environment must be identified, and all evaluations must be based on the LTS that the DM has provided:
$$S_O = s_\alpha \langle o_\wp \rangle : \begin{cases} s_{-4} = \textit{extremely low},\ s_{-3} = \textit{very low},\ s_{-2} = \textit{low},\ s_{-1} = \textit{slightly low},\ s_{0} = \textit{medium},\\ s_{1} = \textit{slightly high},\ s_{2} = \textit{high},\ s_{3} = \textit{very high},\ s_{4} = \textit{extremely high};\\ o_{-4} = \textit{far from},\ o_{-3} = \textit{scarcely},\ o_{-2} = \textit{only a little},\ o_{-1} = \textit{a little},\ o_{0} = \textit{just right},\\ o_{1} = \textit{much},\ o_{2} = \textit{very much},\ o_{3} = \textit{extremely much},\ o_{4} = \textit{entirely} \end{cases}$$
(a small sketch encoding this scale is given after this step list);
Step 2: Find the criteria weight for each matrix provided by the experts in the environment of the DHLTEs using the entropy measure method;
Step 3: Find the scalar product of input information or signals and their corresponding weights, and add the scalar product of the input signals using Equations (18) and (19);
Step 4: Apply the logistic activation function to the outcome of Step 3 to obtain the final output β;
Step 5: Rank the possible outcomes for each β in descending order.
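The linguistic scale of Step 1 maps naturally onto the numerical transformation $f$; a tiny sketch, with the scale encoded as plain subscript pairs and $\beta = \varsigma = 4$ assumed:

```python
FIRST = {"extremely low": -4, "very low": -3, "low": -2, "slightly low": -1,
         "medium": 0, "slightly high": 1, "high": 2, "very high": 3,
         "extremely high": 4}
SECOND = {"far from": -4, "scarcely": -3, "only a little": -2, "a little": -1,
          "just right": 0, "much": 1, "very much": 2, "extremely much": 3,
          "entirely": 4}

def dhlt_value(first: str, second: str, beta: int = 4, varsigma: int = 4) -> float:
    # Numerical-scale value of the DHLT s_alpha<o_p> via the function f above.
    alpha, p = FIRST[first], SECOND[second]
    return (p + (alpha + beta) * varsigma) / (2 * beta * varsigma)

print(dhlt_value("high", "very much"))  # s_2<o_2> -> 0.8125
```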

6. Numerical Example

Water is a clear, flavorless, and odorless liquid that is necessary for all life on Earth. It is a chemical substance with the chemical formula H2O, consisting of two hydrogen atoms and one oxygen atom. Water may exist in three states of matter: solid (ice), liquid (water), and gas (water vapor). It is the only substance on Earth that can naturally exist in all three states at normal temperatures and pressures. Water plays a crucial role in many aspects of our lives, including hydration, agriculture, industry, transportation, and recreation. However, due to the growth of industries, access to clean and safe drinking water has become a serious concern for many people throughout the world. Water pollution causes various diseases, some of which lead to death. As a result, supplying clean water to domestic areas is the most crucial responsibility of governments. In this section, we look at different commercial-scale water purification systems and the factors that influence them.
The following techniques are used to purify water on a commercial scale:
(1) $q_1$: Chlorination: Chlorination is a process that uses chlorine to disinfect water. Chlorine is added to water, which kills the harmful bacteria and viruses present in it. This method is effective in killing most of the disease-causing pathogens;
(2) $q_2$: Reverse Osmosis: In the reverse osmosis (RO) process, a semi-permeable membrane is utilized to filter out dissolved particles, contaminants, and minerals from water. It is a highly effective method of water purification and is commonly used in households and industries;
(3) $q_3$: Ultraviolet Purification: UV purification uses ultraviolet light to kill bacteria, viruses, and other microorganisms present in water. It is an effective method of water purification and does not use any chemicals;
(4) $q_4$: Filtration: Filtration is a process that removes impurities from water by passing it through a porous material. Sediments, dirt, and other bigger particles can be effectively removed from water with this technique;
(5) $q_5$: Coagulation and Sedimentation: Chemicals such as alum are added to water to cause impurities to clump together and settle at the bottom of a tank, from which they can then be removed through sedimentation;
(6) $q_6$: Boiling: By bringing water to a boil for at least one minute, the majority of disease-causing organisms can be killed;
(7) $q_7$: Distillation: Water is heated during distillation, and the steam is subsequently condensed back into water. Minerals, chemicals, and bacteria are just a few of the impurities that can be removed by using this method.
The factors that affect the methods of water purification are:
  • $p_1$: Economic factors: Economic factors can have a significant impact on water purification, as the process of treating and purifying water can be expensive and require significant investments in infrastructure, technology, and human resources. One major economic factor that can affect water purification is the availability and cost of resources such as energy, chemicals, and materials needed for the purification process;
  • $p_2$: Socio-political factors: The socio-political environment can have a significant impact on water purification. Governments have an obligation to make sure that their populations have access to safe drinking water, since access to clean water is a fundamental human right. However, the provision of clean water can be influenced by a variety of socio-political factors, such as social factors, public health, and political instability, among others;
  • $p_3$: Environmental factors: There are many environmental factors that can affect water purification, including temperature, chemicals, turbidity, and climate change. Overall, environmental factors can have a significant impact on water purification and must be taken into account when designing and implementing water purification systems;
  • $p_4$: Type of contaminants: The type and concentration of contaminants present in the water will also affect the purification process. Different treatment processes are better suited for removing different types of contaminants, such as chemicals, microbes, or sediments;
  • $p_5$: Water quality standards: The level of purity required for the final product will affect the purification process. Different industries and applications have different standards for water quality, which will influence the choice of treatment process and the extent of purification required.
For this, let $\{q_1, q_2, q_3, q_4, q_5, q_6, q_7\}$ represent the set of alternatives for commercial-scale water purification, and let $\{p_1, p_2, p_3, p_4, p_5\}$ be the five variables influencing these procedures. The network's ultimate output is produced by the output layer once the input layer has received the input data; between the input and output layers, the hidden layers carry out complicated modifications to the data to provide characteristics that are beneficial to the output layer. The linguistic information is treated as input signals, which interact with the weight vectors in the input layer, and the Yager–Dombi t-norms produce the product of the input signals as a linguistic variable. Then, the aggregated information of the input signals is calculated using the Yager–Dombi t-conorms. The hidden-layer information obtained after applying the Yager–Dombi t-norms and t-conorms is given in Table 1. Applying the activation function to the hidden-layer signals yields the output-layer signals of the linguistic feed-forward neural network, which are given in Table 2.
According to experts, the best solution is reverse osmosis, whereas filtration is effective for basic water purification tasks, including chlorine and sediment removal. Reverse osmosis removes contaminants across a wider range. Alternative methods eliminate all pathogens in the water, but the dead bacteria still float in it. An RO water purifier, on the other hand, both kills bacteria and filters out their dead cells. As a result, RO-purified water is cleaner.

7. Verification of Our Proposed Method

In this section, we consider the extended TOPSIS approach and the GRA method to verify the effectiveness and validity of our proposed approach. The extended TOPSIS and GRA methods are used for verification in decision-making because they provide a more comprehensive and objective analysis of multiple criteria and are effective in handling uncertainty, imprecision, and complex decision-making scenarios. For this verification, we consider the information given by the experts in the form of three matrices in the environment of DHLTSs. The matrices provided by the experts consist of five attributes related to seven alternatives. We use the Yager–Dombi aggregation operator to aggregate the information given by the experts in the form of matrices in the environment of DHLTSs. Then, we apply the entropy approach to determine the criteria weight vector of the aggregated matrices. Finally, we apply the extended TOPSIS method to obtain the required output, which is given in Table 3.
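For reference, the core of the extended TOPSIS step (distances to the ideal solutions and the relative closeness used in Table 3) can be sketched as follows; a weighted Hamming distance and benefit-type criteria are assumed here, which may differ in detail from the paper's exact formulation:

```python
def topsis_closeness(agg, weights):
    # agg: one row of aggregated numerical-scale values per alternative.
    cols = list(zip(*agg))
    pis = [max(c) for c in cols]  # positive ideal solution
    nis = [min(c) for c in cols]  # negative ideal solution
    closeness = []
    for row in agg:
        d_pos = sum(w * abs(x - p) for w, x, p in zip(weights, row, pis))
        d_neg = sum(w * abs(x - n) for w, x, n in zip(weights, row, nis))
        closeness.append(d_neg / (d_pos + d_neg))
    return closeness  # rank alternatives by descending closeness
```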
Similarly, we use the Yager–Dombi aggregation operator to aggregate the experts’ information and then apply the entropy measure method to determine the criteria weight vector. Finally, we apply the GRA method to obtain the required output, as shown in Table 4.
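The GRA step admits a similarly compact sketch; the standard grey relational coefficient with distinguishing coefficient ρ = 0.5 is assumed, with the ideal (maximum) values as the reference sequence:

```python
def grey_relational_grades(agg, weights, rho: float = 0.5):
    # Grey relational coefficients against the reference (ideal) sequence,
    # aggregated into a weighted grade per alternative.
    cols = list(zip(*agg))
    ref = [max(c) for c in cols]  # reference (ideal) sequence
    deltas = [[abs(x - r) for x, r in zip(row, ref)] for row in agg]
    d_min = min(min(d) for d in deltas)
    d_max = max(max(d) for d in deltas)
    grades = []
    for d in deltas:
        coeffs = [(d_min + rho * d_max) / (di + rho * d_max) for di in d]
        grades.append(sum(w * c for w, c in zip(weights, coeffs)))
    return grades  # rank alternatives by descending grade
```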
From Table 2, Table 3 and Table 4, we can see that the results obtained by using both methods (the extended TOPSIS method and the GRA method) are almost the same as the results obtained by using our proposed method. This ensures that our proposed method is valid. Reverse osmosis is therefore the ideal option for experts to choose, while filtering is appropriate for simple water jobs such as chlorine and sediment removal. Reverse osmosis removes contaminants across a wider range. All pathogens in the water are eliminated by alternative methods. However, the dead bacteria still float in the water. An RO water purifier, on the other hand, eliminates germs by killing them and filtering away their floating, lifeless bodies. Therefore, RO-purified water is cleaner.

8. Discussion and Comparison

In this section, we compare our suggested approach with the GRA method. For this, let $P_j = \{p_1, p_2, \ldots, p_k\}$ be the set of attributes to be evaluated, and let $Q_i = \{q_1, q_2, \ldots, q_z\}$ be the discrete set of $z$ alternatives from which a selection must be made. Suppose $W = (W_1, W_2, \ldots, W_k)^{T}$ is the attribute weight vector, where the $W_j\ (j = 1, 2, \ldots, k)$ are real numbers satisfying $W_j > 0$ and $\sum_{j=1}^{k} W_j = 1$. Let there be $n$ experts, $DM_r\ (r = 1, 2, \ldots, n)$, expressing their views on the $z$ alternatives with respect to the $k$ criteria in the context of DHLTSs. Here, the experts gave data in the form of three matrices in the environment of DHLTSs, each consisting of five attributes related to seven alternatives. First, we find the expert weight vector using the entropy measure method. We use the Yager–Dombi aggregation operator to aggregate the information provided by the experts and then apply the entropy measure method to calculate the criteria weight vector of the aggregated matrix, which is $W = (0.214625973, 0.201184071, 0.167857403, 0.204584654, 0.211747899)^{T}$. Next, we determine the positive ideal solution (PIS) and negative ideal solution (NIS) of the aggregated matrix. Finally, we apply the GRA method to determine the output and rank the possible outcomes of each output, as shown in Table 4. For this comparison, we also use the extended TOPSIS approach. Both the extended TOPSIS and TOPSIS methods are MCDM techniques that seek to isolate the best alternative from a group of alternatives based on several criteria. The fundamental distinction between TOPSIS and extended TOPSIS is the number of criteria sets that are taken into account: extended TOPSIS can accommodate many sets of criteria, whereas TOPSIS only takes into account one set of criteria. Similarly, in the extended TOPSIS method, we find the expert weight vector using the entropy measure method, aggregate the experts' matrices with the Yager–Dombi aggregation operator, and apply the entropy measure method to determine the criteria weight vector of the aggregated matrix. Next, we find the positive and negative ideal solutions of the aggregated matrix. The expert information is then converted to one set of criteria, and we finally apply the TOPSIS method to determine the output and rank the possible outcomes, as shown in Table 3. A graphical representation of the comparison between the proposed method and other similar methods can be observed in Figure 4, and the detailed comparison information is given in Table 5.
In our suggested approach, we take the experts’ information in the environment of DHLTSs and find the expert weight using the entropy measure method. After that, we use the Yager–Dombi aggregation operator to find the hidden layer from the experts’ matrices, apply the activation function to generate the output of our proposed method, and finally rank the output. The comparison between the GRA method, the extended TOPSIS method, and our proposed method is shown in Figure 5. Both methods yield the same results as our proposed method. Filtration is useful for simple water tasks such as chlorine and sediment removal, but reverse osmosis is the best choice according to experts. A wider range of contaminants may be removed via reverse osmosis. Other techniques eliminate all germs in the water. However, the dead microorganisms still float in the water. An RO water purifier, on the other hand, filters out the dead bacteria that are floating in the water and eliminates them. It follows that RO-purified water is more sanitary. As a result, we conclude that our suggested technique is valid and superior to both methods (the extended TOPSIS method and the GRA method), since it produces the output in fewer steps, saving experts’ time.

9. Conclusions

The neural network is a very important topic in machine learning, and artificial neural networks have garnered significant attention in the decision-making process. Linguistic information is a useful tool for describing uncertainty in information science and decision-making. Therefore, this paper applies the linguistic term set to artificial neural networks and develops a decision-making model based on linguistic neural networks. First, using the concepts of Yager t-norms and Dombi t-norms, we define a new hybrid t-norm known as the Yager–Dombi t-norm. We further develop some operational rules for double-hierarchy linguistic term sets and generalize them to incorporate more than two double-hierarchy linguistic term sets. The Yager–Dombi aggregation operator has been developed for double-hierarchy linguistic terms. We discuss the desirable properties of the DHLTYDWA operator, the DHLTYDOWA operator, and the DHLTYDHWA operator. Furthermore, we extend the concept of fuzzy neural network systems to feed-forward double-hierarchy linguistic neural network systems. Experts provide information in the environment of DHLTSs, and to determine the expert weight, we use the entropy measure method. We then obtain the hidden layer data using the DHLTYDWA operator and apply FFNNs to the hidden layer to derive the output of the information provided by the experts. In addition, a real-life MADM problem has been formulated. Filtration is beneficial for simple water tasks such as chlorine and sediment removal, but reverse osmosis is the best choice, according to experts, because a wider range of contaminants can be removed with this method. Other techniques eliminate all germs in the water. However, the dead microorganisms still float in it. An RO water purifier, on the other hand, filters out the dead bacteria that are floating in the water and eliminates them. It follows that RO-purified water is more sanitary. Moreover, we use the extended TOPSIS approach and the GRA approach for the verification of our proposed method, and both methods yield almost the same results as our proposed method. A comparison analysis has been carried out, as seen in Figure 4, to demonstrate the validity and viability of our suggested method in comparison to other existing methods. We affirm that our proposed method is significantly better than other existing methods because it produces expert information output in a shorter time.
In future work, we will apply our proposed technique to intuitionistic fuzzy sets for decision-making problems and also use it in three-way decisions (TWDs) and Pythagorean fuzzy sets.

Author Contributions

Methodology, S.A. and N.A.; Formal analysis, A.O.A.; Data curation, A.O.A.; Writing—original draft, S.A. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was funded by Institutional Fund Projects under grant no. IFPIP: 414-611-1443. The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Razi, M.A.; Athappilly, K. A comparative predictive analysis of neural networks (NNs), nonlinear regression and classification and regression tree (CART) models. Expert Syst. Appl. 2005, 29, 65–74. [Google Scholar] [CrossRef]
  2. Coats, P.K.; Fant, L.F. Recognizing financial distress patterns using a neural network tool. Financ. Manag. 1993, 1993, 142–155. [Google Scholar] [CrossRef]
  3. Ceylan, H.; Bayrak, M.B.; Gopalakrishnan, K. Neural Networks Applications in Pavement Engineering: A Recent Survey. Int. J. Pavement Res. Technol. 2014, 7, 434–444. [Google Scholar]
  4. Sarvamangala, D.R.; Kulkarni, R.V. Convolutional neural networks in medical image understanding: A survey. Evol. Intell. 2022, 15, 1–22. [Google Scholar] [CrossRef]
  5. Fang, C.; Dong, H.; Zhang, T. Mathematical models of overparameterized neural networks. Proc. IEEE 2021, 109, 683–703. [Google Scholar] [CrossRef]
  6. Mitchell, J.M.O. Classical statistical methods. Mach. Learn. Neural Stat. Classif. 1994, 1994, 17–28. [Google Scholar]
  7. Koh, J.; Lee, J.; Yoon, S. Single-image deblurring with neural networks: A comparative survey. Comput. Vis. Image Underst. 2021, 203, 103134. [Google Scholar] [CrossRef]
  8. Alshehri, S.A. Neural network technique for image compression. IET Image Process. 2016, 10, 222–226. [Google Scholar] [CrossRef]
  9. Yen, Y.; Fanty, M.; Cole, R. Speech recognition using neural networks with forward-backward probability generated targets. In Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Munich, Germany, 21–24 April 1997; Volume 4, pp. 3241–3244. [Google Scholar]
  10. Collobert, R.; Weston, J. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 160–167. [Google Scholar]
  11. Ma, Q. Natural language processing with neural networks. In Proceedings of the Language Engineering Conference, Hyderabad, India, 13–15 December 2002; pp. 45–56. [Google Scholar]
  12. Rani, S.; Kumar, P. Deep learning based sentiment analysis using convolution neural network. Arab. J. Sci. Eng. 2019, 44, 3305–3314. [Google Scholar] [CrossRef]
  13. Chen, P.; Sun, Z.; Bing, L.; Yang, W. Recurrent attention network on memory for aspect sentiment analysis. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Denmark, 7–11 September 2017; pp. 452–461. [Google Scholar]
  14. Draper, N.R.; Smith, H. Applied Regression Analysis; John Wiley & Sons: Hoboken, NJ, USA, 1998; Volume 326. [Google Scholar]
  15. Yoo, P.D.; Kim, M.H.; Jan, T. Machine learning techniques and use of event information for stock market prediction: A survey and evaluation. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation and International Conference on Intelligent Agents, Web Technologies and Internet Commerce (CIMCA-IAWTIC’06), Sydney, Australia, 29 November–1 December 2006; Volume 2, pp. 835–841. [Google Scholar]
  16. Adya, M.; Collopy, F. How effective are neural networks at forecasting and prediction? A review and evaluation. J. Forecast. 1998, 17, 481–495. [Google Scholar] [CrossRef]
  17. Zaghloul, W.; Lee, S.M.; Trimi, S. Text classification: Neural networks vs support vector machines. Ind. Manag. Data Syst. 2009, 109, 708–717. [Google Scholar] [CrossRef]
  18. Naseer, M.; Minhas, M.F.; Khalid, F.; Hanif, M.A.; Hasan, O.; Shafique, M. Fannet: Formal analysis of noise tolerance, training bias and input sensitivity in neural networks. In Proceedings of the 2020 Design, Automation & Test in Europe Conference & Exhibition (DATE), Grenoble, France, 9–13 March 2020; pp. 666–669. [Google Scholar]
  19. Shin, K.S.; Lee, T.S.; Kim, H.J. An application of support vector machines in bankruptcy prediction model. Expert Syst. Appl. 2005, 28, 127–135. [Google Scholar] [CrossRef]
  20. Sterling, A.J.; Zavitsanou, S.; Ford, J.; Duarte, F. Selectivity in organocatalysis—From qualitative to quantitative predictive models. Wiley Interdiscip. Rev. Comput. Mol. Sci. 2021, 11, e1518. [Google Scholar] [CrossRef]
  21. Liu, Y.; Liu, S.; Wang, Y.; Lombardi, F.; Han, J. A survey of stochastic computing neural networks for machine learning applications. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2809–2824. [Google Scholar] [CrossRef]
  22. Schwendicke, F.A.; Samek, W.; Krois, J. Artificial intelligence in dentistry: Chances and challenges. J. Dent. Res. 2020, 99, 769–774. [Google Scholar] [CrossRef]
  23. Ossowska, A.; Kusiak, A.; Świetlik, D. Artificial intelligence in dentistry—Narrative review. Int. J. Environ. Res. Public Health 2022, 19, 3449. [Google Scholar] [CrossRef]
  24. Drakopoulos, G.; Giannoukou, I.; Mylonas, P.; Sioutas, S. The converging triangle of cultural content, cognitive science, and behavioral economics. In Artificial Intelligence Applications and Innovations, Proceedings of the AIAI 2020 IFIP WG 12.5 International Workshops: MHDW 2020 and 5G-PINE 2020, Neos Marmaras, Greece, 5–7 June 2020, Proceedings 16; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 200–212. [Google Scholar]
  25. Bebis, G.; Georgiopoulos, M. Feed-forward neural networks. IEEE Potentials 1994, 13, 27–31. [Google Scholar] [CrossRef]
  26. Medsker, L.R.; Jain, L.C. Recurrent neural networks. Des. Appl. 2001, 5, 64–67. [Google Scholar]
  27. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
  28. Lange, S.; Riedmiller, M. Deep auto-encoder neural networks in reinforcement learning. In Proceedings of the 2010 International Joint Conference on Neural Networks (IJCNN), Barcelona, Spain, 18–23 July 2010; pp. 1–8. [Google Scholar]
  29. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef]
  30. Chao, J.; Shen, F.; Zhao, J. Forecasting exchange rate with deep belief networks. In Proceedings of the 2011 International Joint Conference on Neural Networks, San Jose, CA, USA, 31 July–5 August 2011; pp. 1259–1266. [Google Scholar]
  31. Zavadskas, E.K.; Mardani, A.; Turskis, Z.; Jusoh, A.; Nor, K.M. Development of TOPSIS method to solve complicated decision-making problems—An overview on developments from 2000 to 2015. Int. J. Inf. Technol. Decis. Mak. 2016, 15, 645–682. [Google Scholar] [CrossRef]
  32. Wei, G.W. GRA method for multiple attribute decision making with incomplete weight information in intuitionistic fuzzy setting. Knowl.-Based Syst. 2010, 23, 243–247. [Google Scholar] [CrossRef]
  33. Guitouni, A.; Martel, J.M. Tentative guidelines to help choosing an appropriate MCDA method. Eur. J. Oper. Res. 1998, 109, 501–521. [Google Scholar] [CrossRef]
  34. Elliott, D.L. A Better Activation Function for Artificial Neural Networks; Institute for Systems Research, University of Maryland: College Park, MD, USA, 1993. [Google Scholar]
  35. Schmidt-Hieber, J. Nonparametric Regression Using Deep Neural Networks with ReLU Activation Function. 2020. Available online: https://arxiv.org/abs/1708.06633 (accessed on 30 May 2023).
  36. Yin, X.; Goudriaan, J.A.N.; Lantinga, E.A.; Vos, J.A.N.; Spiertz, H.J. A flexible sigmoid function of determinate growth. Ann. Bot. 2003, 91, 361–371. [Google Scholar] [CrossRef] [PubMed]
  37. Zamanlooy, B.; Mirhassani, M. Efficient VLSI implementation of neural networks with hyperbolic tangent activation function. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2013, 22, 39–48. [Google Scholar] [CrossRef]
  38. Kamruzzaman, J. Arctangent activation function to accelerate backpropagation learning. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 2002, 85, 2373–2376. [Google Scholar]
  39. Montavon, G.; Samek, W.; Müller, K.R. Methods for interpreting and understanding deep neural networks. Digit. Signal Process. 2018, 73, 1–15. [Google Scholar] [CrossRef]
  40. Sideris, A.; Orita, K. Structured learning in feedforward neural networks with application to robot trajectory control. In Proceedings of the 1991 IEEE International Joint Conference on Neural Networks, Singapore, 18–21 November 1991; pp. 1067–1072. [Google Scholar]
  41. Sharma, S.; Mehra, R. Implications of pooling strategies in convolutional neural networks: A deep insight. Found. Comput. Decis. Sci. 2019, 44, 303–330. [Google Scholar] [CrossRef]
  42. Garg, H.; Shahzadi, G.; Akram, M. Decision-making analysis based on Fermatean fuzzy Yager aggregation operators with application in COVID-19 testing facility. Math. Probl. Eng. 2020, 2020, 7279027. [Google Scholar] [CrossRef]
  43. Akram, M.; Khan, A.; Borumand Saeid, A. Complex Pythagorean Dombi fuzzy operators using aggregation operators and their decision-making. Expert Syst. 2021, 38, e12626. [Google Scholar] [CrossRef]
  44. Ye, F. An extended TOPSIS method with interval-valued intuitionistic fuzzy numbers for virtual enterprise partner selection. Expert Syst. Appl. 2010, 37, 7050–7055. [Google Scholar] [CrossRef]
  45. Zadeh, L.A. Fuzzy sets as a basis for a theory of possibility. Fuzzy Sets Syst. 1978, 1, 3–28. [Google Scholar] [CrossRef]
  46. Atanassov, K. Intuitionistic fuzzy sets. Int. J. Bioautomation 2016, 20, 1. [Google Scholar]
  47. Li, X.; Xu, Z.; Wang, H. Three-way decisions based on some Hamacher aggregation operators under double hierarchy linguistic environment. Int. J. Intell. Syst. 2021, 36, 7731–7753. [Google Scholar] [CrossRef]
  48. Cheng, F.; Liang, H.; Niu, B.; Zhao, N.; Zhao, X. Adaptive neural self-triggered bipartite secure control for nonlinear MASs subject to DoS attacks. Inf. Sci. 2023, 631, 256–270. [Google Scholar] [CrossRef]
Figure 1. Output of a feed-forward neural network.
Figure 3. Output of a feed-forward double-hierarchy linguistic neural network.
Figure 4. Steps of GRA method, extended TOPSIS method, and our proposed method.
Figure 5. Comparison of extended TOPSIS method, GRA method, and our proposed method.
Table 1. Hidden layer of feed-forward neural network for DHLTSs.

|       | $p_1$       | $p_2$       | $p_3$       | $p_4$       | $p_5$       |
|-------|-------------|-------------|-------------|-------------|-------------|
| $g_1$ | 0.738521429 | 0.326423333 | 0.212368533 | 0.136978263 | 0.1258662   |
| $g_2$ | 0.969678667 | 0.368916545 | 0.333768273 | 0.154811222 | 0.095050909 |
| $g_3$ | 0.570108667 | 0.487515667 | 0.185739733 | 0.104634095 | 0.087066391 |
| $M$   | 2.278308762 | 1.182855545 | 0.731876539 | 0.396423581 | 0.3079835   |
|       | 0.694964668 | 0.541884482 | 0.422591636 | 0.283884909 | 0.235464362 |
Table 2. Output layer of feed-forward neural network for DHLTSs.

|           | $q_1$   | $q_2$   | $q_3$   | $q_4$   | $q_5$   | $q_6$   | $q_7$   |
|-----------|---------|---------|---------|---------|---------|---------|---------|
| DHLTYDWA  | 0.61904 | 0.66270 | 0.64047 | 0.63911 | 0.66773 | 0.63961 | 0.62284 |
| DHLTYDOWA | 0.62006 | 0.66972 | 0.64923 | 0.63942 | 0.67406 | 0.64974 | 0.62513 |
| DHLTYDHWA | 0.51688 | 0.52162 | 0.51856 | 0.51912 | 0.52149 | 0.51857 | 0.51747 |
Table 3. Output of extended TOPSIS method for DHLTSs.

|            | $q_1$   | $q_2$   | $q_3$   | $q_4$   | $q_5$   | $q_6$   | $q_7$   |
|------------|---------|---------|---------|---------|---------|---------|---------|
| $d_{ij}^+$ | 0.21584 | 0.03857 | 0.14964 | 0.14933 | 0.04855 | 0.14578 | 0.22714 |
| $d_{ij}^-$ | 0.07570 | 0.27066 | 0.1419  | 0.14222 | 0.24299 | 0.14576 | 0.0644  |
| output     | 0.25967 | 0.87526 | 0.48672 | 0.48781 | 0.83347 | 0.49997 | 0.22089 |
| ranking    | 0.87526 | 0.83347 | 0.49997 | 0.48781 | 0.48672 | 0.25967 | 0.22089 |

(The ranking row lists the output values in descending order.)
Table 4. Output of GRA method for DHLTSs.

|            | $q_1$   | $q_2$   | $q_3$    | $q_4$   | $q_5$   | $q_6$    | $q_7$   |
|------------|---------|---------|----------|---------|---------|----------|---------|
| $d_{ij}^+$ | 0.58842 | 0.87869 | 0.642025 | 0.70632 | 0.86793 | 0.66403  | 0.60736 |
| $d_{ij}^-$ | 0.79424 | 0.52724 | 0.73448  | 0.70624 | 0.57306 | 0.677098 | 0.85416 |
| output     | 0.42557 | 0.62499 | 0.46642  | 0.50003 | 0.60231 | 0.49513  | 0.41556 |
| ranking    | 0.62499 | 0.60231 | 0.50003  | 0.49513 | 0.46642 | 0.42557  | 0.41556 |

(The ranking row lists the output values in descending order.)
Table 5. Output ranking of feed-forward neural network for DHLTSs.

| Operator   | Ranking                                   |
|------------|-------------------------------------------|
| DHLTYDWA   | $q_5 > q_2 > q_3 > q_6 > q_4 > q_7 > q_1$ |
| DHLTYDOWA  | $q_5 > q_2 > q_6 > q_3 > q_4 > q_7 > q_1$ |
| DHLTYDHWA  | $q_2 > q_5 > q_4 > q_6 > q_3 > q_7 > q_1$ |