Abstract
Probabilistic functional equations have been used to analyze various models in computational biology and learning theory. Notably, such equations are closely linked to the symmetries of the underlying system of transformations. Our objective is to propose a generic probabilistic functional equation that covers most of the mathematical models addressed in the existing literature. Well-known fixed-point tools are used to examine the existence, uniqueness, and stability of the proposed equation's solution. Two examples are also given to emphasize the significance of our findings.
1. Introduction
In animals and human beings, the learning process may often be viewed as a sequence of choices among several possible responses. Even in simple repetitive experiments under strictly controlled conditions, preference sequences are largely volatile, suggesting that the choice of response is governed by probability. It is also helpful to identify structural adjustments in the sequence of alternatives that reflect changes in trial-to-trial outcomes. From this perspective, most analyses of learning describe the probability of a trial-to-trial occurrence, that is, a stochastic mechanism.
Mathematical learning experiments have shown that the behavior in a basic learning experiment follows a stochastic model; this idea is not new (for details, see [1,2]). However, after 1950, two crucial characteristics emerged, mainly in the work of Bush, Estes, and Mosteller. First, one of the most important features of the suggested models is the inclusive character of the learning process. Second, such models are formulated in such a way that their statistical features cannot remain hidden.
Symmetries have appeared in mathematical formulations many times and have proved important for solving problems or advancing research. High-quality research that uses nontrivial mathematics and related geometric structures can be found in connection with important problems from a wide range of fields.
In learning theory and mathematical biology, the solution of the following functional equation is of great importance:
\[
f(x) = x\, f(\alpha x + 1 - \alpha) + (1 - x)\, f(\beta x), \qquad x \in [0,1],
\]
where $f$ is an unknown function and $\alpha, \beta \in (0,1)$ are the learning-rate parameters that measure the effectiveness of the responses in a two-choice situation.
In 1976, Istrăţescu [3] used the above functional equation to inspect the involvement of predatory animals that prey on two distinct types of prey. Markov transitions were used to describe this behavior by converting the states $x$ and $1 - x$ into $\alpha x + 1 - \alpha$ and $\beta x$, respectively, with $\alpha \in (0,1)$ and $\beta \in (0,1)$.
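To make this transition rule concrete, the following sketch simulates the two-operator Markov process described above; the learning rates, the initial state, and the number of trials are hypothetical values chosen only for illustration and are not taken from [3].

```python
import random


def simulate_two_choice(x0=0.5, alpha=0.7, beta=0.7, trials=1000, seed=0):
    """Simulate the two-choice Markov process recalled above.

    With probability x the first type of prey is taken and the state moves to
    alpha * x + 1 - alpha; otherwise the second type is taken and the state
    moves to beta * x.  All parameter values here are illustrative only.
    """
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(trials):
        if rng.random() < x:              # first choice, probability x
            x = alpha * x + 1.0 - alpha
        else:                             # second choice, probability 1 - x
            x = beta * x
        path.append(x)
    return path


if __name__ == "__main__":
    print(simulate_two_choice()[-5:])     # last few states of one sample path
```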
Bush and Wilson [1] used such operators to examine the movement of a fish in two-choice circumstances. They claimed that under such behavior, there are four possible events: left-reward, right-nonreward, right-reward, left-nonreward.
It is widely assumed that getting rewarded on one side would increase the probability of that side being selected in the following trial. However, the reasoning for non-rewarded trials is less apparent. According to an extinction or reinforcement theory (see Table 1), the probability of choosing an unrewarded side in the next trial would decrease. In contrast, a model that relies on habit formation or secondary reinforcement (see Table 2) would suggest that simply choosing a side would increase the probability of selecting that side in the upcoming trials.
Table 1.
Operators for reinforcement-extinction model.
Table 2.
Operators for habit formation model.
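As an illustration of the kind of operators collected in Tables 1 and 2, linear maps of the Bush–Mosteller type are typically used; the displayed forms are generic examples with learning rates $\alpha, \beta, \gamma \in (0,1)$ and are not claimed to reproduce the exact entries of the tables.
\[
Q_{\text{reward}}(x) = \alpha x + 1 - \alpha, \qquad Q_{\text{nonreward}}(x) = \beta x .
\]
In a reinforcement–extinction model, the first operator is applied after a rewarded choice of a side and the second after a non-rewarded choice of that side, so nonreward pushes the probability down; in a habit-formation model, the operator attached to a non-rewarded choice of a side would instead be of the form $\gamma x + 1 - \gamma$, since merely choosing the side strengthens the tendency to choose it again.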
In 2015, Berinde and Khan [4] generalized the above idea by proposing the following functional equation:
\[
f(x) = x\, f\big(g(x)\big) + (1 - x)\, f\big(h(x)\big), \qquad x \in [0,1],
\]
where $g$ and $h$ are given contraction mappings on $[0,1]$ with $g(1) = 1$ and $h(0) = 0$.
Recently, Turab and Sintunavarat [5] utilized the above ideas and suggested the functional equation stated below
where the unknown function and the involved parameters satisfy suitable conditions. The aforementioned functional equation was used to study a specific kind of psychological resistance of dogs enclosed in a small box.
Several other studies on human and animal actions in probability-learning scenarios have produced different results (see [6,7,8,9,10,11,12]).
Here, by following the above work with the four possible events (right-reward, right-nonreward, left-reward, left-nonreward) discussed by Bush and Wilson [1], we propose the following general functional equation
for all admissible $x$, where $f$ is the unknown function and the other mappings appearing in the equation are given. In addition, a non-expansive mapping satisfying suitable normalization conditions is involved.
Our objective is to prove existence, uniqueness, and Hyers–Ulam (HU)- and Hyers–Ulam–Rassias (HUR)-type stability results for Equation (4) by using an appropriate fixed-point method. We then provide two examples to demonstrate the importance of our findings.
The following result will be needed in what follows.
Theorem 1
([13]). Let $(X, d)$ be a complete metric space and $T \colon X \to X$ be a Banach contraction mapping (BCM) defined by
$d(Tx, Ty) \le k\, d(x, y)$ for some $k \in [0,1)$ and for all $x, y \in X$. Then $T$ has only one fixed point. Furthermore, the Picard iteration (PI) $\{x_n\}$ in $X$, defined as $x_n = T x_{n-1}$ for all $n \in \mathbb{N}$, where $x_0 \in X$, converges to the unique fixed point of $T$.
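Theorem 1 is constructive: iterating the mapping from any starting point converges to the fixed point. The sketch below illustrates the Picard iteration for a simple contraction on the real line; the particular mapping, tolerance, and iteration cap are arbitrary illustrative choices.

```python
def picard_iteration(T, x0, tol=1e-12, max_iter=10_000):
    """Return an approximate fixed point of T via x_n = T(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) <= tol:        # successive terms are close enough
            return x_next
        x = x_next
    return x


if __name__ == "__main__":
    # T(x) = 0.5 * x + 1 is a contraction on R with constant 1/2;
    # its unique fixed point is x* = 2.
    print(picard_iteration(lambda x: 0.5 * x + 1.0, x0=0.0))
```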
2. Main Results
Let where with . We indicate the class consisting of all continuous real-valued functions by such that and
We can see that is a normed space (for the detail, see [4,12]), where is given by
for all .
For computational convenience, we write (4) as
where is an unknown function such that . In addition, are BCMs with contractive coefficients , respectively, satisfying the conditions
The primary goal of this section is to establish existence and uniqueness results for (7) by means of fixed-point techniques. We begin with the following result.
Theorem 2.
Consider the probabilistic functional Equation (7) with (8). Suppose that and such that
where and Assume that there is a nonempty subset of such that is a Banach space (BS), where is given in (6), and the mapping from to defined for each by
for all is a self mapping. Then is a BCM with the metric d induced by .
Proof.
Let be a metric induced by on . Thus is a complete metric space. We deal with the operator from defined in (10).
In addition, is continuous and for all . Therefore, is a self operator on . Furthermore, it is clear that the solution of (7) is equivalent to the fixed point of . Since is a linear mapping, for , we obtain
where . Thus, to evaluate , we consider the following estimate
where . To this end, let , and for each with , we obtain
We aim to use the definition of the norm (6) here. Therefore, by utilizing (8) together with the condition , we have
Since are contraction mappings with contractive coefficients , respectively, we obtain
Hence,
where is defined in (9). This gives that
As this implies that is a BCM with metric d induced by . □
Theorem 3.
Consider the probabilistic Equation (7) with (8). Suppose that and where is defined in (9). Assume that there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by (10) is a self mapping. Then the probabilistic Equation (7) with (8) has a unique solution in . Furthermore, the iteration in defined by
for all where , converges to the unique solution of (7).
Proof.
From Theorem 2, it is clear that defined for each by (10) is a BCM with the metric d induced by . Thus, the conclusion follows from the Banach fixed-point theorem (Theorem 1). □
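Beyond bare convergence, the Banach principle also quantifies the rate of the iteration in Theorem 3. Writing $\Delta$ for the contraction constant obtained in Theorem 2 (our symbol for the quantity defined in (9)) and $d$ for the induced metric, the classical a priori estimate reads
\[
d(f_n, f^{*}) \le \frac{\Delta^{\,n}}{1-\Delta}\, d(f_1, f_0), \qquad n \in \mathbb{N},
\]
where $f^{*}$ denotes the unique solution of (7) and $f_0$ is the initial approximation; the error thus decays geometrically with ratio $\Delta < 1$.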
A similar estimation approach has been applied in group consensus control systems (for details, see [14]).
We now consider a special case. If are contraction mappings with contractive coefficients , respectively, then by Theorems 2 and 3, we obtain the following results.
Corollary 1.
Corollary 2.
Consider the probabilistic Equation (7) associated with (8). Assume that with and there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by (12) is a self mapping. Then the probabilistic Equation (7) with (8) has a unique solution in . Furthermore, the iteration in given as
for all where , converges to the unique solution of (7).
The conditions and are sufficient, but not necessary, for the results above. In the following results, we use different conditions to reach the same conclusions.
Theorem 4.
Consider the probabilistic Equation (7) with (8). Assume that there exist such that
and that where
with and Suppose that there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by
for all is a self mapping. Then is a BCM with the metric d induced by .
Proof.
Let be a metric induced by on . Thus is a complete metric space. We deal with the operator from defined in (16).
In addition, is continuous and for all . Therefore, is a self operator on . Furthermore, it is clear that the solution of (7) is equivalent to the fixed point of . Since is a linear mapping, for , we obtain
where . Thus, to evaluate , we consider the following estimate
where . To this end, let , and for each with , we obtain
Since are contraction mappings with contractive coefficients , respectively, we obtain
where is defined in (15). This gives that
As this implies that is a BCM with metric d induced by . □
Theorem 5.
Consider the probabilistic functional Equation (7) with (8). Suppose that (14) holds and where is defined in (15). Assume that there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by (16) is a self mapping. Then the probabilistic Equation (7) with (8) has a unique solution in . Furthermore, the iteration in defined by
for all where , converges to the unique solution of (7).
Proof.
From Theorem 4, it is clear that defined for each by (16) is a BCM with the metric d induced by . Thus, the conclusion follows from the Banach fixed-point theorem (Theorem 1). □
We now consider a special case. If are contraction mappings with contractive coefficients , respectively, then by Theorems 4 and 5, we obtain the following results.
Corollary 3.
Consider the probabilistic functional Equation (7) associated with (8). Assume that there exist defined in (14) and where
Suppose that there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by
for all is a self mapping. Then is a BCM with the metric d induced by .
Corollary 4.
Consider the probabilistic Equation (7) associated with (8). Assume that (14) holds and where is defined in (18). Suppose that there is a nonempty subset of such that is a BS, where is given in (6), and the mapping from to defined for each by (19) is a self mapping. Then, the functional Equation (7) with (8) has a unique solution in . Furthermore, the iteration in defined as
for all where , converges to the unique solution of (7).
Remark 1.
Our proposed probabilistic Equation (7) is a generalization of the functional equations discussed in [6,8].
We now offer the following examples to show the significance of our results.
Example 1.
Consider the probabilistic functional equation given below
for all with and . If we set the mappings by
for all , then our Equation (7) reduces to the Equation (21). It is easy to see that satisfy our boundary conditions (8), and . In addition,
for all . This implies that are contraction mappings with coefficients
respectively, and is a non-expansive mapping with . If
and there is a nonempty subset of such that is a BS and the mapping from given in (21) is a self mapping for all , then all the hypotheses of Theorem 2 are fulfilled, and therefore we obtain the existence of a solution of the functional Equation (21).
If we define (where is the identity function) and take as the initial approximation , then, by Theorem 3, the following iteration converges to the unique solution of (21):
for all .
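As a numerical companion to Example 1, the following sketch approximates the fixed point of an operator of the two-operator shape $(Hf)(x) = x f(g_1(x)) + (1-x) f(g_2(x))$ by Picard iteration on a uniform grid with linear interpolation. The inner mappings $g_1$, $g_2$, the initial approximation $f_0(x) = x$, and all numerical parameters are hypothetical choices made only to illustrate the iteration scheme; they are not the data of Equation (21).

```python
import numpy as np


def solve_two_operator_equation(g1, g2, n_grid=201, n_iter=500):
    """Approximate a solution of f(x) = x*f(g1(x)) + (1-x)*f(g2(x)) on [0, 1].

    Picard iteration on a uniform grid; values of f at g1(x) and g2(x) are
    obtained by linear interpolation.  Purely illustrative.
    """
    x = np.linspace(0.0, 1.0, n_grid)
    f = x.copy()                            # initial approximation f_0(x) = x
    for _ in range(n_iter):
        f_g1 = np.interp(g1(x), x, f)       # f(g1(x)) by interpolation
        f_g2 = np.interp(g2(x), x, f)       # f(g2(x)) by interpolation
        f = x * f_g1 + (1.0 - x) * f_g2
    return x, f


if __name__ == "__main__":
    alpha, beta = 0.6, 0.4                  # hypothetical learning rates
    grid, f_approx = solve_two_operator_equation(
        g1=lambda t: alpha * t + 1.0 - alpha,   # contraction with g1(1) = 1
        g2=lambda t: beta * t,                  # contraction with g2(0) = 0
    )
    print(f_approx[::50])                   # a few values of the approximation
```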
Example 2.
Consider the probabilistic functional equation given below
for all with and . If we set the mappings by
for all , then our Equation (7) reduces to the Equation (22). It is easy to see that and satisfy our boundary conditions (8). In addition,
for all . This implies that are contraction mappings with coefficients
respectively, and is a non-expansive mapping with . Furthermore,
and there is a nonempty subset of such that is a BS and the mapping from given in (22) is a self mapping for all . Hence, all the hypotheses of Theorem 4 are fulfilled, and therefore we obtain the existence of a solution of the functional Equation (22).
If we define as the initial approximation, then, by Theorem 5, the following iteration converges to the unique solution of (22):
for all .
3. Stability Analysis of the Suggested Functional Equation
In mathematical modeling, the stability of solutions is critical: slight changes in the data, such as those caused by natural measurement errors, should produce only correspondingly slight changes in the conclusions. Hence, it is essential to analyze the stability of solutions of the suggested functional Equation (7). For details on HU and HUR stability, we refer to [15,16,17,18,19].
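For orientation, we recall the standard notions in a generic pointwise form; the precise formulation used below is the one given in Theorem 6 and Corollary 5, and the operator $H$, the constant $K$, and the control function $\varphi$ in this display are our generic notation.
\[
\begin{aligned}
&\text{(HU)}\quad  |f(x) - (Hf)(x)| \le \varepsilon \ \ \forall x
\;\Longrightarrow\; |f(x) - f^{*}(x)| \le K\varepsilon \ \ \forall x,\\
&\text{(HUR)}\quad |f(x) - (Hf)(x)| \le \varphi(x) \ \ \forall x
\;\Longrightarrow\; |f(x) - f^{*}(x)| \le K\varphi(x) \ \ \forall x,
\end{aligned}
\]
where $f^{*}$ is an exact solution of $Hf^{*} = f^{*}$, $\varepsilon > 0$ is arbitrary, and $\varphi$ is a fixed control function.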
Theorem 6.
Under the hypothesis of Theorem 2, the equation where , is defined as
for all and has HUR stability; that is, for a fixed function , we have that for every with , there exists a unique such that and for some where is given in (9).
Proof.
Let such that . By Theorem 2, we have a unique such that . Thus, we obtain
and hence
□
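The estimate behind this proof is the standard one for contractive operators. Writing $\Delta$ for the contraction constant from Theorem 2 (our symbol for the quantity defined in (9)) and using $Hf^{*} = f^{*}$ together with the contractivity of $H$, a sketch of the bound in the HU case is
\[
\|f - f^{*}\| \le \|f - Hf\| + \|Hf - Hf^{*}\| \le \varepsilon + \Delta\, \|f - f^{*}\|,
\qquad\text{hence}\qquad
\|f - f^{*}\| \le \frac{\varepsilon}{1-\Delta}.
\]
The HUR case of Theorem 6 is obtained in the same way, with $\varepsilon$ replaced by the bound supplied by the fixed control function.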
From the above analysis, we obtain the following result concerning HU stability.
Corollary 5.
Under the hypothesis of Theorem 2, the equation where is defined as
for all and has HU stability; that is, for a fixed , we have that for every with , there exists a unique such that and for some .
4. Conclusions
The predator–prey analogy is among the most appealing paradigms in a two-choice scenario emerging in mathematical biology. In such models, a predator has two possible prey choices, and the solution occurs when the predator is attracted to a particular type of prey. In this paper, we proposed a general functional equation that can cover numerous learning theory models in the existing literature. We also discussed the existence, uniqueness, and stability results of the suggested functional equation. The functional equations that appeared in [3,4,8] focused on just two cases, while our proposed functional Equation (4) covers all the possible cases discussed by Bush and Wilson in [1]. In addition, in [3,4,12], the authors used the boundary conditions and to prove their main results, but in Theorem 4, we did not employ such assumptions. Therefore, our method is novel and can be applied to many mathematical models associated with mathematical psychology and learning theory.
To conclude, we propose the following open problem for the interested readers.
Question: Can we use another method to prove the conclusions of Theorems 2 and 3?
Author Contributions
Conceptualization, A.T. and W.-G.P.; methodology, A.T. and W.-G.P.; validation, A.T., W.-G.P. and W.A.; investigation, A.T., W.-G.P. and W.A.; writing—original draft preparation, A.T., W.-G.P. and W.A.; writing—review and editing, A.T., W.-G.P. and W.A.; project administration, A.T., W.-G.P. and W.A.; funding acquisition, A.T., W.-G.P. and W.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Bush, R.R.; Wilson, T.R. Two-choice behavior of paradise fish. J. Exp. Psychol. 1956, 51, 315–322.
- Bush, R.; Mosteller, F. Stochastic Models for Learning; Wiley: New York, NY, USA, 1955.
- Istrăţescu, V.I. On a functional equation. J. Math. Anal. Appl. 1976, 56, 133–136.
- Berinde, V.; Khan, A.R. On a functional equation arising in mathematical biology and theory of learning. Creat. Math. Inform. 2015, 24, 9–16.
- Turab, A.; Sintunavarat, W. On the solution of the traumatic avoidance learning model approached by the Banach fixed point theorem. J. Fixed Point Theory Appl. 2020, 22, 50.
- Turab, A.; Sintunavarat, W. On a solution of the probabilistic predator–prey model approached by the fixed point methods. J. Fixed Point Theory Appl. 2020, 22, 64.
- Estes, W.K.; Straughan, J.H. Analysis of a verbal conditioning situation in terms of statistical learning theory. J. Exp. Psychol. 1954, 47, 225–234.
- Turab, A.; Sintunavarat, W. On the solutions of the two preys and one predator type model approached by the fixed point theory. Sādhanā 2020, 45, 211.
- Grant, D.A.; Hake, H.W.; Hornseth, J.P. Acquisition and extinction of a verbal conditioned response with differing percentages of reinforcement. J. Exp. Psychol. 1951, 42, 1–5.
- Humphreys, L.G. Acquisition and extinction of verbal expectations in a situation analogous to conditioning. J. Exp. Psychol. 1939, 25, 294–301.
- Jarvik, M.E. Probability learning and a negative recency effect in the serial anticipation of alternative symbols. J. Exp. Psychol. 1951, 41, 291–297.
- Turab, A.; Sintunavarat, W. On analytic model for two-choice behavior of the paradise fish based on the fixed point method. J. Fixed Point Theory Appl. 2019, 21, 56.
- Banach, S. Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales. Fund. Math. 1922, 3, 133–181.
- Shang, Y. L1 group consensus of multi-agent systems with stochastic inputs under directed interaction topology. Int. J. Control 2013, 86, 1–8.
- Hyers, D.H.; Isac, G.; Rassias, T.M. Stability of Functional Equations in Several Variables; Birkhäuser: Basel, Switzerland, 1998.
- Morales, J.S.; Rojas, E.M. Hyers-Ulam and Hyers-Ulam-Rassias stability of nonlinear integral equations with delay. Int. J. Nonlinear Anal. Appl. 2011, 2, 1–6.
- Rassias, T.M. On the stability of the linear mapping in Banach spaces. Proc. Am. Math. Soc. 1978, 72, 297–300.
- Bae, J.H.; Park, W.G. A fixed point approach to the stability of a Cauchy-Jensen functional equation. Abstr. Appl. Anal. 2012, 2012, 1–10.
- Gachpazan, M.; Baghani, O. Hyers-Ulam stability of nonlinear integral equation. Fixed Point Theory Appl. 2010, 2010, 1–6.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).