Search Results (3)

Search Parameters:
Keywords = Markov games with private information

17 pages, 352 KB  
Article
Stationary Bayesian–Markov Equilibria in Bayesian Stochastic Games with Periodic Revelation
by Eunmi Ko
Games 2024, 15(5), 31; https://doi.org/10.3390/g15050031 - 11 Sep 2024
Viewed by 1339
Abstract
I consider a class of dynamic Bayesian games in which types evolve stochastically according to a first-order Markov process on a continuous type space. Types are privately informed, but they become public together with actions when payoffs are obtained, resulting in a delayed information revelation. In this environment, I show that there exists a stationary Bayesian–Markov equilibrium in which a player’s strategy maps a tuple of the previous type and action profiles and the player’s current type to a mixed action. The existence can be extended to K-periodic revelation. I also offer a computational algorithm to find an equilibrium.
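The paper's algorithm itself is not reproduced in the abstract, but the information structure it describes can be illustrated in a toy form. The sketch below is entirely hypothetical: it discretizes the continuous type space to two types, uses an invented stage payoff, and runs a myopic best-response iteration (ignoring continuation values, which a full stationary-equilibrium computation would carry). What it does show is the delayed-revelation belief: each player's belief about the opponent's current type is derived from the publicly revealed previous type profile via the Markov kernel.

```python
import itertools
import numpy as np

# Hypothetical finite approximation: 2 types, 2 actions per player.
TYPES = (0, 1)
ACTIONS = (0, 1)
P = np.array([[0.7, 0.3],   # P[t_prev, t_now]: first-order Markov type transition
              [0.4, 0.6]])

def stage_payoff(i, types, actions):
    """Toy payoff: matching own action to own type, plus a coordination bonus."""
    return float(actions[i] == types[i]) + 0.5 * float(actions[0] == actions[1])

def best_response(i, sigma_j, t_prev, t_i):
    """Myopic best reply of player i with current type t_i, given the publicly
    revealed previous type profile t_prev.  The belief about the opponent's
    *current* type comes from the Markov kernel P (delayed revelation)."""
    j = 1 - i
    belief = P[t_prev[j]]
    def expected(a_i):
        u = 0.0
        for t_j, p in zip(TYPES, belief):
            a_j = sigma_j[(t_prev, t_j)]          # opponent follows sigma_j
            types = (t_i, t_j) if i == 0 else (t_j, t_i)
            acts = (a_i, a_j) if i == 0 else (a_j, a_i)
            u += p * stage_payoff(i, types, acts)
        return u
    return max(ACTIONS, key=expected)

# Best-response iteration to a stationary pure profile (when one exists).
keys = list(itertools.product(itertools.product(TYPES, TYPES), TYPES))
sigma = [{k: 0 for k in keys} for _ in range(2)]
for _ in range(100):
    new = [{k: best_response(i, sigma[1 - i], *k) for k in keys}
           for i in range(2)]
    if new == sigma:
        break
    sigma = new
```

With this particular payoff, matching one's own type is dominant, so the iteration settles immediately on that profile; the point of the sketch is only how the public (previous) profile feeds the belief over the opponent's current type.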
15 pages, 499 KB  
Article
Dynamic Mechanism Design for Repeated Markov Games with Hidden Actions: Computational Approach
by Julio B. Clempner
Math. Comput. Appl. 2024, 29(3), 46; https://doi.org/10.3390/mca29030046 - 10 Jun 2024
Viewed by 1486
Abstract
This paper introduces a dynamic mechanism design tailored for uncertain environments where incentive schemes are challenged by the inability to observe players’ actions, known as moral hazard. In these scenarios, the system operates as a Markov game where outcomes depend on both the state of payouts and players’ actions. Moral hazard and adverse selection further complicate decision-making. The proposed mechanism aims to incentivize players to truthfully reveal their states while maximizing their expected payoffs. This is achieved through players’ best-reply strategies, ensuring truthful state revelation despite moral hazard. The revelation principle, a core concept in mechanism design, is applied to models with both moral hazard and adverse selection, facilitating the identification of optimal reward structures. The research holds significant practical implications, addressing the challenge of designing reward structures for multiplayer Markov games with hidden actions. By utilizing dynamic mechanism design, researchers and practitioners can optimize incentive schemes in complex, uncertain environments affected by moral hazard. To demonstrate the approach, the paper includes a numerical example of solving an oligopoly problem. Oligopolies, with a few dominant market players, exhibit complex dynamics where individual actions significantly impact market outcomes. Using the dynamic mechanism design framework, the paper shows how to construct optimal reward structures that align players’ incentives with desirable market outcomes, mitigating the effects of moral hazard and adverse selection. The framework thus offers a robust method for constructing effective reward structures even in complex and uncertain environments, and the oligopoly example illustrates its practical application and effectiveness.
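The core difficulty the abstract describes — paying for outcomes because actions are hidden, while also inducing truthful state reports — can be made concrete with a static two-type toy check. Everything below is an illustrative assumption, not the paper's model: invented type names, outcome probabilities, costs, and transfer schedule. The designer pays as a function of the *report* and the publicly observed outcome only, and incentive compatibility is verified by enumerating every joint (misreport, hidden-action) deviation.

```python
import itertools

# Hypothetical two-type moral-hazard example (all names and numbers are
# illustrative assumptions, not taken from the paper).
STATES = ("L", "H")          # agent's private type: high or low effort cost
ACTIONS = ("shirk", "work")  # hidden action (moral hazard)
p_good = {"shirk": 0.2, "work": 0.8}   # Pr(good outcome | action)
cost = {("L", "shirk"): 0.0, ("L", "work"): 1.2,
        ("H", "shirk"): 0.0, ("H", "work"): 0.8}

# Designer's transfer schedule: payment depends on the report and the
# publicly observed outcome, never on the unobservable action.
w = {("L", "bad"): 0.4, ("L", "good"): 1.4,
    ("H", "bad"): 0.0, ("H", "good"): 2.0}

target = {"L": "shirk", "H": "work"}   # action the designer wants per type

def agent_payoff(state, report, action):
    p = p_good[action]
    return p * w[(report, "good")] + (1 - p) * w[(report, "bad")] - cost[(state, action)]

def incentive_compatible():
    """Truthful reporting plus the target action must beat every joint
    (misreport, hidden-action) deviation in every state."""
    for s in STATES:
        truthful = agent_payoff(s, s, target[s])
        for r, a in itertools.product(STATES, ACTIONS):
            if (r, a) != (s, target[s]) and agent_payoff(s, r, a) >= truthful:
                return False
    return True
```

With these numbers, truth-telling plus the target action is a strict best reply for both types, so the (hypothetical) schedule is incentive compatible; the dynamic version in the paper replaces this one-shot check with best-reply strategies in a repeated Markov game.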

15 pages, 536 KB  
Article
Analytical Method for Mechanism Design in Partially Observable Markov Games
by Julio B. Clempner and Alexander S. Poznyak
Mathematics 2021, 9(4), 321; https://doi.org/10.3390/math9040321 - 6 Feb 2021
Cited by 12 | Viewed by 2404
Abstract
A recurring theme in the literature is the difficulty of developing a mechanism that is compatible with individual incentives while simultaneously producing efficient decisions that maximize the total reward. In this paper, we suggest an analytical method for computing a mechanism design. This problem is explored within a framework in which the players maximize an average utility in a non-cooperative Markov game with incomplete state information. All of the Nash equilibria are approximated in a sequential process. We describe a method for computing the derivative of a player’s equilibrium, which instruments the design of the mechanism. In addition, we show the convergence, and the rate of convergence, of the proposed method. For computing the mechanism, we consider an extension of the Markov model in which a new variable is introduced that represents the product of the mechanism design and the joint strategy. We derive formulas to recover the variables of interest: mechanisms, strategy, and distribution vector. The computation of the mechanism design and equilibrium strategies differs from that in the previous literature. A numerical example demonstrates the usefulness and effectiveness of the proposed method.
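The product-variable trick the abstract mentions — optimizing over a single variable that multiplies the quantities of interest, then recovering each factor by a closed-form formula — has a familiar two-factor analogue in occupation measures for Markov decision processes. The sketch below shows only that analogue (the paper's variable additionally folds in the mechanism term): given a joint variable z[s, a] = mu(s) * pi(a | s), the state distribution and strategy are recovered by marginalization and normalization.

```python
import numpy as np

# Illustrative sketch of the product-variable trick: work in a joint
# variable z[s, a] = mu(s) * pi(a | s), then recover each factor.
# Here z is just a random nonnegative matrix normalized to a joint
# distribution, standing in for the output of an optimization step.
rng = np.random.default_rng(0)
n_states, n_actions = 3, 2

z = rng.random((n_states, n_actions))
z /= z.sum()

mu = z.sum(axis=1)        # distribution vector: mu(s) = sum_a z[s, a]
pi = z / mu[:, None]      # strategy:            pi(a|s) = z[s, a] / mu(s)

assert np.allclose(pi.sum(axis=1), 1.0)   # each row of pi is a distribution
assert np.allclose(mu[:, None] * pi, z)   # the factors reproduce z exactly
```

The appeal of the change of variable is that constraints which are bilinear in (mu, pi) become linear in z; the recovery formulas above are then applied once, after the optimization has converged.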
(This article belongs to the Section E1: Mathematics and Computer Science)
