Search Results (42)

Search Parameters:
Keywords = non-zero-sum game

25 pages, 1035 KB  
Article
A Strength Allocation Bayesian Game Method for Swarming Unmanned Systems
by Lingwei Li and Bangbang Ren
Drones 2025, 9(9), 626; https://doi.org/10.3390/drones9090626 - 5 Sep 2025
Viewed by 165
Abstract
This paper investigates a swarming strength allocation Bayesian game approach under incomplete information to address the high-value target protection problem of swarming unmanned systems. The swarming strength allocation Bayesian game model is established by analyzing the non-zero-sum incomplete information game mechanism during the protection process, considering high-tech and low-tech interception players. The model incorporates a game benefit quantification method based on an improved Lanchester equation. The method regards massive swarm individuals as a collective unit for overall cost calculation, thus avoiding the curse of dimensionality from increasing numbers of individuals. Building on this model, a Bayesian Nash equilibrium solution approach is presented to determine the optimal swarming strength allocation for the protection player. Finally, simulations comparing the method against random allocation, greedy heuristic, rule-based assignment, and Colonel Blotto game baselines demonstrate its robustness in large-scale strength allocation. Full article
(This article belongs to the Collection Drones for Security and Defense Applications)
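As a rough illustration of the Lanchester-based benefit quantification mentioned above, the following sketch integrates classic Lanchester square-law attrition between two collective units. It is not the paper's improved equation; the coefficient names (alpha, beta), the Euler integration, and the use of surviving strength as a payoff ingredient are illustrative assumptions.

import numpy as np

def lanchester_attrition(a0, b0, alpha, beta, dt=0.01, t_end=10.0):
    """Classic Lanchester square-law attrition between two collective units.

    a0, b0: initial aggregate strengths of the two sides.
    alpha, beta: effective attrition coefficients (assumed, not from the paper).
    Returns the surviving strengths, usable as rough payoff terms.
    """
    a, b = float(a0), float(b0)
    t = 0.0
    while t < t_end and a > 0.0 and b > 0.0:
        da = -beta * b * dt   # losses of side A inflicted by side B
        db = -alpha * a * dt  # losses of side B inflicted by side A
        a, b = max(a + da, 0.0), max(b + db, 0.0)
        t += dt
    return a, b

Treating each swarm as a single aggregate state in this way is what keeps the cost calculation independent of the number of individuals.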

60 pages, 1430 KB  
Article
The Effect of the Cost Functional on Asymptotic Solution to One Class of Zero-Sum Linear-Quadratic Cheap Control Differential Games
by Valery Y. Glizer and Vladimir Turetsky
Symmetry 2025, 17(9), 1394; https://doi.org/10.3390/sym17091394 - 26 Aug 2025
Viewed by 307
Abstract
A finite-horizon zero-sum linear-quadratic differential game with non-homogeneous dynamics is considered. The key feature of this game is as follows. The cost of the control of the minimizing player (the minimizer) in the game’s cost functional is much smaller than the cost of the control of the maximizing player (the maximizer) and the cost of the state variable. This smallness is due to a positive small multiplier (a small parameter) for the quadratic form of the minimizer’s control in the integrand of the cost functional. Two cases of the game’s cost functional are studied: (i) the current state cost in the integrand of the cost functional is a positive definite quadratic form; (ii) the current state cost in the integrand of the cost functional is a positive semi-definite (but non-zero) quadratic form. The latter case has not yet been considered in the literature devoted to the analysis of cheap control differential games. For each of the aforementioned cases, an asymptotic approximation (by the small parameter) of the solution to the considered game is derived. It is established that the property of the aforementioned state cost (positive definiteness/positive semi-definiteness) has an essential effect on the asymptotic analysis and solution of the differential equations (Riccati-type, linear, and trivial) appearing in the solvability conditions of the considered game. Cases (i) and (ii) require considerably different approaches to the derivation of the asymptotic solutions to these differential equations. Moreover, case (ii) requires developing a significantly novel approach. The asymptotic solutions of the aforementioned differential equations considerably differ from each other in cases (i) and (ii). This difference yields essentially different asymptotic solutions (saddle point and value) of the considered game in these cases, meaning it is of crucial importance to distinguish cases (i) and (ii) in the study of various theoretical and real-life cheap control zero-sum linear-quadratic differential games. The asymptotic solutions of the considered game in cases (i) and (ii) are compared with each other. An academic illustrative example is presented. Full article
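For readers unfamiliar with cheap control games, the setting described above can be written, in illustrative notation that is not the paper's, as a cost functional of the form

J_\varepsilon(u,v) = \int_{0}^{t_f} \left( x^{T}(t)\, D\, x(t) + \varepsilon^{2}\, u^{T}(t)\, u(t) - v^{T}(t)\, G\, v(t) \right) dt, \qquad 0 < \varepsilon \ll 1,

where u is the minimizer's control, v is the maximizer's control, G > 0, and the small parameter \varepsilon makes the minimizer's control cheap; case (i) corresponds to D > 0 and case (ii) to D \ge 0 with D \neq 0.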

20 pages, 2431 KB  
Article
Game Theory-Based Leader–Follower Tracking Control for an Orbital Pursuit–Evasion System with Tethered Space Net Robots
by Zhanxia Zhu, Chuang Wang and Jianjun Luo
Aerospace 2025, 12(8), 710; https://doi.org/10.3390/aerospace12080710 - 11 Aug 2025
Viewed by 330
Abstract
The tethered space net robot offers an effective solution for active space debris removal due to its large capture envelope. However, most existing studies overlook the evasive behavior of non-cooperative targets. To address this, we model an orbital pursuit–evasion game involving a tethered net and propose a game theory-based leader–follower tracking control strategy. In this framework, a virtual leader—defined as the geometric center of four followers—engages in a zero-sum game with the evader. An adaptive dynamic programming method is employed to handle input saturation and compute the Nash Equilibrium strategy. In the follower formation tracking phase, a synchronous distributed model predictive control approach is proposed to update all followers’ control simultaneously, ensuring accurate tracking while meeting safety constraints. The feasibility and stability of the proposed method are theoretically analyzed. Additionally, a body-fixed reference frame is introduced to reduce the capture angle. Simulation results show that the proposed strategy successfully captures the target and outperforms existing methods in both formation keeping and control efficiency. Full article
(This article belongs to the Special Issue Dynamics and Control of Space On-Orbit Operations)

19 pages, 3110 KB  
Article
A Stackelberg Game Approach to Model Reference Adaptive Control for Spacecraft Pursuit–Evasion
by Gena Gan, Ming Chu, Huayu Zhang and Shaoqi Lin
Aerospace 2025, 12(7), 613; https://doi.org/10.3390/aerospace12070613 - 7 Jul 2025
Viewed by 480
Abstract
A Stackelberg equilibrium–based Model Reference Adaptive Control (MSE) method is proposed for spacecraft Pursuit–Evasion (PE) games with incomplete information and sequential decision making under a non–zero–sum framework. First, the spacecraft PE dynamics under J2 perturbation are mapped to a dynamic Stackelberg game model. Next, the equilibrium problem is solved via a Riccati equation, deriving the evader’s optimal control strategy. Finally, a model reference adaptive algorithm enables the pursuer to dynamically adjust its control gains. Simulations show that the MSE strategy outperforms Nash Equilibrium (NE) and Single–step Prediction Stackelberg Equilibrium (SSE) methods, achieving 25.46% faster convergence than SSE and 39.11% lower computational cost than NE. Full article
(This article belongs to the Section Astronautics & Space Science)

16 pages, 6116 KB  
Article
Policy Similarity Measure for Two-Player Zero-Sum Games
by Hongsong Tang, Liuyu Xiang and Zhaofeng He
Appl. Sci. 2025, 15(5), 2815; https://doi.org/10.3390/app15052815 - 5 Mar 2025
Viewed by 983
Abstract
Policy space response oracles (PSRO) is an important algorithmic framework for approximating Nash equilibria in two-player zero-sum games. Enhancing policy diversity has been shown to improve the performance of PSRO in this approximation process significantly. However, existing diversity metrics are often prone to redundancy, which can hinder optimal strategy convergence. In this paper, we introduce the policy similarity measure (PSM), a novel approach that combines Gaussian and cosine similarity measures to assess policy similarity. We further incorporate the PSM into the PSRO framework as a regularization term, effectively fostering a more diverse policy population. We demonstrate the effectiveness of our method in two distinct game environments: a non-transitive mixture model and Leduc poker. The experimental results show that the PSM-augmented PSRO outperforms baseline methods in reducing exploitability by approximately 7% and exhibits greater policy diversity in visual analysis. Ablation studies further validate the benefits of combining Gaussian and cosine similarities in cultivating more diverse policy sets. This work provides a valuable method for measuring and improving the policy diversity in two-player zero-sum games. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
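The paper defines its own policy similarity measure; the sketch below only illustrates one plausible way to blend a Gaussian (distance-based) kernel with cosine similarity over flattened policy feature vectors. The weighting alpha, the bandwidth sigma, and the feature representation are assumptions, not the authors' definition.

import numpy as np

def policy_similarity(p, q, sigma=1.0, alpha=0.5):
    """Blend of a Gaussian kernel and cosine similarity between policy vectors p and q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    gaussian = np.exp(-np.sum((p - q) ** 2) / (2.0 * sigma ** 2))            # 1.0 when identical
    cosine = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)  # direction agreement
    return alpha * gaussian + (1.0 - alpha) * cosine

In a PSRO-style loop, such a score could be added as a regularization term that penalizes a new best response for being too similar to policies already in the population.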

37 pages, 427 KB  
Article
Structured Equilibria for Dynamic Games with Asymmetric Information and Dependent Types
by Nasimeh Heydaribeni and Achilleas Anastasopoulos
Games 2025, 16(2), 12; https://doi.org/10.3390/g16020012 - 3 Mar 2025
Viewed by 2047
Abstract
We consider a dynamic game with asymmetric information where each player privately observes a noisy version of a (hidden) state of the world V, resulting in dependent private observations. We study the structured perfect Bayesian equilibria (PBEs) that use private beliefs in their strategies as sufficient statistics for summarizing their observation history. The main difficulty in finding the appropriate sufficient statistic (state) for the structured strategies arises from the fact that players need to construct (private) beliefs on other players’ private beliefs on V, which, in turn, would imply that one needs to construct an infinite hierarchy of beliefs, thus rendering the problem unsolvable. We show that this is not the case: each player’s belief on other players’ beliefs on V can be characterized by her own belief on V and some appropriately defined public belief. We then specialize this setting to the case of a Linear Quadratic Gaussian (LQG) non-zero-sum game, and we characterize structured PBEs with linear strategies that can be found through a backward/forward algorithm akin to dynamic programming for the standard LQG control problem. Unlike the standard LQG problem, however, some of the required quantities for the Kalman filter are observation-dependent and, thus, cannot be evaluated offline through a forward recursion. Full article
(This article belongs to the Section Learning and Evolution in Games)

17 pages, 8704 KB  
Article
Event-Trigger Reinforcement Learning-Based Coordinate Control of Modular Unmanned System via Nonzero-Sum Game
by Yebao Liu, Tianjiao An, Jianguo Chen, Luyang Zhong and Yuhan Qian
Sensors 2025, 25(2), 314; https://doi.org/10.3390/s25020314 - 7 Jan 2025
Viewed by 883
Abstract
Decreasing the position error and control torque is important for the coordinate control of a modular unmanned system with less communication burden between the sensor and the actuator. Therefore, this paper proposes event-triggered reinforcement learning (ETRL)-based coordinate control of a modular unmanned system (MUS) via the nonzero-sum game (NZSG) strategy. The dynamic model of the MUS is established via joint torque feedback (JTF) technology. Based on the NZSG strategy, the coordinate control problem is transformed into an RL problem. With the help of the ET mechanism, periodic communication between the sensor and the actuator is avoided. The ET-critic neural network (NN) is used to approximate the performance index function, thus obtaining the ETRL coordinate control policy. The stability of the closed-loop system is verified via Lyapunov’s theorem. Experimental results demonstrate the validity of the proposed method, showing that it reduces the position error by 30% and the control torque by 10% compared with existing control methods. Full article
(This article belongs to the Special Issue Smart Sensing and Control for Autonomous Intelligent Unmanned Systems)

9 pages, 240 KB  
Article
Two-Player Nonzero-Sum Stochastic Differential Games with Switching Controls
by Yongxin Liu and Hui Min
Mathematics 2024, 12(24), 3976; https://doi.org/10.3390/math12243976 - 18 Dec 2024
Viewed by 884
Abstract
In this paper, a two-player nonzero-sum stochastic differential game problem is studied with both players using switching controls. A verification theorem associated with a set of variational inequalities is established as a sufficient criterion for Nash equilibrium, in which the equilibrium switching strategies for the two players, indicating when and where it is optimal to switch, are characterized in terms of the so-called switching regions and continuation regions. The verification theorem is proved in a piecewise way along the sequence of total decision times of the two players. Detailed explanations are also provided to illustrate why the conditions in the verification theorem are imposed. Full article
(This article belongs to the Section D1: Probability and Statistics)

29 pages, 1191 KB  
Article
Integral Reinforcement Learning-Based Online Adaptive Dynamic Event-Triggered Control Design in Mixed Zero-Sum Games for Unknown Nonlinear Systems
by Yuling Liang, Zhi Shao, Hanguang Su, Lei Liu and Xiao Mao
Mathematics 2024, 12(24), 3916; https://doi.org/10.3390/math12243916 - 12 Dec 2024
Viewed by 976
Abstract
Mixed zero-sum games consider both zero-sum and non-zero-sum differential game problems simultaneously. In this paper, multiplayer mixed zero-sum games (MZSGs) are studied by means of an integral reinforcement learning (IRL) algorithm under the dynamic event-triggered control (DETC) mechanism for completely unknown nonlinear systems. Firstly, an adaptive dynamic programming (ADP)-based on-policy approach is proposed for solving the MZSG problem for the nonlinear system with multiple players. Secondly, to avoid using dynamic information of the system, a model-free control strategy is developed by utilizing actor–critic neural networks (NNs) for addressing the MZSG problem of unknown systems. On this basis, to avoid wasting communication and computing resources, the dynamic event-triggered mechanism is integrated into the integral reinforcement learning algorithm, in which a dynamic triggering condition is designed to further reduce triggering times. With the help of the Lyapunov stability theorem, the system states and weight values of the NNs are proven to be uniformly ultimately bounded (UUB). Finally, two examples are demonstrated to show the effectiveness and feasibility of the developed control method. Compared with the static event-triggering mode, the simulation results show that the number of actuator updates under the DETC mechanism is reduced by 55% and 69%, respectively. Full article
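The paper's dynamic triggering condition is not reproduced here; the sketch below shows the generic Girard-style mechanism that dynamic event-triggered control builds on, in which an internal variable eta lets the static threshold be violated transiently and therefore fires fewer events than the static rule ||e||^2 >= sigma*||x||^2. All names and parameter values are illustrative assumptions.

import numpy as np

sigma, lam, theta, dt = 0.1, 1.0, 5.0, 0.01  # assumed constants

def should_trigger(eta, x, e):
    """Return (trigger, gap): trigger when eta + theta*(sigma*||x||^2 - ||e||^2) <= 0."""
    gap = sigma * np.dot(x, x) - np.dot(e, e)
    return eta + theta * gap <= 0.0, gap

def update_eta(eta, gap):
    """Internal variable evolving as d(eta)/dt = -lam*eta + gap (forward Euler step)."""
    return eta + dt * (-lam * eta + gap)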

46 pages, 1633 KB  
Article
Stochastic Differential Games and a Unified Forward–Backward Coupled Stochastic Partial Differential Equation with Lévy Jumps
by Wanyang Dai
Mathematics 2024, 12(18), 2891; https://doi.org/10.3390/math12182891 - 16 Sep 2024
Viewed by 2191
Abstract
We establish a relationship between stochastic differential games (SDGs) and a unified forward–backward coupled stochastic partial differential equation (SPDE) with discontinuous Lévy jumps. The SDGs have q players and are driven by a general-dimensional vector Lévy process. By establishing a vector-form Itô-Ventzell formula and a 4-tuple vector-field solution to the unified SPDE, we obtain a Pareto optimal Nash equilibrium policy process or a saddle point policy process to the SDG in a non-zero-sum or zero-sum sense. The unified SPDE is in both a general-dimensional vector form and forward–backward coupling manner. The partial differential operators in its drift, diffusion, and jump coefficients are in time-variable and position parameters over a domain. Since the unified SPDE is of general nonlinearity and a general high order, we extend our recent study from the existing Brownian motion (BM)-driven backward case to a general Lévy-driven forward–backward coupled case. In doing so, we construct a new topological space to support the proof of the existence and uniqueness of an adapted solution of the unified SPDE, which is in a 4-tuple strong sense. The construction of the topological space is through constructing a set of topological spaces associated with a set of exponents {γ1, γ2, …} under a set of general localized conditions, which is significantly different from the construction of the single exponent case. Furthermore, due to the coupling from the forward SPDE and the involvement of the discontinuous Lévy jumps, our study is also significantly different from the BM-driven backward case. The coupling between forward and backward SPDEs essentially corresponds to the interaction between noise encoding and noise decoding in the current hot diffusion transformer model for generative AI. Full article

21 pages, 842 KB  
Article
Optimal Asymptotic Tracking Control for Nonzero-Sum Differential Game Systems with Unknown Drift Dynamics via Integral Reinforcement Learning
by Chonglin Jing, Chaoli Wang, Hongkai Song, Yibo Shi and Longyan Hao
Mathematics 2024, 12(16), 2555; https://doi.org/10.3390/math12162555 - 18 Aug 2024
Cited by 1 | Viewed by 1442
Abstract
This paper employs an integral reinforcement learning (IRL) method to investigate the optimal tracking control problem (OTCP) for nonlinear nonzero-sum (NZS) differential game systems with unknown drift dynamics. Unlike existing methods, which can only bound the tracking error, the proposed approach ensures that the tracking error asymptotically converges to zero. This study begins by constructing an augmented system using the tracking error and reference signal, transforming the original OTCP into solving the coupled Hamilton–Jacobi (HJ) equation of the augmented system. Because the HJ equation contains unknown drift dynamics and cannot be directly solved, the IRL method is utilized to convert the HJ equation into an equivalent equation without unknown drift dynamics. To solve this equation, a critic neural network (NN) is employed to approximate the complex value function based on the tracking error and reference information data. For the unknown NN weights, the least squares (LS) method is used to design an estimation law, and the convergence of the weight estimation error is subsequently proven. The approximate solution of optimal control converges to the Nash equilibrium, and the tracking error asymptotically converges to zero in the closed-loop system. Finally, the effectiveness of the proposed method is validated in MATLAB, using the ode45 solver and the least squares method to execute Algorithm 2. Full article
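In IRL-based schemes of this kind, the critic is typically linear in a chosen feature vector, V(x) ≈ Wᵀφ(x), so estimating the weights from trajectory data reduces to a batch least-squares problem. The sketch below shows only that generic step; the feature map, the Bellman targets, and the regularization are assumptions and are not the paper's Algorithm 2.

import numpy as np

def estimate_critic_weights(Phi, targets, reg=1e-6):
    """Batch least-squares estimate of W in V(x) ≈ W^T phi(x).

    Phi: (N, m) array whose rows are feature vectors phi(x_k) along the trajectory.
    targets: (N,) array of IRL Bellman targets built from the collected data.
    reg: small Tikhonov term that keeps the normal equations well conditioned.
    """
    m = Phi.shape[1]
    A = Phi.T @ Phi + reg * np.eye(m)
    return np.linalg.solve(A, Phi.T @ targets)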

10 pages, 223 KB  
Article
Nash’s Existence Theorem for Non-Compact Strategy Sets
by Xinyu Zhang, Chunyan Yang, Renjie Han and Shiqing Zhang
Mathematics 2024, 12(13), 2017; https://doi.org/10.3390/math12132017 - 28 Jun 2024
Viewed by 924
Abstract
In this paper, we apply the classical FKKM lemma to obtain the Ky Fan minimax inequality on nonempty non-compact convex subsets of reflexive Banach spaces, and then apply it to game theory to obtain Nash’s existence theorem for non-compact strategy sets. This can be regarded as a new, simple but interesting application of the FKKM lemma and the Ky Fan minimax inequality, and it also yields another proof of John von Neumann’s famous existence theorem for two-player zero-sum games. Due to the results of Li, Shi and Chang, the coerciveness in the conclusion can be replaced with the P.S. or G.P.S. conditions. Full article
(This article belongs to the Special Issue Nonlinear Functional Analysis: Theory, Methods, and Applications)
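For context, the classical compact form of the Ky Fan minimax inequality that the paper extends reads as follows; the paper's contribution is precisely to relax the compactness of X (replacing it with coerciveness, or with P.S./G.P.S.-type conditions).

Let $X$ be a nonempty compact convex subset of a Hausdorff topological vector space and let $f : X \times X \to \mathbb{R}$ be quasi-concave in its first argument and lower semicontinuous in its second. Then there exists $y^{*} \in X$ such that
\[
\sup_{x \in X} f(x, y^{*}) \;\le\; \sup_{x \in X} f(x, x).
\]
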
14 pages, 1027 KB  
Article
Optimal Voltage Recovery Learning Control for Microgrids with N-Distributed Generations via Hybrid Iteration Algorithm
by Lüeshi Li, Ruizhuo Song and Shuqing Dong
Electronics 2024, 13(11), 2093; https://doi.org/10.3390/electronics13112093 - 28 May 2024
Viewed by 1254
Abstract
Considering that the nonlinearity and uncertainty of the microgrid model complicate the derivation and design of the optimal controller, an adaptive dynamic programming (ADP) algorithm is designed to solve the model-free non-zero-sum game. By combining the advantages of policy iteration and value iteration, an optimal learning control scheme based on hybrid iteration is constructed to provide stringent real power sharing for the nonlinear and coupled microgrid systems with N-distributed generations. First, using a non-zero-sum differential game strategy, a novel distributed secondary voltage recovery consensus optimal control protocol is built via a hybrid iteration method to realize the voltage recovery of microgrids. Then, the data of the system state and input are gathered along a dynamic system trajectory, and a data-driven optimal controller learns the game solution without microgrid physics information, enhancing convenience and efficiency in practical applications. Furthermore, the convergence analysis is given in detail, and it is proved that the control protocol converges to the optimal solution, ensuring the stability of the voltage recovery of the microgrid system. Simulation results validate the feasibility and effectiveness of the proposed scheme. Full article
(This article belongs to the Special Issue Intelligent Mobile Robotic Systems: Decision, Planning and Control)

16 pages, 656 KB  
Article
Learning-Based Multi-Domain Anti-Jamming Communication with Unknown Information
by Yongcheng Li, Jinchi Wang and Zhenzhen Gao
Electronics 2023, 12(18), 3901; https://doi.org/10.3390/electronics12183901 - 15 Sep 2023
Cited by 6 | Viewed by 1603
Abstract
Due to the open nature of the wireless channel, wireless networks are vulnerable to jamming attacks. In this paper, we try to solve the anti-jamming problem caused by smart jammers, which can adaptively adjust the jamming channel and the jamming power. The interaction between the legitimate transmitter and the jammers is modeled as a non-zero-sum game. Considering that it is challenging for the transmitter and the jammers to acquire each other’s information, we propose two anti-jamming communication schemes based on the Deep Q-Network (DQN) algorithm and the hierarchical learning (HL) algorithm to solve the non-zero-sum game. Specifically, the DQN-based scheme aims to solve the anti-jamming strategies in the frequency domain and the power domain directly, while the HL-based scheme tries to find the optimal mixed strategies for the Nash equilibrium. Simulation results are presented to validate the effectiveness of the proposed schemes. It is shown that the HL-based scheme has better convergence performance and the DQN-based scheme achieves a higher converged transmitter utility. In the case of a single jammer, the DQN-based scheme achieves 80% of the transmitter’s utility in the no-jamming case, while the HL-based scheme achieves 63%. Full article
(This article belongs to the Special Issue Advances in Deep Learning-Based Wireless Communication Systems)
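The paper's DQN-based scheme is not reproduced here; the sketch below is a deliberately simplified tabular Q-learning stand-in over joint (channel, power-level) actions against a randomly hopping jammer, meant only to show the shape of the frequency/power decision problem. The reward model, the state definition, and all constants are illustrative assumptions.

import random
import numpy as np

N_CHANNELS, N_POWERS = 4, 3
N_ACTIONS = N_CHANNELS * N_POWERS
Q = np.zeros((N_ACTIONS, N_ACTIONS))  # toy state = index of the previous action

def reward(action, jammer_channel):
    """Toy utility: throughput if not jammed, minus a power cost."""
    channel, power = divmod(action, N_POWERS)
    rate = 0.0 if channel == jammer_channel else np.log2(2.0 + power)
    return rate - 0.1 * (power + 1)

alpha, gamma, eps = 0.1, 0.9, 0.1
state = 0
for _ in range(5000):
    action = random.randrange(N_ACTIONS) if random.random() < eps else int(Q[state].argmax())
    jammer_channel = random.randrange(N_CHANNELS)  # placeholder jammer policy
    r = reward(action, jammer_channel)
    next_state = action
    Q[state, action] += alpha * (r + gamma * Q[next_state].max() - Q[state, action])
    state = next_state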

14 pages, 330 KB  
Article
Non-Zero Sum Nash Game for Discrete-Time Infinite Markov Jump Stochastic Systems with Applications
by Yueying Liu, Zhen Wang and Xiangyun Lin
Axioms 2023, 12(9), 882; https://doi.org/10.3390/axioms12090882 - 15 Sep 2023
Cited by 4 | Viewed by 1487
Abstract
This paper studies the finite-horizon linear quadratic (LQ) non-zero-sum Nash game for discrete-time infinite Markov jump stochastic systems (IMJSSs). Based on the theory of stochastic analysis, a countably infinite set of coupled generalized algebraic Riccati equations is solved, and a necessary and sufficient condition for the existence of Nash equilibrium points is obtained. From a new perspective, finite-horizon mixed robust H2/H∞ control is investigated, and the relationship between the Nash game and the H2/H∞ control problem is summarized. Moreover, the feasibility and validity of the proposed method are demonstrated by applying it to a numerical example. Full article
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties II)
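To give a feel for what coupled algebraic Riccati equations look like in the simplest setting, the deterministic continuous-time two-player analogue is written out below in notation that is not the paper's; the paper's countably infinite set of coupled generalized Riccati equations for discrete-time infinite Markov jump systems generalizes this structure.

For dynamics $\dot{x} = Ax + B_1 u_1 + B_2 u_2$ and costs $J_i = \int_0^{\infty} \bigl(x^{T} Q_i x + u_i^{T} R_i u_i\bigr)\,dt$, a linear feedback Nash equilibrium $u_i^{*} = -R_i^{-1} B_i^{T} P_i x$ is characterized by the coupled equations
\[
0 = P_i\bigl(A - S_j P_j\bigr) + \bigl(A - S_j P_j\bigr)^{T} P_i + Q_i - P_i S_i P_i,
\qquad S_k = B_k R_k^{-1} B_k^{T}, \quad \{i, j\} = \{1, 2\},\ j \neq i.
\]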