A Strength Allocation Bayesian Game Method for Swarming Unmanned Systems
Highlights
- A swarming strength allocation Bayesian game model under incomplete information is established, addressing the limitations of prior complete information game models and enabling optimal strength allocation for protecting high-value targets.
- An improved Lanchester equation-based benefit quantification method is proposed to predict swarming strength attrition without being restricted by the number of agents, and a Bayesian Nash equilibrium solving algorithm with defense effectiveness is designed to improve the efficiency and operability of strategy selection.
- Provides a practical solution for the protection of high-value targets by swarming unmanned systems under incomplete information, optimizing resource utilization and reducing attrition.
- Proposes an optimal strategy for strength allocation using Bayesian game theory, enabling decision makers to select specific and executable strategies, and provides theoretical support.
Abstract
1. Introduction
- This study explores the mechanism of swarming strength allocation under incomplete information by introducing the Harsanyi transformation. Based on this, the SSABG model is presented to allocate strength in the swarming protection process. In contrast with prior research in [33], which assumes complete information, this study addresses the setting in which the opposing players cannot acquire complete information about each other.
- A benefit quantification method is proposed using an improved Lanchester equation. It predicts swarming strength attrition by regarding swarming groups with the same function as a whole unit. Compared with the previous work in [32], which takes an individual-agent perspective, this method predicts swarming strength without limits on the number of individuals; an illustrative numerical sketch follows this list. Additionally, the fuel budget and time cost are taken into account in the benefit quantification.
- A swarming strength allocation algorithm is designed by introducing the mixed-strategy Bayesian Nash equilibrium (BNE). The algorithm comprises three components: dominant-strategy pre-judgment, pure-strategy Nash equilibrium (NE) calculation, and mixed-strategy BNE calculation. Pre-judging the dominant strategy reduces the algorithm's computational complexity and improves its efficiency. Additionally, a strategy effectiveness calculation method is proposed that converts the probability distribution of the mixed-strategy BNE into specific, executable strategies, avoiding the low operability of strategies that are expressed only in probabilistic terms and cannot be selected in practice.
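The group-level attrition idea behind the Lanchester-based benefit quantification can be illustrated with a minimal sketch. The square-law form, the coefficients, the step size, and the strengths below are illustrative assumptions rather than the paper's calibrated model; the point is only that each swarm group is propagated as a whole unit, so the prediction cost does not grow with the number of individual agents.

```python
def lanchester_attrition(x0, y0, a, b, t_max=100.0, dt=0.1):
    """Group-level attrition prediction with a Lanchester square-law model.

    x0, y0 : initial strengths of the attacking (A) and defending (D) groups
    a, b   : assumed attrition-rate coefficients (not the paper's values)
    t_max  : engagement time budget
    Returns the surviving strengths and the elapsed engagement time.
    """
    x, y, t = float(x0), float(y0), 0.0
    while x > 0.0 and y > 0.0 and t < t_max:
        dx = -b * y * dt  # A's losses caused by D in one time step
        dy = -a * x * dt  # D's losses caused by A in one time step
        x, y, t = max(x + dx, 0.0), max(y + dy, 0.0), t + dt
    return x, y, t

# Example: 100 attacking units against 80 defending units with assumed rates.
x_left, y_left, t_used = lanchester_attrition(100, 80, a=0.02, b=0.015)
print(f"A remaining: {x_left:.1f}, D remaining: {y_left:.1f}, time used: {t_used:.1f}")
```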
2. SSABG Mechanism Analysis and Scenario Illustration
2.1. SSABG Mechanism Analysis
2.2. High-Value Targets Protection Scenario Illustration
3. Main Results
3.1. Swarming Strength Allocation Bayesian Game Model
- N = {A, D} represents the set of players, where A denotes the attacker side and D denotes the defender side.
- T represents the set of players' types, including T_A = {t_A^1, t_A^2, ..., t_A^m} for A and T_D = {t_D^1, t_D^2, ..., t_D^n} for D.
- S represents the set of strategies, with S_A = {s_A^1, s_A^2, ...} as A's strength allocation strategies and S_D = {s_D^1, s_D^2, ...} as D's strength allocation strategies.
- P represents the set of prior beliefs. For all t_A^j ∈ T_A and t_D^i ∈ T_D, P = {P_A, P_D}, where P_A(t_D^i) is A's prior belief about D's type, with Σ_i P_A(t_D^i) = 1, and P_D(t_A^j) is D's prior belief about A's type, with Σ_j P_D(t_A^j) = 1.
- U represents the set of benefit functions. For all t_A^j ∈ T_A, t_D^i ∈ T_D, s_A ∈ S_A, and s_D ∈ S_D, U_A(t_A^j, s_A, s_D) is A's benefit when the j-th type of A uses strategy s_A against D's strategy s_D, and U_D(t_D^i, s_D, s_A) is D's benefit when the i-th type of D uses strategy s_D against A's strategy s_A.
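As a minimal sketch of how the tuple G = {N, T, S, P, U} could be organized in code, the container below uses hypothetical field and label names; the paper does not prescribe a data structure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class SSABGModel:
    """Bayesian game tuple G = {N, T, S, P, U} (illustrative field names)."""
    players: Tuple[str, str] = ("A", "D")                               # N
    types: Dict[str, List[str]] = field(default_factory=dict)           # T: types of each player
    strategies: Dict[str, List[str]] = field(default_factory=dict)      # S: allocation strategies of each player
    priors: Dict[str, Dict[str, float]] = field(default_factory=dict)   # P: each player's belief over the opponent's types
    benefits: Dict[Tuple[str, str, str, str], Tuple[float, float]] = field(default_factory=dict)
    # U: (A type, D type, A strategy, D strategy) -> (A benefit, D benefit)

# A toy instance with two types and two strategies per side (made-up numbers).
g = SSABGModel(
    types={"A": ["tA1", "tA2"], "D": ["tD1", "tD2"]},
    strategies={"A": ["sA1", "sA2"], "D": ["sD1", "sD2"]},
    priors={"A": {"tD1": 0.6, "tD2": 0.4}, "D": {"tA1": 0.5, "tA2": 0.5}},
)
g.benefits[("tA1", "tD1", "sA1", "sD1")] = (-41.7, -18.3)  # (U_A, U_D) for one profile
```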
3.2. Swarming Strength Allocation Benefit Quantification Model
3.3. The Swarming Strength Allocation Algorithm Based on Nash Equilibrium
Algorithm 1 Swarming strength allocation algorithm based on Bayesian games
Input: SSABG model G = {N, T, S, P, U}
Output: Swarming strength allocation strategies.
1: Initialize the game model G.
2: for each player in [A, D] do
3:   Establish the type set T
4:   Establish the strategy set S
5:   Determine the player's type via the Harsanyi transformation
6: end for
7: for each strategy pair (s_A, s_D) do
8:   for each player in [A, D] do
9:     Calculate the player's cost and benefit for the current strategy pair
10:   end for
11: end for
12: Generate the game benefit value matrix U.
13: Identify dominant strategies from U.
14: if U has dominant strategies then
15:   Call the pure-strategy NE solving sub-algorithm (Algorithm 2).
16: else
17:   Call the mixed-strategy BNE solving sub-algorithm (Algorithm 3).
18: end if
19: for each player in [A, D] do
20:   Evaluate and rank the player's strategies
21: end for
22: return the swarming strength allocation strategy results.
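Steps 13-14 of Algorithm 1 pre-judge dominant strategies before any equilibrium computation. The sketch below shows one way to perform that check on a single player's benefit matrix; it is an assumption for illustration, not the paper's implementation, and the numeric values are made up.

```python
import numpy as np

def dominant_strategy(payoff, player="row"):
    """Return the index of a strictly dominant strategy, or None if none exists.

    payoff : 2-D array of the player's own benefits; rows are that player's
             strategies and columns are the opponent's strategies. For the
             column player, pass the matrix with player="col".
    """
    m = np.asarray(payoff, dtype=float)
    if player == "col":
        m = m.T
    for i in range(m.shape[0]):
        others = np.delete(m, i, axis=0)
        if np.all(m[i] > others):  # strictly better against every opponent strategy
            return i
    return None

# Toy check with assumed benefit values (not the paper's matrices).
U_A = np.array([[3.0, 1.0],
                [2.0, 0.5]])
print(dominant_strategy(U_A))  # -> 0: the first allocation strategy dominates
```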
Algorithm 2 Pure-strategy NE solving sub-algorithm
Input: SSABG model G
Output: Swarming strength allocation strategies.
Algorithm 3 Mixed-strategy BNE solving sub-algorithm
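When no dominant strategy exists, Algorithm 3 falls back to a mixed-strategy solution. The sketch below illustrates only the simplest building block, a 2x2 bimatrix stage game solved through the indifference conditions; the full BNE would additionally weight these payoffs by the prior beliefs over opponent types, which is omitted here, and all numbers are assumed.

```python
def mixed_equilibrium_2x2(A, B):
    """Mixed strategies for a 2x2 bimatrix game via indifference conditions.

    A[i][j], B[i][j] : row / column player's benefits when row plays i and
    column plays j. Returns ((p, 1-p), (q, 1-q)), or None when a denominator
    vanishes or the solution is not a valid probability (use a pure NE then).
    """
    den_p = B[0][0] - B[1][0] - B[0][1] + B[1][1]
    den_q = A[0][0] - A[0][1] - A[1][0] + A[1][1]
    if den_p == 0 or den_q == 0:
        return None
    p = (B[1][1] - B[1][0]) / den_p  # row player's probability of strategy 0
    q = (A[1][1] - A[0][1]) / den_q  # column player's probability of strategy 0
    if not (0 <= p <= 1 and 0 <= q <= 1):
        return None                  # no fully mixed equilibrium in this game
    return (p, 1 - p), (q, 1 - q)

# Assumed benefit values for one (type_A, type_D) pair, not the paper's data.
A = [[2.0, -1.0], [-1.5, 1.0]]
B = [[-2.0, 1.0], [1.5, -1.0]]
print(mixed_equilibrium_2x2(A, B))
```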
3.4. Methods for Comparison
- Random allocation (RA) [41]: a random strength allocation method, making A and D randomly select an allocation strategy for interception or protection.
- Greedy heuristic (GH) [41]: a greedy strength allocation method, making A and D always select the allocation strategy with the highest benefit (a toy sketch of the RA and GH baselines follows this list).
- Rule-based assignment (RBA) [42]: a rule-based strength allocation method, making A and D select an allocation strategy according to predefined rules.
- Colonel Blotto game (CBG) [28]: a game theory-based allocation method, making A and D allocate strength at NE or BNE across targets.
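A toy rendering of how the RA and GH baselines could act on one player's benefit matrix is given below. The tie-breaking and the "highest benefit" reading (best case over opponent replies) are assumptions made for illustration, since the implementations in [41] are not reproduced here.

```python
import random
import numpy as np

def random_allocation(n_strategies, seed=None):
    """RA baseline: pick one allocation strategy uniformly at random."""
    rng = random.Random(seed)
    return rng.randrange(n_strategies)

def greedy_allocation(own_benefit):
    """GH baseline: pick the strategy whose best-case own benefit is highest.

    own_benefit : rows = own strategies, columns = opponent strategies.
    (An expected-value reading would average each row instead of taking max.)
    """
    m = np.asarray(own_benefit, dtype=float)
    return int(np.argmax(m.max(axis=1)))

U_A = np.array([[-41.7, -31.7], [-45.2, -50.2]])  # assumed benefit rows for A
print(random_allocation(U_A.shape[0], seed=0), greedy_allocation(U_A))
```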
4. Numerical Simulations and Result Analysis
4.1. Simulation Parameter Settings
4.2. Simulation Results
4.2.1. Comparison of Increasing Strength Scale
- Case #1: Symmetric and small-scale protection scenario, in which A's and D's initial strengths are equal.
- Case #2: Defender-disadvantage scenario, in which A's initial strength is larger than D's.
- Case #3: Defender-advantage scenario, in which D's initial strength is larger than A's.
- Case #4: Large-scale protection scenario, in which both sides' initial strengths are increased to a large scale.
4.2.2. Comparison of Different Prior Probabilities
4.2.3. Comparison of Execution Time
4.3. Results Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
DURC Statement
Conflicts of Interest
Appendix A. Game Benefits and Attrition Rate in Cases #1–#4
[−41.74, −18.26] | [−31.74, 60.74] | [−46.52, −14.48] | [−25.55, 61.55] | |
[−45.15, 38.15] | [−50.16, 34.16] | [−41.83, −36.17] | [−40.85, 158.85] | |
[−307.02, 231.02] | [−292.98, 243.98] | [−134.78, 97.78] | [−128.17, 135.17] | |
[−381.16, 207.16] | [−380.11, 216.11] | [−199.55, 116.55] | [−99.95, 203.95] |
[−60.12, 32.12] | [−47.14, 42.14] | [−41.11, 11.11] | [−63.14, 91.14] | |
[−55.42, −16.58] | [−66.45, 68.45] | [−58.29, 60.29] | [−70.31, 106.31] | |
[−332.31, 264.31] | [−326.01, 221.01] | [−115.12, 63.12] | [−114.1, 164.1] | |
[−406.95, 271.95] | [−416.67, 227.67] | [−203.16, 73.16] | [−94.13, 196.13] |
[−477.98, 417.98] | [−467.78, 496.78] | [132.38, −193.38] | [152.2, −116.2] | |
[−482.52, 475.52] | [−487.3, 471.3] | [135.72, −213.72] | [136.68, −18.68] | |
[−561.58, 485.58] | [−547.57, 498.57] | [−311.48, 274.48] | [−304.54, 311.54] | |
[−635.66, 461.66] | [−634.65, 470.65] | [−377.38, 294.38] | [−277.46, 381.46] |
[136.3, −164.3] | [149.26, −154.26] | [166.25, −196.25] | [144.22, −116.22] | |
[140.46, −212.46] | [129.44, −127.44] | [148.98, −146.98] | [136.96, −100.96] | |
[−564.88, 496.88] | [−558.75, 453.75] | [−15.65, −36.35] | [−15.62, 65.62] | |
[−638.31, 503.31] | [−648.17, 459.17] | [−105.21, −24.79] | [3.94, 98.06] |
[304.64, −364.64] | [314.59, −285.59] | [342.4, −403.4] | [363.37, −327.37] | |
[299.65, −306.65] | [294.6, −310.6] | [347.1, −425.1] | [348.08, −230.08] | |
[−645.52, 569.52] | [−631.42, 582.42] | [53.19, −90.19] | [59.88, −52.88] | |
[−719.78, 545.78] | [−718.68, 554.68] | [−11.27, −71.73] | [88.35, 15.65] |
[344.95, −372.95] | [357.93, −362.93] | [378.93, −408.93] | [356.92, −328.92] | |
[349.65, −421.65] | [338.62, −336.62] | [361.78, −359.78] | [349.76, −313.76] | |
[−577.88, 509.88] | [−571.5, 466.5] | [193.87, −245.87] | [194.88, −144.88] | |
[−652.21, 517.21] | [−661.84, 472.84] | [105.58, −235.58] | [214.6, −112.6] |
[731.08, −791.08] | [740.97, −711.97] | [813.41, −874.41] | [834.41, −798.41] | |
[726.32, −733.32] | [721.21, −737.21] | [818.1, −896.1] | [819.09, −701.09] | |
[−1052.69, 976.69] | [−1038.58, 989.58] | [272.33, −309.33] | [279.03, −272.03] | |
[−1127.33, 953.33] | [−1126.21, 962.21] | [208.44, −291.44] | [307.36, −203.36] |
[834.83, −862.83] | [847.79, −852.79] | [882.7, −912.7] | [860.69, −832.69] | |
[839.53, −911.53] | [828.48, −826.48] | [865.54, −863.54] | [853.53, −817.53] | |
[−852.98, 784.98] | [−846.39, 741.39] | [568.74, −620.74] | [569.76, −519.76] | |
[−928.63, 793.63] | [−938.05, 749.05] | [480.18, −610.18] | [589.2, −487.2] |
[0.17, 1] | [0.17, 1] | [0.17, 1] | [0.17, 1] | |
[0.18, 1] | [0.18, 1] | [0.17, 1] | [0.17, 1] | |
[1, 0.14] | [1, 0.14] | [0.42, 1] | [0.41, 1] | |
[1, 0.14] | [1, 0.14] | [0.44, 1] | [0.43, 1] |
[0.27, 1] | [0.27, 1] | [0.26, 1] | [0.26, 1] | |
[0.27, 1] | [0.27, 1] | [0.26, 1] | [0.26, 1] | |
[1, 0.14] | [1, 0.14] | [0.12, 1] | [0.12, 1] | |
[1, 0.15] | [1, 0.15] | [0.13, 1] | [0.13, 1] |
[1, 0.16] | [1, 0.16] | [0.1, 1] | [0.11, 1] | |
[1, 0.16] | [1, 0.16] | [0.11, 1] | [0.11, 1] | |
[1, 0.03] | [1, 0.03] | [0.83, 0.54] | [0.83, 0.54] | |
[1, 0.03] | [1, 0.03] | [0.83, 0.53] | [0.83, 0.54] |
[0.02, 1] | [0.02, 1] | [0.01, 1] | [0.01, 1] | |
[0.02, 1] | [0.02, 1] | [0.01, 1] | [0.01, 1] | |
[1, 0.09] | [1, 0.09] | [0.43, 1] | [0.44, 1] | |
[1, 0.09] | [1, 0.09] | [0.44, 1] | [0.44, 1] |
[0.16, 1] | [0.16, 1] | [0.08, 1] | [0.08, 1] | |
[0.16, 1] | [0.16, 1] | [0.08, 1] | [0.08, 1] | |
[1, 0.15] | [1, 0.15] | [0.55, 1] | [0.55, 1] | |
[1, 0.15] | [1, 0.15] | [0.56, 1] | [0.55, 1] |
[0.03, 1] | [0.03, 1] | [0.01, 1] | [0.01, 1] | |
[0.03, 1] | [0.03, 1] | [0.01, 1] | [0.01, 1] | |
[1, 0.32] | [1, 0.32] | [0.28, 1] | [0.28, 1] | |
[1, 0.32] | [1, 0.32] | [0.28, 1] | [0.28, 1] |
[0.18, 1] | [0.18, 1] | [0.09, 1] | [0.09, 1] | |
[0.18, 1] | [0.18, 1] | [0.1, 1] | [0.1, 1] | |
[1, 0.18] | [1, 0.18] | [0.57, 1] | [0.57, 1] | |
[1, 0.18] | [1, 0.18] | [0.57, 1] | [0.57, 1] |
[0.05, 1] | [0.05, 1] | [0.03, 1] | [0.03, 1] | |
[0.05, 1] | [0.05, 1] | [0.03, 1] | [0.03, 1] | |
[1, 0.4] | [1, 0.4] | [0.29, 1] | [0.29, 1] | |
[1, 0.39] | [1, 0.39] | [0.29, 1] | [0.29, 1] |
References
- Zheng, Z.; Wei, C.; Duan, H. UAV swarm air combat maneuver decision-making method based on multi-agent reinforcement learning and transferring. Sci. China Inf. Sci. 2024, 67, 180204.
- Zhao, P.; Wang, J.; Kong, L. Decentralized Algorithms for Weapon-Target Assignment in Swarming Combat System. Math. Probl. Eng. 2019, 2019, 8425403.
- Zhang, J.; Han, K.; Zhang, P.; Hou, Z.; Ye, L. A survey on joint-operation application for unmanned swarm formations under a complex confrontation environment. J. Syst. Eng. Electron. 2023, 34, 1432–1446.
- Hayat, S.; Yanmaz, E.; Muzaffar, R. Survey on unmanned aerial vehicle networks for civil applications: A communications viewpoint. IEEE Commun. Surv. Tutorials 2016, 18, 2624–2661.
- Rosalie, M.; Danoy, G.; Chaumette, S.; Bouvry, P. Chaos-enhanced mobility models for multilevel swarms of UAVs. Swarm Evol. Comput. 2018, 41, 36–48.
- Zhang, B.; Liao, J.; Kuang, Y.; Zhang, M.; Zhou, S.; Kang, Y. Research status and development of the United States UAV swarm battlefield. Aero Weapon. 2020, 27, 7–12.
- Wang, K.; Wang, D.; Meng, Q. Overview on the development of United States unmanned aerial vehicle cluster combat system. In Proceedings of the 2024 International Conference on Unmanned Aircraft Systems, Chania, Greece, 4–7 June 2024; pp. 92–95.
- Zhang, C.; Liu, T.; Bai, G.; Tao, J.; Zhu, W. A dynamic resilience evaluation method for cross-domain swarms in confrontation. Reliab. Eng. Syst. Saf. 2024, 244, 109904.
- Xing, D.; Zhen, Z.; Gong, H. Offense-defense confrontation decision making for dynamic UAV swarm versus UAV swarm. Proc. Inst. Mech. Eng. Part G J. Aerosp. Eng. 2019, 233, 5689–5702.
- Li, Y.; Liu, Z.; Hong, Y.; Wang, J.; Wang, J.; Li, Y.; Tang, Y. Multi-agent reinforcement learning based game: A survey. Acta Autom. Sin. 2025, 51, 540–558.
- Lima Filho, G.M.d.; Medeiros, F.L.L.; Passaro, A. Decision support system for unmanned combat air vehicle in beyond visual range air combat based on artificial neural networks. J. Aerosp. Technol. Manag. 2021, 13, e3721.
- Jian, J.; Chen, Y.; Li, Q.; Li, H.; Zheng, X.; Han, C. Decision-Making Method of Multi-UAV Cooperate Air Combat Under Uncertain Environment. IEEE J. Miniaturization Air Space Syst. 2024, 5, 138–148.
- Hao, X.; Fang, Z.; Zhang, J.; Deng, F.; Jiang, A.; Xiao, S. Reinforcement Model for Unmanned Combat System of Systems Based on Multi-Layer Grey Target. J. Grey Syst. 2024, 36, 54–66.
- Luo, B.; Hu, T.; Zhou, Y.; Huang, T.; Yang, C.; Gui, W. Survey on multi-agent reinforcement learning for control and decision-making. Acta Autom. Sin. 2025, 51, 510–539.
- Sha, J. Mathematic Tactics; Science Press: Beijing, China, 2003.
- Chen, X.; Cao, J.; Zhao, F.; Jiang, X. Nash equilibrium analysis of hybrid dynamic games system based on event-triggered control. Control Theory Appl. 2021, 38, 1801–1808.
- Chen, X.; Wang, D.; Zhao, F.; Guo, M.; Qiu, J. A Viewpoint on Construction of Networked Model of Event-triggered Hybrid Dynamic Games. In Proceedings of the 2022 IEEE Conference on Games (CoG), Beijing, China, 21–24 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 473–477.
- Elliott, D.S.; Vatsan, M. Efficient Computation of Weapon-Target Assignments Using Abstraction. IEEE Control Syst. Lett. 2023, 7, 3717–3722.
- Nam Eung, H.; Hyung Jun, K. Static Weapon-Target Assignment Based on Battle Probabilities and Time-Discounted Reward. Def. Sci. J. 2024, 74, 662–670.
- Silav, A.; Karasakal, E.; Karasakal, O. Bi-objective dynamic weapon-target assignment problem with stability measure. Ann. Oper. Res. 2022, 311, 1229–1247.
- Wang, Z.; Lu, Y.; Li, X.; Li, Z. Optimal defense strategy selection based on the static Bayesian game. J. Xidian Univ. 2019, 46, 55–61.
- Liu, X.; Zhang, H.; Ma, J.; Tan, J. Research review of network defense decision-making methods based on attack and defense game. Chin. J. Netw. Inf. Secur. 2022, 8, 1–14.
- Liu, X.; Zhang, H.; Zhang, Y.; Hu, H.; Cheng, J. Modeling of network attack and defense behavior and analysis of situation evolution based on game theory. J. Electron. Inf. Technol. 2021, 43, 3629–3638.
- Kline, A.; Ahner, D.; Hill, R. The weapon-target assignment problem. Comput. Oper. Res. 2019, 105, 226–236.
- He, W.; Tan, J.; Guo, Y.; Shang, K.; Zhang, H. A Deep Reinforcement Learning-Based Deception Asset Selection Algorithm in Differential Games. IEEE Trans. Inf. Forensics Secur. 2024, 19, 8353–8368.
- Deng, L.; Wu, J.; Shi, J.; Xia, J.; Liu, Y.; Yu, X. Research on intelligent decision technology for multi-UAVs prevention and control. In Proceedings of the 2020 Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 5362–5367.
- Xuan, S.; Ke, L. UAV swarm attack-defense confrontation based on multi-agent reinforcement learning. In Advances in Guidance, Navigation and Control: Proceedings of the 2020 International Conference on Guidance, Navigation and Control, ICGNC 2020, Tianjin, China, 23–25 October 2020; Springer: Berlin/Heidelberg, Germany, 2021; pp. 5599–5608.
- Ji, X.; Zhang, W.; Xiang, F.; Yuan, W.; Chen, J. A swarm confrontation method based on Lanchester law and Nash equilibrium. Electronics 2022, 11, 896.
- Chi, S.; Li, S.; Wang, C.; Xie, G. A review of research on pursuit-evasion games. Acta Electron. Sin. 2025, 51, 705–726.
- Majumder, R.; Ghose, D. A strategic decision support system using multiplayer non-cooperative games for resource allocation after natural disasters. IEEE Trans. Autom. Sci. Eng. 2022, 20, 2227–2240.
- Yi, N.; Wang, Q.; Yan, L.; Tang, Y.; Xu, J. A multi-stage game model for the false data injection attack from attacker's perspective. Sustain. Energy Grids Netw. 2021, 28, 100541.
- Wei, N.; Liu, M.; Cheng, W. Decision-making of underwater cooperative confrontation based on MODPSO. Sensors 2019, 19, 2211.
- Li, L.; Xiao, B.; Wu, X. Optimal control of strength allocation strategies generation with complex constraints. Trans. Inst. Meas. Control 2025, 47, 634–646.
- Jiang, L.; Zhang, H.; Wang, J. Optimal strategy selection method for moving target defense based on signaling game. J. Commun. 2019, 40, 128–137.
- Liu, L.; Zhang, Q.; Zhao, Z. Operational conception and key technologies of unmanned aerial vehicle swarm interception system. Command Control Simul. 2021, 43, 48–54.
- Lei, C.; Zhang, H.; Wan, L.; Liu, L.; Ma, D. Incomplete information Markov game theoretic approach to strategy generation for moving target defense. Comput. Commun. 2018, 116, 184–199.
- Kanakia, A.; Touri, B.; Correll, N. Modeling multi-robot task allocation with limited information as global game. Swarm Intell. 2016, 10, 147–160.
- Reesman, R.; Wilson, J.R. The Physics of Space War: How Orbital Dynamics Constrain Space-to-Space Engagements; Aerospace Corporation: Singapore, 2020.
- Klinkova, G.; Grabinski, M. A Statistical Analysis of Games with No Certain Nash Equilibrium Make Many Results Doubtful. Appl. Math. 2022, 13, 120–130.
- Wang, J.; Yu, D.; Zhang, H.; Wang, N. Active defense strategy selection based on the static Bayesian game. J. Xidian Univ. 2016, 43, 144–150.
- Yang, Z.; Chen, Y.P.; Gu, F.; Wang, J.; Zhao, L. A Two-Way Update Resource Allocation Strategy in Mobile Edge Computing. Wirel. Commun. Mob. Comput. 2022, 2022, 1117597.
- Yu, X.; Khani, A.; Chen, J.; Xu, H.; Mao, H. Real-time holding control for transfer synchronization via robust multiagent reinforcement learning. IEEE Trans. Intell. Transp. Syst. 2022, 23, 23993–24007.
- Clausewitz, C. On War; Oxford University Press Inc.: Oxford, UK, 2007; Chapter 2; pp. 147–148.
Stage | Description | Content
---|---|---
Stage #1 | Introduce "Nature" as a virtual player |
Stage #2 | Determine players' types |
Stage #3 | Select strategies simultaneously |
Stage #4 | Solve game equilibrium |
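A small sketch of Stages #1-#2 of the Harsanyi transformation, in which the virtual player "Nature" draws each side's type from the prior beliefs, is given below; the type labels and probabilities are illustrative assumptions.

```python
import random

def nature_draws_types(priors, seed=None):
    """'Nature' samples each player's type from its prior distribution."""
    rng = random.Random(seed)
    drawn = {}
    for player, dist in priors.items():
        types, probs = zip(*dist.items())
        drawn[player] = rng.choices(types, weights=probs, k=1)[0]
    return drawn

# Assumed priors over two types per side (not the paper's values).
priors = {"A": {"tA1": 0.5, "tA2": 0.5}, "D": {"tD1": 0.6, "tD2": 0.4}}
print(nature_draws_types(priors, seed=1))
```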
Assumption | Description
---|---
Rational decision making [36] | Both A and D are fully rational, with no cognitive limitations, and make decisions to maximize their own benefits.
Information asymmetry [36] | Neither side can precisely know the other's strategy benefits, but each can infer the opponent's type from a probability distribution, converting strategy uncertainty into uncertainty about players' types.
Perfect communication [37] | Within the swarming unmanned system, inter-agent communication is perfect, and every unit has complete knowledge of its own state. Each unit's type remains invariant throughout the interaction, while its strength evolves. Moreover, no random factors are involved in the SUS.
Constraints | Description |
---|---|
Strength constraints | During the protection process, the strength of A and D is limited by finite fuel and time. |
Time constraints | A and D must complete their tasks within a certain time limit. |
Player | ||
---|---|---|
A | [71, 45, 78, 101] | [110, 159, 121, 195] |
D | [65, 15, 60, 86] | [75, 142, 114, 169] |
Methods | Case #1 | Case #2 |
---|---|---|
RA | : (1,0,0,0) | : (1,0,0,0) |
: (0,1,0,0) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
GH | : (0,0,0,1) | : (0,0,0,1) |
: (0,0,0,1) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
RBA | : (0,1,0,0) | : (0,1,0,0) |
: (0,0,0,1) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
CBG | : (0.25,0.25,0.25,0.25) | : (0.45,0.45,0.1,0) |
: (0.25,0.25,0.25,0.25) | : (0.25,0.25,0.25,0.25) | |
: (0,1,0,0) | : (1,0,0,0) | |
SSABG | : (0,0,1,0) | : (0,0,1,0) |
: (0,0,1,0) | : (0,0,0.75,0.25) | |
: (1,0,0,0) | : (1,0,0,0) |
Methods | Case #3 | Case #4 |
---|---|---|
RA | : (1,0,0,0) | : (1,0,0,0) |
: (0,1,0,0) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
GH | : (0,0,0,1) | : (0,0,0,1) |
: (0,1,0,0) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
RBA | : (0,1,0,0) | : (0,1,0,0) |
: (0,1,0,0) | : (0,1,0,0) | |
: (1,0,0,0) | : (1,0,0,0) | |
CBG | : (0.25,0.25,0.25,0.25) | : (0.25,0.25,0.25,0.25) |
: (0.25,0.25,0.25,0.25) | : (0.25,0.25,0.25,0.25) | |
: (0,1,0,0) | : (0,1,0,0) | |
SSABG | : (0.06,0.47,0.47,0) | : (0.2,0,0.4,0.4) |
: (0,0.28,0.36,0.36) | : (0.13,0.29,0.29,0.29) | |
: (0.42,0.58,0,0) | : (0.42,0.58,0,0) |
Methods | Model Complexity | Scalability | Computational Requirements | General Applicability
---|---|---|---|---
RA | Low | Low | High | Low
GH | Low | Medium | Low | Low
RBA | Medium | Medium | Low | Medium
CBG | Medium | Medium | Medium | Medium
SSABG | High | High | Medium | High
Methods | Assumptions | Constraints | Limitations
---|---|---|---
RA | Random strategy selection | None | Unable to adapt to scenario changes
GH | Strategy selection with the highest benefit | None | Disregards the opponent's strategies
RBA | Strategy selection based on predefined fixed rules | None | Inflexible in dynamic scenarios due to high migration cost
CBG | Complete-information elimination-based selection | None | Unable to handle incomplete-information scenarios
SSABG | Incomplete information, rational decision making, perfect communication | Total strength, engagement time | Advantage is not obvious in small-scale scenarios