SCNOC-Agentic: A Network Operation and Control Agentic for Satellite Communication Systems
Abstract
1. Introduction
- We propose SCNOC-Agentic, a framework for the operation and control of satellite communication systems, marking the first application of an LLM agent architecture to this domain and enhancing the capability for rapid and agile response.
- In SCNOC-Agentic, we introduce four components tailored to the characteristics of the satellite communication domain: intent refinement, multi-agent workflow, personalized long-term memory, and graph-based retrieval (a minimal structural sketch of how these components compose is given after this list). The performance impact of each component is evaluated through ablation experiments.
- We formulate four typical scenarios for applying LLMs in satellite communications: network task planning, carrier and cell optimization, fault analysis of satellites, and satellite management and control, and we detail how SCNOC-Agentic is integrated into each of them. Comparative experiments against current state-of-the-art general-purpose models and agents show that SCNOC-Agentic achieves superior performance across all four scenarios.
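The following Python sketch illustrates how the four components listed above might compose into a single operation-and-control loop. It is a minimal, hypothetical outline written for this description, not the paper's released implementation; all class and function names (e.g., `IntentRefiner`, `scnoc_agentic_step`) are illustrative placeholders.

```python
# Hypothetical structural sketch of the SCNOC-Agentic pipeline described above.
# Component names mirror the paper's four modules, but every class, method, and
# data structure here is illustrative -- the paper does not publish this code.
from dataclasses import dataclass, field


@dataclass
class OperatorIntent:
    raw_request: str          # free-form operator input, e.g. "reroute traffic off beam 12"
    refined_goal: str = ""    # structured goal produced by intent refinement
    context: dict = field(default_factory=dict)


class IntentRefiner:
    """Turns a free-form operator request into a structured, unambiguous goal."""
    def refine(self, intent: OperatorIntent) -> OperatorIntent:
        intent.refined_goal = f"[refined] {intent.raw_request}"  # placeholder for an LLM call
        return intent


class GraphRetriever:
    """Graph-based retrieval over domain knowledge (topology, procedures, past faults)."""
    def retrieve(self, goal: str) -> list[str]:
        return [f"knowledge related to: {goal}"]  # placeholder for graph traversal + ranking


class LongTermMemory:
    """Personalized long-term memory of prior tasks and outcomes."""
    def __init__(self) -> None:
        self._episodes: list[dict] = []

    def recall(self, goal: str) -> list[dict]:
        return [e for e in self._episodes if goal in e.get("goal", "")]

    def store(self, goal: str, outcome: str) -> None:
        self._episodes.append({"goal": goal, "outcome": outcome})


class MultiAgentWorkflow:
    """Coordinates specialist agents (planning, optimization, fault analysis, control)."""
    def run(self, goal: str, knowledge: list[str], history: list[dict]) -> str:
        # In a real system, each step would be delegated to an LLM-backed agent.
        return f"plan for '{goal}' using {len(knowledge)} knowledge items, {len(history)} past episodes"


def scnoc_agentic_step(raw_request: str, memory: LongTermMemory) -> str:
    """One end-to-end pass: refine intent, retrieve knowledge, recall memory, run workflow."""
    intent = IntentRefiner().refine(OperatorIntent(raw_request))
    knowledge = GraphRetriever().retrieve(intent.refined_goal)
    history = memory.recall(intent.refined_goal)
    result = MultiAgentWorkflow().run(intent.refined_goal, knowledge, history)
    memory.store(intent.refined_goal, result)
    return result


if __name__ == "__main__":
    mem = LongTermMemory()
    print(scnoc_agentic_step("Reroute traffic away from the congested beam over region A", mem))
```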
2. Related Work
2.1. AI-Agentic
2.2. LLM for Communication
3. Overview of SCNOC-Agentic
3.1. Intent Refinement
3.2. Multi-Agent Workflow
3.3. Personalized Long-Term Memory
3.4. Graph-Based Retrieval
4. Scene and Problem Formulation
4.1. Network Task Planning
4.2. Carrier and Cell Optimization
4.3. Fault Analysis of Satellite
4.4. Satellite Management and Control
5. Experiments
5.1. Main Results
5.2. Components Ablation Study
5.3. Discussion and Analysis
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Telemetry Status | Fault Payload | Data Size |
|---|---|---|
| Normal | - | 12,124 |
| Fault state 1 | main control unit | 5312 |
| Fault state 2 | satellite-ground switching unit | 5769 |
| Fault state 3 | modem board card unit | 4793 |
| Fault state 4 | inter-satellite switching unit | 4902 |
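On reading the fault-analysis metrics reported in the tables below: Hit@k can be interpreted as the fraction of telemetry samples whose true fault payload appears among the model's top-k ranked candidates. The snippet below is a minimal illustration under that assumption; it is not the paper's evaluation code, and the toy predictions are invented for demonstration only.

```python
# Illustrative sketch only: computing Hit@k over ranked fault hypotheses, assuming the
# agent returns an ordered list of candidate fault payloads for each telemetry sample.
# Class labels follow the dataset table above; everything else is hypothetical.
FAULT_CLASSES = [
    "normal",
    "main control unit",
    "satellite-ground switching unit",
    "modem board card unit",
    "inter-satellite switching unit",
]


def hit_at_k(ranked_predictions: list[list[str]], ground_truth: list[str], k: int) -> float:
    """Fraction of samples whose true fault payload appears in the top-k predictions."""
    assert len(ranked_predictions) == len(ground_truth)
    hits = sum(1 for preds, truth in zip(ranked_predictions, ground_truth) if truth in preds[:k])
    return hits / len(ground_truth)


# Toy usage with two samples (not real model output):
preds = [
    ["modem board card unit", "main control unit", "inter-satellite switching unit"],
    ["normal", "satellite-ground switching unit", "main control unit"],
]
truth = ["main control unit", "satellite-ground switching unit"]
print(hit_at_k(preds, truth, k=1))  # 0.0
print(hit_at_k(preds, truth, k=3))  # 1.0
```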
NTP = Network Task Planning; CCO = Carrier & Cell Optimization.

| Model | NTP ASA | NTP PGA | NTP AoP | CCO ASA | CCO PGA | CCO E2EL (ms) |
|---|---|---|---|---|---|---|
| **General LLMs (default with zero-shot CoT)** | | | | | | |
| GPT-4o | 0.6736 | 0.2046 | 0.835 | 0.7511 | 0.5146 | 302 |
| GPT-4o+CoT5 | 0.7321 | 0.2339 | 0.863 | 0.8164 | 0.5943 | 286 |
| GPT-4o+SC | 0.8101 | 0.3898 | 0.903 | 0.8449 | 0.7177 | 290 |
| GPT3.5 | 0.6171 | 0.1754 | 0.835 | 0.7968 | 0.4577 | 343 |
| GPT3.5+CoT5 | 0.6853 | 0.1968 | 0.846 | 0.7970 | 0.4962 | 318 |
| GPT3.5+SC | 0.7594 | 0.3294 | 0.887 | 0.8132 | 0.6037 | 294 |
| **Llama-3.3-70B-Instruct (default with zero-shot CoT)** | | | | | | |
| RawAgent | 0.6035 | 0.1559 | 0.812 | 0.6697 | 0.4608 | 346 |
| RawAgent+CoT5 | 0.7029 | 0.2202 | 0.840 | 0.7784 | 0.5120 | 280 |
| RawAgent+SC | 0.7769 | 0.3664 | 0.863 | 0.7943 | 0.5753 | 285 |
| SCNOC-Agentic | 0.7029 | 0.3216 | 0.887 | 0.8101 | 0.6575 | 312 |
| SCNOC+CoT5 | 0.7867 | 0.3762 | 0.927 | 0.8354 | 0.7335 | 260 |
| SCNOC+SC | 0.8491 | 0.4619 | 0.945 | 0.8670 | 0.7683 | 277 |
| **Qwen-2.5-70B-Instruct (default with zero-shot CoT)** | | | | | | |
| RawAgent | 0.6346 | 0.1617 | 0.829 | 0.6840 | 0.4703 | 314 |
| RawAgent+CoT5 | 0.7243 | 0.2358 | 0.843 | 0.7689 | 0.5151 | 280 |
| RawAgent+SC | 0.7638 | 0.3586 | 0.872 | 0.8006 | 0.6006 | 274 |
| SCNOC-Agentic | 0.7185 | 0.2904 | 0.854 | 0.8575 | 0.6797 | 298 |
| SCNOC+CoT5 | 0.7828 | 0.3411 | 0.921 | 0.8829 | 0.7145 | 273 |
| SCNOC+SC | 0.8276 | 0.4249 | 0.939 | 0.9082 | 0.7841 | 264 |
FA = Fault Analysis of Satellite; SMC = Satellite Management and Control.

| Model | FA Hit@1 | FA Hit@3 | FA Hit@5 | SMC ASA | SMC PGA |
|---|---|---|---|---|---|
| **General LLMs (default with zero-shot CoT)** | | | | | |
| GPT-4o | 0.190 | 0.315 | 0.725 | 0.8908 | 0.5285 |
| GPT-4o+CoT5 | 0.214 | 0.360 | 0.796 | 0.9430 | 0.5761 |
| GPT-4o+SC | 0.246 | 0.404 | 0.818 | 0.9791 | 0.6238 |
| GPT3.5 | 0.164 | 0.287 | 0.683 | 0.7952 | 0.4809 |
| GPT3.5+CoT5 | 0.182 | 0.373 | 0.742 | 0.8821 | 0.5285 |
| GPT3.5+SC | 0.227 | 0.388 | 0.794 | 0.9256 | 0.5761 |
| **Llama-3.3-70B-Instruct (default with zero-shot CoT)** | | | | | |
| RawAgent | 0.171 | 0.269 | 0.693 | 0.8121 | 0.4285 |
| RawAgent+CoT5 | 0.208 | 0.349 | 0.760 | 0.9169 | 0.5761 |
| RawAgent+SC | 0.203 | 0.370 | 0.781 | 0.9343 | 0.6714 |
| SCNOC-Agentic | 0.185 | 0.341 | 0.755 | 0.9430 | 0.7238 |
| SCNOC+CoT5 | 0.224 | 0.404 | 0.796 | 0.9517 | 0.8714 |
| SCNOC+SC | 0.253 | 0.417 | 0.827 | 0.9691 | 0.9142 |
| **Qwen-2.5-70B-Instruct (default with zero-shot CoT)** | | | | | |
| RawAgent | 0.177 | 0.272 | 0.684 | 0.8213 | 0.4285 |
| RawAgent+CoT5 | 0.190 | 0.343 | 0.770 | 0.9082 | 0.5238 |
| RawAgent+SC | 0.216 | 0.362 | 0.783 | 0.9430 | 0.6714 |
| SCNOC-Agentic | 0.198 | 0.354 | 0.742 | 0.9704 | 0.7714 |
| SCNOC+CoT5 | 0.238 | 0.394 | 0.805 | 0.9778 | 0.8190 |
| SCNOC+SC | 0.259 | 0.410 | 0.852 | 0.9865 | 0.8667 |
NTP = Network Task Planning; CCO = Carrier & Cell Optimization. The ablated modules (IR, WAG, PLM, GBR) are the four components introduced in Section 3.

| Model | NTP ASA | NTP PGA | NTP Ro | CCO ASA | CCO PGA | CCO E2EL (ms) |
|---|---|---|---|---|---|---|
| **Llama-3.3-70B-Instruct (default with zero-shot CoT)** | | | | | | |
| w/o IR | - | 0.2317 | 0.834 | - | 0.5082 (↓) | 330 (↓) |
| w/o WAG | 0.6629 | 0.2472 | 0.841 | 0.7991 | 0.5746 | 328 |
| w/o PLM | 0.6450 | 0.2191 | 0.825 | 0.7825 (↓) | 0.5284 | 322 |
| w/o GBR | 0.6912 | 0.2904 | 0.879 | 0.8029 | 0.6386 | 293 |
| SCNOC-Agentic | 0.7029 | 0.3216 | 0.887 | 0.8101 | 0.6575 | 321 |
| **Qwen-2.5-70B-Instruct (default with zero-shot CoT)** | | | | | | |
| w/o IR | - | 0.2241 | 0.849 | - | 0.5664 | 293 |
| w/o WAG | 0.6891 | 0.2598 | 0.831 | 0.7943 | 0.5341 (↓) | 346 (↓) |
| w/o PLM | 0.6639 | 0.2205 | 0.847 | 0.7879 (↓) | 0.5468 | 304 |
| w/o GBR | 0.7084 | 0.2745 | 0.862 | 0.8354 | 0.6802 (↑) | 284 |
| SCNOC-Agentic | 0.7185 | 0.2904 | 0.854 | 0.8575 | 0.6797 | 298 |
FA = Fault Analysis of Satellite; SMC = Satellite Management and Control.

| Model | FA Hit@1 | FA Hit@3 | FA Hit@5 | SMC ASA | SMC PGA |
|---|---|---|---|---|---|
| **Llama-3.3-70B-Instruct (default with zero-shot CoT)** | | | | | |
| w/o IR | 0.179 | 0.303 | 0.693 | - | 0.5809 (↓19.7%) |
| w/o WAG | 0.184 | 0.315 | 0.719 | 0.8721 (↓7.5%) | 0.6455 (↓10.8%) |
| w/o PLM | 0.177 | 0.297 | 0.707 | 0.8847 (↓6.4%) | 0.6109 (↓15.6%) |
| w/o GBR | 0.190 | 0.338 | 0.736 | 0.9343 (↓0.9%) | 0.6761 (↓6.6%) |
| SCNOC-Agentic | 0.185 | 0.341 | 0.755 | 0.9430 | 0.7238 |
| **Qwen-2.5-70B-Instruct (default with zero-shot CoT)** | | | | | |
| w/o IR | 0.180 | 0.301 | 0.710 | - | 0.6092 (↓21.0%) |
| w/o WAG | 0.187 | 0.318 | 0.706 | 0.9095 (↓6.2%) | 0.6761 (↓12.4%) |
| w/o PLM | 0.183 | 0.306 | 0.692 | 0.8908 (↓8.2%) | 0.6285 (↓18.5%) |
| w/o GBR | 0.195 | 0.320 | 0.714 | 0.9617 (↓0.8%) | 0.7238 (↓6.2%) |
| SCNOC-Agentic | 0.198 | 0.354 | 0.742 | 0.9704 | 0.7714 |
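The parenthesized percentages in the ablation tables appear to be relative drops with respect to the full SCNOC-Agentic row (e.g., 0.5809 versus 0.7238 gives roughly 19.7%). A small sanity check of that reading, using values from the Llama-3.3-70B-Instruct block above:

```python
# Sanity check (not from the paper's code): the parenthesized percentages in the ablation
# tables are consistent with relative drops versus the full SCNOC-Agentic configuration.
def relative_drop(full: float, ablated: float) -> float:
    """Relative degradation of an ablated variant versus the full framework, in percent."""
    return (full - ablated) / full * 100.0


# Llama-3.3-70B-Instruct, Satellite Management and Control:
print(f"w/o IR,  PGA: {relative_drop(0.7238, 0.5809):.1f}%")  # 19.7%, matching the table
print(f"w/o PLM, PGA: {relative_drop(0.7238, 0.6109):.1f}%")  # 15.6%
print(f"w/o WAG, ASA: {relative_drop(0.9430, 0.8721):.1f}%")  # 7.5%
```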
Citation: Sun, W.; Sun, C.; Zhang, Y.; Yin, Z.; Kang, Z. SCNOC-Agentic: A Network Operation and Control Agentic for Satellite Communication Systems. Electronics 2025, 14, 3320. https://doi.org/10.3390/electronics14163320