Electronics, Volume 14, Issue 13 (July-1 2025) – 240 articles

Cover Story: We present dielectric resonator-based microstrip filters (DRMFs) with innovative input–output coupling and tunable transmission/equalization zeros (4-2-0 and 4-0-2 configurations). Designed for space/radar applications, these filters achieve loaded QL > 3000 in the X-band—surpassing conventional microstrip filters (QL ≈ 200)—with S11 < −16.5 dB, flat S21, and >30 dB out-of-band rejection. Mechanical tuning enables precise control of coupling and frequency synthesis, while equalization zeros tailor group delay. DRMFs bridge the gap between dielectric cavity filters and microstrip technologies, offering a high-performance RF solution.
24 pages, 3218 KiB  
Article
An Efficient Malware Detection Method Using a Hybrid ResNet-Transformer Network and IGOA-Based Wrapper Feature Selection
by Ali Abbas Hafeth and Abdu Ibrahim Abdullahi
Electronics 2025, 14(13), 2741; https://doi.org/10.3390/electronics14132741 - 7 Jul 2025
Abstract
The growing sophistication of malware and other cyber threats presents significant challenges for detection and prevention in modern cybersecurity systems. In this paper, an efficient and novel malware classification model using the Hybrid ResNet-Transformer Network (HRT-Net) and the Improved Grasshopper Optimization Algorithm (IGOA) is proposed. Convolutional layers in the ResNet50 model effectively extract local features from malware patterns, while the Transformer focuses on long-range dependencies and complex patterns by leveraging multi-head attention. The extracted local and global features are concatenated to create a rich feature representation, enabling precise malware detection. The Improved Grasshopper Optimization Algorithm, with a dynamic mutation coefficient and dynamic inertia motion weights, is employed to select an optimal subset of features, reducing computational complexity and enhancing classification performance. Finally, an ensemble learning technique is used to robustly classify malware samples. Experimental evaluations on the Malimg dataset demonstrate the high efficiency of the proposed method, which achieves an impressive accuracy of 99.77%, exceeding other recent studies. Full article
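As a rough, self-contained illustration of the final ensemble step (not the authors' implementation; the base classifiers and the per-sample labels below are stand-ins), majority voting over per-classifier predictions can be sketched in Python as:

```python
from collections import Counter

def majority_vote(predictions):
    """Return, for each sample, the label most base classifiers agree on.

    predictions: list of per-classifier label lists, one prediction per
    sample. The classifiers here are hypothetical stand-ins for the
    paper's base learners.
    """
    n_samples = len(predictions[0])
    fused = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        fused.append(votes.most_common(1)[0][0])
    return fused

# Three hypothetical base classifiers voting on four malware samples
# (the labels are Malimg family names used purely for illustration).
clf_a = ["Adialer.C", "Agent.FYI", "Allaple.A", "Allaple.A"]
clf_b = ["Adialer.C", "Allaple.A", "Allaple.A", "Agent.FYI"]
clf_c = ["Agent.FYI", "Agent.FYI", "Allaple.A", "Allaple.A"]

print(majority_vote([clf_a, clf_b, clf_c]))
# ['Adialer.C', 'Agent.FYI', 'Allaple.A', 'Allaple.A']
```

In practice the votes would come from classifiers trained on the selected HRT-Net features; hard voting is only one of several ensemble schemes the term "ensemble learning" may cover.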

16 pages, 1103 KiB  
Article
A State Assessment Method for DC Protection Devices in Converter Station Based on Variable Weight Theory and Correlation Degree Analysis
by Qi Yang, Lei Liu, Zhuo Meng, Min Li, Zihan Zhao, Xiaopeng Li, Ke Wang, Xiangfei Yang, Qi Wang and Sheng Lin
Electronics 2025, 14(13), 2740; https://doi.org/10.3390/electronics14132740 - 7 Jul 2025
Abstract
In order to accurately grasp the operational state of DC protection devices in converter stations, a state assessment method based on variable weight theory and correlation degree analysis is proposed. Condition assessment indicators covering overhaul information, operation information, and defect information are constructed for the DC protection devices of converter stations. When the actual value of an indicator exceeds its specified range, the device is directly judged to be in the 'alarm' state; when the value lies within the specified range, the Analytic Hierarchy Process (AHP) is combined with variable weight theory to adjust the weights of the assessment indicators in real time. The correlation between each indicator and each state level is then calculated, and the indicator correlations are weighted and summed to obtain the comprehensive correlation of the device with each state level. Applying the maximum-correlation principle to these comprehensive correlations yields the device status. Example analyses show that the method is simple to implement and can accurately assess the operational state of DC protection devices in converter stations. Full article
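The weighted-sum and maximum-correlation steps described above can be sketched as follows; the indicator names, weights, and correlation degrees are illustrative placeholders, not values from the paper:

```python
def assess_state(correlations, weights):
    """Comprehensive correlation by weighted sum, then the
    maximum-correlation principle to pick the state level.

    correlations: {indicator: {state: correlation degree}}
    weights: {indicator: variable weight}, assumed already normalised.
    """
    states = next(iter(correlations.values())).keys()
    comprehensive = {
        s: sum(weights[i] * corr[s] for i, corr in correlations.items())
        for s in states
    }
    # Maximum-correlation principle: the state with the largest
    # comprehensive correlation is the assessed device status.
    best = max(comprehensive, key=comprehensive.get)
    return best, comprehensive

# Illustrative indicator groups and made-up correlation degrees.
corr = {
    "overhaul":  {"good": 0.7, "attention": 0.2, "abnormal": 0.1},
    "operation": {"good": 0.5, "attention": 0.4, "abnormal": 0.1},
    "defect":    {"good": 0.3, "attention": 0.5, "abnormal": 0.2},
}
w = {"overhaul": 0.5, "operation": 0.3, "defect": 0.2}

state, scores = assess_state(corr, w)
print(state)  # good
```

In the paper the weights themselves vary with the indicator values (variable weight theory); here they are fixed for brevity.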

18 pages, 1214 KiB  
Article
Modeling Threat Evolution in Smart Grid Near-Field Networks
by Jing Guo, Zhimin Gu, Chao Zhou, Wei Huang and Jinming Chen
Electronics 2025, 14(13), 2739; https://doi.org/10.3390/electronics14132739 - 7 Jul 2025
Abstract
In recent years, near-field networks have become a vital part of smart grids, raising growing concerns about their security. Studying threat evolution mechanisms is key to building proactive defense systems, while early identification of threats enhances prediction and precision. Unlike traditional networks, threat sources in power near-field networks are highly dynamic, influenced by physical environments, workflows, and device states. Existing models, designed for general network architectures, struggle to address the deep cyber-physical integration, device heterogeneity, and dynamic services of smart grids, especially regarding physical-layer impacts, cross-system interactions, and proprietary protocols. To overcome these limitations, this paper proposes a threat evolution framework tailored to smart grid near-field networks. A novel semi-physical simulation method is introduced, combining traditional Control Flow Graphs (CFGs) for open components with real-device interaction to capture closed-source logic and private protocols. This enables integrated cyber-physical modeling of threat evolution. Experiments in realistic simulation scenarios validate the framework’s accuracy in mapping threat propagation, evolution patterns, and impact, supporting comprehensive threat analysis and simulation. Full article
(This article belongs to the Topic Power System Protection)

32 pages, 12851 KiB  
Article
Research on Autonomous Vehicle Lane-Keeping and Navigation System Based on Deep Reinforcement Learning: From Simulation to Real-World Application
by Chia-Hsin Cheng, Hsiang-Hao Lin and Yu-Yong Luo
Electronics 2025, 14(13), 2738; https://doi.org/10.3390/electronics14132738 - 7 Jul 2025
Abstract
In recent years, the rapid development of science and technology and the substantial improvement of computing power have propelled a wide range of deep learning research. Many fields, such as computer vision, natural language processing, and medical imaging, have accelerated their development on this wave, and self-driving cars are no exception: most car manufacturers have already reached the L2 level of the self-driving classification standards and are moving towards L3 and L4. However, existing autonomous driving technologies still face significant challenges in achieving robust lane-keeping and navigation performance, especially when transferring learned models from simulation to real-world environments, due to environmental complexity and domain gaps. This study applies deep reinforcement learning (DRL) to train autonomous vehicles with lane-keeping and navigation capabilities. Through simulation training and Sim2Real strategies, including domain randomization and CycleGAN, the trained models are evaluated in real-world environments to validate performance. The results demonstrate the feasibility of DRL-based autonomous driving and highlight the challenges in transferring models from simulation to reality. Full article
(This article belongs to the Special Issue Autonomous and Connected Vehicles)

22 pages, 3393 KiB  
Article
Stochastic Operation of BESS and MVDC Link in Distribution Networks Under Uncertainty
by Changhee Han, Sungyoon Song and Jaehyeong Lee
Electronics 2025, 14(13), 2737; https://doi.org/10.3390/electronics14132737 - 7 Jul 2025
Abstract
This study introduces a stochastic optimization framework designed to effectively manage power flows in flexible medium-voltage DC (MVDC) link systems within distribution networks (DNs). The proposed approach operates in coordination with a battery energy storage system (BESS) to enhance the overall efficiency and reliability of the power distribution. Given the inherent uncertainty associated with forecasting errors in photovoltaic (PV) generation and load demand, the study employs a distributionally robust chance-constrained optimization technique to mitigate the potential operational risks. To achieve a cooperative and optimized control strategy for MVDC link systems and BESS, the proposed method incorporates a stochastic relaxation of the reliability constraints on bus voltages. By strategically adjusting the conservativeness of these constraints, the proposed framework seeks to maximize the cost-effectiveness of DN operations. The numerical simulations demonstrate that relaxing the strict reliability constraints enables the distribution system operator to optimize the electricity imports more economically, thereby improving the overall financial performance while maintaining system reliability. Through case studies, we show that the proposed method reduces the operational cost by up to 44.7% while maintaining 96.83% bus voltage reliability under PV and load power output uncertainty. Full article
(This article belongs to the Special Issue Advanced Control Techniques for Power Converter and Drives)

16 pages, 1037 KiB  
Article
Generative Learning from Semantically Confused Label Distribution via Auto-Encoding Variational Bayes
by Xinhai Li, Chenxu Meng, Heng Zhou, Yi Guo, Bowen Xue, Tianzuo Yu and Yunan Lu
Electronics 2025, 14(13), 2736; https://doi.org/10.3390/electronics14132736 - 7 Jul 2025
Abstract
Label Distribution Learning (LDL) has emerged as a powerful paradigm for addressing label ambiguity, offering a more nuanced quantification of the instance–label relationship compared to traditional single-label and multi-label learning approaches. This paper focuses on the challenge of noisy label distributions, which are ubiquitous in real-world applications due to annotator subjectivity, algorithmic biases, and experimental errors. Existing LDL algorithms often model a noisy label distribution as a linear combination of the true and a random label distribution, an oversimplification that fails to capture how noisy label distributions are actually generated. This paper therefore assumes that the noise in label distributions primarily arises from semantic confusion between labels and proposes a novel generative label distribution learning algorithm to model the confusion-based generation process of both the feature data and the noisy label distribution data. The proposed model is inferred using variational methods, and its effectiveness is demonstrated through extensive experiments across various real-world datasets, showcasing its superiority in handling noisy label distributions. Full article
(This article belongs to the Special Issue Neural Networks: From Software to Hardware)

28 pages, 1987 KiB  
Article
LLM-as-a-Judge Approaches as Proxies for Mathematical Coherence in Narrative Extraction
by Brian Keith
Electronics 2025, 14(13), 2735; https://doi.org/10.3390/electronics14132735 - 7 Jul 2025
Abstract
Evaluating the coherence of narrative sequences extracted from large document collections is crucial for applications in information retrieval and knowledge discovery. While mathematical coherence metrics based on embedding similarities provide objective measures, they require substantial computational resources and domain expertise to interpret. We propose using large language models (LLMs) as judges to evaluate narrative coherence, demonstrating that their assessments correlate with mathematical coherence metrics. Through experiments on two data sets—news articles about Cuban protests and scientific papers from visualization conferences—we show that the LLM judges achieve Pearson correlations up to 0.65 with mathematical coherence while maintaining high inter-rater reliability (ICC > 0.92). The simplest evaluation approach achieves performance comparable to the more complex approaches, even outperforming them on focused data sets and retaining over 90% of their performance on the more diverse data sets, while using fewer computational resources. Our findings indicate that LLM-as-a-judge approaches are effective as a proxy for mathematical coherence in the context of narrative extraction evaluation. Full article
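The agreement measure reported above is an ordinary Pearson correlation between the two score series, which can be computed directly; the coherence scores and LLM ratings below are hypothetical, not the paper's data:

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: embedding-based coherence of five extracted
# narratives vs. 1-5 coherence ratings from an LLM judge.
math_coherence = [0.82, 0.41, 0.65, 0.30, 0.74]
llm_ratings = [5, 2, 4, 2, 4]

r = pearson(math_coherence, llm_ratings)
print(round(r, 2))  # 0.97
```

A correlation this high would indicate the LLM judge tracks the mathematical metric closely; the paper reports correlations up to 0.65 on real data.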

21 pages, 2797 KiB  
Article
Model-Driven Meta-Learning-Aided Fast Beam Prediction in Millimeter-Wave Communications
by Wenqin Lu, Xueqin Jiang, Yuwen Cao, Tomoaki Ohtsuki and Enjian Bai
Electronics 2025, 14(13), 2734; https://doi.org/10.3390/electronics14132734 - 7 Jul 2025
Abstract
Beamforming plays a key role in improving the spectrum utilization efficiency of multi-antenna systems. However, we observe that (i) conventional beam prediction solutions suffer from high model training overhead and computational latency and thus cannot adapt quickly to changing wireless environments, and (ii) deep-learning-based beamforming risks catastrophic forgetting in dynamically changing environments, which can significantly degrade system performance. Motivated by these challenges, we propose a continual-learning-inspired beam prediction model for fast beamforming adaptation in dynamic downlink millimeter-wave (mmWave) communications. More specifically, we develop a meta-experience replay (MER)-based beam prediction model that combines experience replay with optimization-based meta-learning. This approach optimizes the trade-offs between transmission and interference in dynamic environments, enabling effective fast beamforming adaptation. Finally, the performance gains of the proposed model in dynamic communication environments are verified through simulations. The simulation results show that the proposed model not only retains strong performance on old tasks but also adapts quickly to new tasks. Full article

28 pages, 113310 KiB  
Article
Optimising Wi-Fi HaLow Connectivity: A Framework for Variable Environmental and Application Demands
by Karen Hargreave, Vicky Liu and Luke Kane
Electronics 2025, 14(13), 2733; https://doi.org/10.3390/electronics14132733 - 7 Jul 2025
Abstract
As the number of IoT (Internet of Things) devices continues to grow at an exceptional rate, so does the variety of use cases and operating environments. IoT now plays a crucial role in areas including smart cities, medicine, and smart agriculture, where environments range from built environments to forests, paddocks, and many more. This research examines how Wi-Fi HaLow can be optimised to support these varying environments and a wide variety of applications. From data gathered in performance evaluation testing across varying environments, a framework has been developed. The framework takes inputs describing the operating environment and application and produces configuration recommendations covering the ideal channel width, MCS (Modulation and Coding Scheme), GI (Guard Interval), antenna selection, and distance between communicating devices, so as to provide optimal performance for the given use case. The application of the framework is then demonstrated in three distinct scenarios. This research demonstrates that, through the configuration of a number of parameters, Wi-Fi HaLow is a versatile network technology able to support a broad range of IoT use cases. Full article
(This article belongs to the Special Issue Network Architectures for IoT and Cyber-Physical Systems)
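The framework's input-to-recommendation mapping can be pictured as a decision table; the rules and parameter values below are invented placeholders, since the actual recommendations come from the authors' measurements:

```python
def recommend_halow_config(environment, throughput_priority):
    """Toy decision table in the spirit of the framework described above.

    The mapping is illustrative only: Wi-Fi HaLow (802.11ah) does offer
    1-8 MHz channels, MCS 0-7 (and above), and short/long guard
    intervals, but which combination suits which environment is exactly
    what the paper's testing determines, not this sketch.
    """
    if environment == "open" and throughput_priority:
        # Clear line of sight: wide channel, high MCS, short GI.
        return {"channel_width_mhz": 8, "mcs": 7, "guard_interval": "short"}
    if environment == "open":
        return {"channel_width_mhz": 4, "mcs": 4, "guard_interval": "short"}
    # Obstructed settings (forest, built environment): trade throughput
    # for robustness with a narrow channel, low MCS, and a long GI.
    return {"channel_width_mhz": 1, "mcs": 0, "guard_interval": "long"}

print(recommend_halow_config("forest", throughput_priority=False))
# {'channel_width_mhz': 1, 'mcs': 0, 'guard_interval': 'long'}
```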

25 pages, 878 KiB  
Article
AI-Powered Gamified Scaffolding: Transforming Learning in Virtual Learning Environment
by Xuemei Jiang, Rui Wang, Thuong Hoang, Chathurika Ranaweera, Chengzu Dong and Trina Myers
Electronics 2025, 14(13), 2732; https://doi.org/10.3390/electronics14132732 - 7 Jul 2025
Abstract
Gamification has the potential to significantly enhance student engagement and motivation in educational contexts. However, there is a lack of empirical research that compares different guiding strategies between AI-driven gamified and non-gamified modes in virtual learning environments to scaffold language learning. This paper presents an empirical study that examines the impact of AI-driven gamification and learning strategies on the learning experience and outcomes in virtual environments for English-language learners. A gamified English learning prototype was designed and developed. A between-group experiment was established to compare different gamified scaffolding groups: a traditional linear group (storytelling), an AI-driven gamified linear group (task-based learning), and a gamified exploration group (self-regulated learning). One hundred students learning English as a second language participated in this study, and their learning conditions were evaluated across three dimensions: engagement, performance, and experience. The results suggest that traditional learning methods may not be as effective as the other two approaches; there may be other factors beyond in-game interaction and engagement time that influence learning and engagement. Moreover, the results show that different gamified learning modes are not the key factor affecting language learning. The research presents guidelines that can be applied when gamification and AI are utilised in virtual learning environments. Full article

17 pages, 4763 KiB  
Article
Multi-Band Terahertz Metamaterial Absorber Integrated with Microfluidics and Its Potential Application in Volatile Organic Compound Sensing
by Liang Wang, Bo Zhang, Xiangrui Dong, Qi Lu, Hao Shen, Yi Ni, Yuechen Liu and Haitao Song
Electronics 2025, 14(13), 2731; https://doi.org/10.3390/electronics14132731 - 7 Jul 2025
Abstract
In this study, a terahertz microfluidic multi-band sensor was designed. Unlike previous microfluidic absorption sensors that rely on dipole resonance, the proposed sensor uses a physical mechanism for absorption by exciting higher-order lattice resonances in microfluidic structures. With a Fabry–Perot cavity, the sensor can form an absorption peak with a high quality factor (Q) and narrow full width at half maximum (FWHM). A high Q value and a narrow FWHM are valuable in the field of sensing and provide strong support for high-precision sensing. On this basis, the sensing performance of the device was investigated. The simulation results clearly show that the absorption sensor has ultra-high sensitivity, which reaches 400 GHz/Refractive Index Unit (RIU). In addition, the sensor generates three absorption peaks, overcoming the limitations of a single frequency band in a composite resonance mode and multidimensional frequency response, which has potential application value in the field of volatile organic compound (VOC) sensing. Full article

24 pages, 3241 KiB  
Article
An Advanced Indoor Localization Method Based on xLSTM and Residual Multimodal Fusion of UWB/IMU Data
by Haoyang Wang, Jiaxing He and Lizhen Cui
Electronics 2025, 14(13), 2730; https://doi.org/10.3390/electronics14132730 - 7 Jul 2025
Abstract
To address the limitations of single-modality UWB/IMU systems in complex indoor environments, this study proposes a multimodal fusion localization method based on xLSTM. After extracting features from UWB and IMU data, the xLSTM network enables deep temporal feature learning. A three-stage residual fusion module is introduced to enhance cross-modal complementarity, while a multi-head attention mechanism dynamically adjusts the sensor weights. The end-to-end trained network effectively constructs nonlinear multimodal mappings for two-dimensional position estimation under both static and dynamic non-line-of-sight (NLOS) conditions with human-induced interference. Experimental results demonstrate that the localization errors reach 0.181 m under static NLOS and 0.187 m under dynamic NLOS, substantially outperforming traditional filtering-based approaches. The proposed deep fusion framework significantly improves localization reliability under occlusion and offers an innovative solution for high-precision indoor positioning. Full article
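The attention-driven dynamic weighting of the two sensor modalities can be sketched as a softmax-weighted fusion; the feature vectors and relevance scores below are illustrative, and in the paper the weights would come from a learned multi-head attention module rather than fixed numbers:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(uwb_feat, imu_feat, uwb_score, imu_score):
    """Blend UWB and IMU feature vectors by softmaxed relevance scores."""
    w_uwb, w_imu = softmax([uwb_score, imu_score])
    return [w_uwb * u + w_imu * i for u, i in zip(uwb_feat, imu_feat)]

# Under NLOS the UWB relevance score might drop, shifting weight
# toward the IMU features (scores here are made up for illustration).
fused = fuse([1.0, 2.0], [3.0, 4.0], uwb_score=0.2, imu_score=1.0)
print([round(v, 2) for v in fused])  # [2.38, 3.38]
```

The design intuition is that the network can discount whichever sensor is currently unreliable instead of trusting both equally.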

14 pages, 2907 KiB  
Article
Switching Noise Harmonic Reduction for EMI Improvement Through Rising and Falling Time Control Using Gate Resistance
by Jeonghyeon Cheon and Dongwook Kim
Electronics 2025, 14(13), 2729; https://doi.org/10.3390/electronics14132729 - 7 Jul 2025
Abstract
Electromagnetic interference (EMI) has become a significant issue as electronic devices become more integrated and achieve higher performance. To operate at high performance in an integrated system, a high-frequency clock signal is essential for processing speed. However, the harmonic components of the clock or gate signal are a major EMI source that can cause peripheral devices to malfunction and affect their stability and reliability. In this paper, a harmonic analysis of the MOSFET gate signal as a function of gate resistance is conducted. Theoretical analysis based on Fourier series expansion shows that the harmonic content is determined by the rising and falling times of the gate signal, which in turn depend on the gate resistance. Simulation and measurement are conducted on a buck converter as a practical application. The theoretical analysis is validated by simulation, and the experimental results demonstrate that the harmonic magnitudes are reduced because increasing the gate resistance extends the rising and falling times. Full article
(This article belongs to the Section Electrical and Autonomous Vehicles)
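The dependence of harmonic magnitude on edge rate follows the standard EMC envelope formula for a trapezoidal pulse train with equal rise and fall times. A quick numerical check (with illustrative converter values, not the paper's) that slower edges shrink a high-order harmonic:

```python
import math

def sinc(x):
    """Unnormalised helper: sin(pi*x) / (pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def harmonic_amplitude(n, amplitude, duty, t_rise, period):
    """Magnitude of the nth harmonic of a trapezoidal pulse train.

    Standard formula |A_n| = 2*A*d * |sinc(n*d)| * |sinc(n*t_r/T)|
    for equal rise/fall times t_r, duty cycle d, and period T.
    """
    return (2 * amplitude * duty
            * abs(sinc(n * duty))
            * abs(sinc(n * t_rise / period)))

T = 10e-6  # 100 kHz switching period (illustrative)
# 41st harmonic (4.1 MHz) of a 12 V gate signal with 20 ns vs 200 ns edges.
fast = harmonic_amplitude(41, amplitude=12, duty=0.5, t_rise=20e-9, period=T)
slow = harmonic_amplitude(41, amplitude=12, duty=0.5, t_rise=200e-9, period=T)

print(slow < fast)  # True: slower edges -> smaller high-order harmonic
```

This is the mechanism the paper exploits: a larger gate resistance slows the edges, pulling down the high-frequency harmonic envelope.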

11 pages, 989 KiB  
Article
Contrastive Learning with Feature-Level Augmentation for Wireless Signal Representation
by Shiyuan Mu, Shuai Chen, Yong Zu, Zhixi Feng and Shuyuan Yang
Electronics 2025, 14(13), 2728; https://doi.org/10.3390/electronics14132728 - 7 Jul 2025
Abstract
The application of self-supervised learning (SSL) is increasingly imperative for advancing wireless communication technologies, particularly in scenarios with limited labeled data. Traditional data-augmentation-based SSL methods have struggled to accurately capture the intricate properties of wireless signals. This letter introduces a novel self-supervised learning framework that leverages feature-level augmentation combined with contrastive learning to enhance wireless signal recognition. Extensive experiments conducted in various environments demonstrate that the proposed method achieves improvements of more than 2.56% over the existing supervised learning (SL) methods and SSL methods on the RadioML2016.10a and ADS-B datasets. Moreover, the experimental results show that the proposed SSL pre-training strategy improves performance by 4.67% compared to supervised approaches. These results validate that the proposed method offers stronger generalization capabilities and superior performance when handling different types of wireless signal tasks. Full article
(This article belongs to the Section Artificial Intelligence)

32 pages, 435 KiB  
Review
Analysis of Data Privacy Breaches Using Deep Learning in Cloud Environments: A Review
by Abdulqawi Mohammed Almosti and M. M. Hafizur Rahman
Electronics 2025, 14(13), 2727; https://doi.org/10.3390/electronics14132727 - 7 Jul 2025
Abstract
Despite the advantages of cloud computing, data breaches and security challenges remain, especially when dealing with sensitive information. Integrating deep learning (DL) techniques into cloud environments can support privacy preservation. This review analyzes 38 papers published from 2020 to 2025, focusing on privacy-preserving techniques in DL for cloud environments. Combining multiple privacy-preservation technologies with DL yields better privacy protection and stronger defense against data breaches than applying any single technique, such as differential privacy, homomorphic encryption, or federated learning, on its own. Further, a discussion is provided of the technical limitations of applying DL with the various privacy-preservation techniques, which include large communication overhead, lower model accuracy, and high computational cost. Additionally, this review presents the latest research in a comprehensive manner and provides directions for the future research needed to develop privacy-preserving DL models. Full article
(This article belongs to the Special Issue Security and Privacy for AI)

18 pages, 3941 KiB  
Article
Method of Collaborative UAV Deployment: Carrier-Assisted Localization with Low-Resource Precision Touchdown
by Krzysztof Kaliszuk, Artur Kierzkowski and Bartłomiej Dziewoński
Electronics 2025, 14(13), 2726; https://doi.org/10.3390/electronics14132726 - 7 Jul 2025
Abstract
This study presents a cooperative unmanned aerial system (UAS) designed to enable precise autonomous landings in unstructured environments using low-cost onboard vision technology. This approach involves a carrier UAV with a stabilized RGB camera and a neural inference system, as well as a lightweight tailsitter payload UAV with an embedded grayscale vision module. The system relies on visually recognizable landing markers and does not require additional sensors. Field trials comprising full deployments achieved an 80% success rate in autonomous landings, with vertical touchdown occurring within a 1.5 m radius of the target. These results confirm that vision-based marker detection using compact neural models can effectively support autonomous UAV operations in constrained conditions. This architecture offers a scalable alternative to the high complexity of SLAM or terrain-mapping systems. Full article
(This article belongs to the Special Issue Unmanned Aircraft Systems with Autonomous Navigation, 2nd Edition)

22 pages, 696 KiB  
Article
Domain Knowledge-Driven Method for Threat Source Detection and Localization in the Power Internet of Things
by Zhimin Gu, Jing Guo, Jiangtao Xu, Yunxiao Sun and Wei Liang
Electronics 2025, 14(13), 2725; https://doi.org/10.3390/electronics14132725 - 7 Jul 2025
Viewed by 292
Abstract
Although the Power Internet of Things (PIoT) significantly improves operational efficiency by enabling real-time monitoring, intelligent control, and predictive maintenance across the grid, its inherently open and deeply interconnected cyber-physical architecture introduces increasingly complex and severe security threats. Existing IoT security solutions are not fully adapted to the specific requirements of power systems, such as safety-critical reliability, protocol heterogeneity, physical/electrical context awareness, and the domain-specific operational knowledge unique to the power sector. These limitations often lead to high rates of false positives (flagging normal operations as malicious) and false negatives (failing to detect actual intrusions), ultimately compromising system stability and the security response. To address these challenges, we propose a domain knowledge-driven threat source detection and localization method for the PIoT. The proposed method combines multi-source features (electrical-layer measurements, network-layer metrics, and behavioral-layer logs) into a unified representation through a multi-level PIoT feature engineering framework. Building on advances in multimodal data integration and feature fusion, our framework employs a hybrid neural architecture that combines a TabTransformer, which models structured physical and network-layer features, with a BiLSTM, which captures temporal dependencies in behavioral log sequences. This design enables comprehensive threat detection while supporting interpretable, fine-grained source localization. Experiments on a real-world PIoT dataset demonstrate that the proposed method achieves high detection accuracy and enables actionable attribution of attack stages aligned with the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework. The proposed approach offers a scalable and domain-adaptable foundation for security analytics in cyber-physical power systems. Full article

23 pages, 728 KiB  
Article
BASK: Backdoor Attack for Self-Supervised Encoders with Knowledge Distillation Survivability
by Yihong Zhang, Guojia Li, Yihui Zhang, Yan Cao, Mingyue Cao and Chengyao Xue
Electronics 2025, 14(13), 2724; https://doi.org/10.3390/electronics14132724 - 6 Jul 2025
Viewed by 254
Abstract
Backdoor attacks in self-supervised learning pose an increasing threat. Recent studies have shown that knowledge distillation can mitigate these attacks by altering feature representations. In response, we propose BASK, a novel backdoor attack that remains effective after distillation. BASK uses feature weighting and representation alignment strategies to implant persistent backdoors into the encoder’s feature space. This enables transferability to student models. We evaluated BASK on the CIFAR-10 and STL-10 datasets and compared it with existing self-supervised backdoor attacks under four advanced defenses: SEED, MKD, Neural Cleanse, and MiMiC. Our experimental results demonstrate that BASK maintains high attack success rates while preserving downstream task performance. This highlights the robustness of BASK and the limitations of current defense mechanisms. Full article
(This article belongs to the Special Issue Advancements in AI-Driven Cybersecurity and Securing AI Systems)

17 pages, 2461 KiB  
Article
A Throughput Analysis of C+L-Band Optical Networks: A Comparison Between the Use of Band-Dedicated and Single-Wideband Amplification
by Tomás Maia and João Pires
Electronics 2025, 14(13), 2723; https://doi.org/10.3390/electronics14132723 - 6 Jul 2025
Viewed by 233
Abstract
Optical networks today constitute the fundamental backbone infrastructure of telecom and cloud operators. A possible medium-term solution to the enormous increase in traffic demands faced by these operators is to rely on the Super C+L transmission optical bands, which can offer a bandwidth of about 12 THz. In this paper, we propose a methodology to compute the throughput of an optical network based on this solution. The methodology involves detailed physical-layer modeling, including the impact of stimulated Raman scattering, which is responsible for energy transfer between the two bands. Two approaches are implemented for throughput evaluation: one assuming idealized Gaussian-modulated signals and the other using real modulation formats. For designing such networks, it is crucial to choose the most appropriate technological solution for optical amplification: either a band-dedicated scheme, which uses a separate amplifier for each of the two bands, or a single-wideband amplifier capable of amplifying both bands simultaneously. The simulation results show that the single-wideband scheme provides an average throughput improvement of about 18% over the dedicated scheme under the Gaussian modulation approach. With the real modulation approach, the improvement rises to about 32%, highlighting the benefit of developing single-wideband amplifiers for future applications in Super C+L-band networks. Full article
(This article belongs to the Special Issue Optical Networking and Computing)

20 pages, 7451 KiB  
Article
Research on Circulating-Current Suppression Strategy of MMC Based on Passivity-Based Integral Sliding Mode Control for Multiphase Wind Power Grid-Connected Systems
by Wei Zhang, Jianying Li, Mai Zhang, Xiuhai Yang and Dingai Zhong
Electronics 2025, 14(13), 2722; https://doi.org/10.3390/electronics14132722 - 5 Jul 2025
Viewed by 246
Abstract
To deal with the interphase circulating-current problem of modular multilevel converters (MMCs) in multiphase wind power systems, a cooperative circulating-current suppression strategy based on a second-order generalized integrator (SOGI) and passivity-based control–integral sliding mode control (PBC-ISMC) is proposed in this paper. Firstly, a multiphase permanent-magnet direct-drive wind power system topology without a step-up transformer is established. On this basis, the SOGI is used to construct a circulating-current extractor that accurately extracts the double-frequency component of the circulating current while effectively filtering out DC components and high-frequency noise. Secondly, passivity-based control (PBC), with its fast energy dissipation, and integral sliding mode control (ISMC), with its strong robustness, are combined to construct the PBC-ISMC circulating-current suppressor, which realizes nonlinear decoupling and dynamic immunity of the circulating-current model. Finally, simulation results demonstrate that the proposed strategy significantly reduces the harmonic content of the circulating current, optimizes both the bridge-arm current and the output current, and achieves superior suppression performance and dynamic response compared with traditional methods, thereby effectively enhancing system power quality and operational reliability. Full article

13 pages, 3046 KiB  
Article
Stability Analysis of Non-Foster Impedance Inverters
by Boris Okorn and Silvio Hrabar
Electronics 2025, 14(13), 2721; https://doi.org/10.3390/electronics14132721 - 5 Jul 2025
Viewed by 256
Abstract
Recently, active impedance inverters based on non-Foster negative capacitors have been proposed for applications in widely tunable filters. These designs use the traditional Linvill topology of the negative capacitor. Unfortunately, the range of external loads for which such active inverters operate stably is rather limited. An alternative negative capacitor, based on a recently proposed loss-compensated passive structure, promises stability-robust behavior over an extremely wide range of external loads. In this study, we compare the stability properties of both approaches and show that the design based on the loss-compensated passive structure is more robust. Full article
(This article belongs to the Section Microwave and Wireless Communications)

18 pages, 4805 KiB  
Article
Re-Usable Workflow for Collecting and Analyzing Open Data of Valenbisi
by Áron Magura, Marianna Zichar and Róbert Tóth
Electronics 2025, 14(13), 2720; https://doi.org/10.3390/electronics14132720 - 5 Jul 2025
Viewed by 338
Abstract
This paper proposes a general workflow for collecting and analyzing open data from Bicycle Sharing Systems (BSSs) that was developed using data from the Valenbisi system, operated in Valencia by the French company JCDecaux; however, the stages of the proposed workflow are service-independent and can be applied broadly. Cycling has become an increasingly popular mode of transportation, leading to the emergence of BSSs in modern cities. Parallel to this, Smart City solutions have been implemented using Internet of Things (IoT) technologies, such as embedded sensors and GPS-based communication systems, which have become essential to everyday life. When public transportation services or bicycle sharing systems are used, real-time information about the services is provided to customers, including vehicle tracking based on GPS technology and the availability of bikes via sensors installed at bike rental stations. The bike stations were examined from two different perspectives: first, their daily usage, and second, the types of facilities located in their surroundings. Based on these two approaches, the overlap between the clustering results was analyzed—specifically, the similarity in how stations could be grouped and the correlation between their usage and locations. To enhance the raw data retrieved from the service provider’s official API, the stations were annotated based on OpenStreetMap and Overpass API data. Data visualization was created using Tableau from Salesforce. Based on the results, an agreement of 62% was found between the results of the two different clustering approaches. Full article
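The reported 62% agreement between the usage-based and facility-based clusterings can be quantified with a pairwise (Rand-index-style) overlap measure: the fraction of station pairs that both clusterings treat the same way. The abstract does not specify the paper's exact metric, so the following is only one plausible sketch in Python:

```python
from itertools import combinations

def pairwise_agreement(labels_a, labels_b):
    """Rand-index-style agreement: the fraction of item pairs that the
    two clusterings treat the same way (both together or both apart)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    same = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return same / len(pairs)

# Hypothetical cluster labels for six stations under each approach.
usage_clusters = [0, 0, 1, 1, 2, 2]
facility_clusters = [0, 0, 1, 2, 2, 2]
score = pairwise_agreement(usage_clusters, facility_clusters)
```

Identical labelings score 1.0; independent labelings drift toward chance-level agreement.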

31 pages, 3939 KiB  
Article
Effective 8T Reconfigurable SRAM for Data Integrity and Versatile In-Memory Computing-Based AI Acceleration
by Sreeja S. Kumar and Jagadish Nayak
Electronics 2025, 14(13), 2719; https://doi.org/10.3390/electronics14132719 - 5 Jul 2025
Viewed by 483
Abstract
For data-intensive applications like edge AI and image processing, we present a new reconfigurable 8T SRAM-based in-memory computing (IMC) macro designed for high-performance, energy-efficient operation. This architecture mitigates von Neumann limitations through several major innovations. We built a new architecture with an adjustable capacitance array that substantially increases the accuracy of the multiply-and-accumulate (MAC) engine: it achieves 10–20 TOPS/W and >95% accuracy for 4–10-bit operations and is robust across process, voltage, and temperature (PVT) variations. A dual-mode inference engine further expands capabilities by supporting binary and ternary neural networks (BNNs/TNNs) with XNOR-and-accumulate logic; with sub-5 ns mode switching, it achieves up to 30 TOPS/W efficiency and >97% accuracy. In-memory Hamming error correction is implemented directly using integrated XOR circuitry, eliminating off-chip ECC while providing >99% error correction and >98% MAC accuracy. Machine learning-aided co-optimization ensures sense-amplifier dependability. To ensure CMOS compatibility, the macro can also perform Boolean logic operations using standard 8T SRAM cells. Comparative circuit-level simulations show a 31.54% energy-efficiency improvement and a 74.81% delay reduction over other SRAM-based IMC solutions. These improvements make our macro well suited for real-time AI acceleration, cryptography, and next-generation edge computing, enabling advanced compute-in-memory systems. Full article
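The in-memory Hamming error correction mentioned above is attractive precisely because it needs only XOR operations, which map directly onto XOR circuitry integrated with the SRAM array. As an illustration of the principle (not the macro's actual circuit), a Hamming(7,4) encode/correct round trip can be written entirely with XORs:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Parity bits p1, p2, p3 sit at positions 1, 2, 4 and are XORs of
    the data-bit subsets they cover."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the 3-bit syndrome is the 1-based
    position of a single flipped bit (0 means no error). Returns the
    corrected 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Because encode, syndrome computation, and correction are all XOR trees, the scheme needs no off-array arithmetic, which is what makes in-array ECC feasible.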

19 pages, 1891 KiB  
Article
Comparative Study on Energy Consumption of Neural Networks by Scaling of Weight-Memory Energy Versus Computing Energy for Implementing Low-Power Edge Intelligence
by Ilpyung Yoon, Jihwan Mun and Kyeong-Sik Min
Electronics 2025, 14(13), 2718; https://doi.org/10.3390/electronics14132718 - 5 Jul 2025
Viewed by 437
Abstract
Energy consumption has emerged as a critical design constraint in deploying high-performance neural networks, especially on edge devices with limited power resources. In this paper, a comparative study is conducted for two prevalent deep learning paradigms—convolutional neural networks (CNNs), exemplified by ResNet18, and transformer-based large language models (LLMs), represented by GPT3-small, Llama-7B, and GPT3-175B. By analyzing how the scaling of memory energy versus computing energy affects the energy consumption of neural networks with different batch sizes (1, 4, 8, 16), it is shown that ResNet18 transitions from a memory energy-limited regime at low batch sizes to a computing energy-limited regime at higher batch sizes due to its extensive convolution operations. On the other hand, GPT-like models remain predominantly memory-bound, with large parameter tensors and frequent key–value (KV) cache lookups accounting for most of the total energy usage. Our results reveal that reducing weight-memory energy is particularly effective in transformer architectures, while improving multiply–accumulate (MAC) efficiency significantly benefits CNNs at higher workloads. We further highlight near-memory and in-memory computing approaches as promising strategies to lower data-transfer costs and enhance power efficiency in large-scale deployments. These findings offer actionable insights for architects and system designers aiming to optimize artificial intelligence (AI) performance under stringent energy budgets on battery-powered edge devices. Full article
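The memory-bound versus compute-bound transition described above can be illustrated with a first-order energy model: weight-fetch energy is paid once per batch and amortizes over its samples, while MAC energy scales with every sample. All coefficients and model sizes below are illustrative placeholders, not the paper's measured values:

```python
# Toy first-order energy model (illustrative units, not measured data).
def energy_breakdown(n_params, macs_per_sample, batch,
                     e_mem_per_param=1.0, e_mac=0.001):
    """Per-sample energy split: one weight fetch per batch (amortized)
    plus MAC energy that scales with every sample."""
    memory = n_params * e_mem_per_param / batch
    compute = macs_per_sample * e_mac
    return {"memory": memory, "compute": compute}

# ResNet18-like CNN: ~11M parameters but ~1.8G MACs per image.
cnn_b1 = energy_breakdown(11e6, 1.8e9, batch=1)
cnn_b16 = energy_breakdown(11e6, 1.8e9, batch=16)
# GPT-like decoding: parameter reads dominate; MACs per token ~ 2x params.
llm_b16 = energy_breakdown(125e6, 2.5e8, batch=16)
```

With these placeholder numbers the CNN flips from memory-dominated at batch 1 to compute-dominated at batch 16, while the transformer stays memory-dominated at every batch size, mirroring the trend the abstract reports.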

49 pages, 1388 KiB  
Review
Evaluating Trustworthiness in AI: Risks, Metrics, and Applications Across Industries
by Aleksandra Nastoska, Bojana Jancheska, Maryan Rizinski and Dimitar Trajanov
Electronics 2025, 14(13), 2717; https://doi.org/10.3390/electronics14132717 - 4 Jul 2025
Viewed by 684
Abstract
Ensuring the trustworthiness of artificial intelligence (AI) systems is critical as they become increasingly integrated into domains like healthcare, finance, and public administration. This paper explores frameworks and metrics for evaluating AI trustworthiness, focusing on key principles such as fairness, transparency, privacy, and security. This study is guided by two central questions: how can trust in AI systems be systematically measured across the AI lifecycle, and what are the trade-offs involved when optimizing for different trustworthiness dimensions? By examining frameworks such as the NIST AI Risk Management Framework (AI RMF), the AI Trust Framework and Maturity Model (AI-TMM), and ISO/IEC standards, this study bridges theoretical insights with practical applications. We identify major risks across the AI lifecycle stages and outline various metrics to address challenges in system reliability, bias mitigation, and model explainability. This study includes a comparative analysis of existing standards and their application across industries to illustrate their effectiveness. Real-world case studies, including applications in healthcare, financial services, and autonomous systems, demonstrate approaches to applying trust metrics. The findings reveal that achieving trustworthiness involves navigating trade-offs between competing metrics, such as fairness versus efficiency or privacy versus transparency, and they emphasize the importance of interdisciplinary collaboration for robust AI governance. Emerging trends suggest the need for adaptive trustworthiness frameworks that evolve alongside advancements in AI technologies. This paper contributes to the field with a comprehensive review of existing frameworks and guidelines for building resilient, ethical, and transparent AI systems, ensuring their alignment with regulatory requirements and societal expectations. Full article

16 pages, 3211 KiB  
Article
Exploiting a Deformable and Dilated Feature Fusion Module for Object Detection
by Xiaoxia Qi, Md Gapar Md Johar, Ali Khatibi and Jacquline Tham
Electronics 2025, 14(13), 2716; https://doi.org/10.3390/electronics14132716 - 4 Jul 2025
Viewed by 296
Abstract
We propose the Deformable and Dilated Feature Fusion Module (D2FM) in this paper to enhance the adaptability and flexibility of feature extraction in object detection tasks. Unlike traditional convolutions and Deformable Convolutional Networks (DCNs), D2FM dynamically predicts dilation coefficients and additionally predicts spatial offsets from the features at the dilated positions, allowing it to better capture multi-scale and context-dependent patterns. Furthermore, a self-attention mechanism is introduced to fuse geometry-aware and enhanced local features. To integrate D2FM into detection frameworks efficiently, we design the D2FM-HierarchyEncoder, which employs hierarchical channel reduction and depth-dependent stacking of D2FM blocks, balancing representational capability and computational cost. We apply our design to the YOLOv11 detector, forming the D2YOLOv11 model. On the COCO 2017 dataset, our method achieves 47.9 AP with the YOLOv11s backbone network, a 1.0 AP improvement over the baseline YOLOv11 approach. Full article

12 pages, 501 KiB  
Article
An Overview of GEO Satellite Communication Simulation Systems
by Shaoyang Li, Yanli Qi and Kezhen Song
Electronics 2025, 14(13), 2715; https://doi.org/10.3390/electronics14132715 - 4 Jul 2025
Viewed by 282
Abstract
Geostationary Earth orbit (GEO) satellite communication systems have become increasingly significant in global communication networks and national strategic infrastructure, owing to their advantages of extensive coverage, high capacity, and robust reliability. Constructing accurate and reliable simulation systems is essential to support the design, evaluation, and optimization of GEO satellite communication systems. This article first reviews the current developments and application prospects of GEO satellite communication systems and highlights the critical role of simulation technologies in system design and performance assessment. Subsequently, a systematic analysis is conducted on two core modules of simulation systems, i.e., coverage analysis and resource management and scheduling. Moreover, this article provides a comprehensive comparison and evaluation of mainstream satellite communication simulation platforms and tools. This review aims to offer valuable insights and guidance for future research and applications in GEO satellite communication simulation, thereby promoting technological innovation and advancement in related fields. Full article

15 pages, 1529 KiB  
Article
Peak Age of Information Optimization in Cell-Free Massive Random Access Networks
by Zhiru Zhao, Yuankang Huang and Wen Zhan
Electronics 2025, 14(13), 2714; https://doi.org/10.3390/electronics14132714 - 4 Jul 2025
Viewed by 259
Abstract
With the vigorous development of Internet of Things technologies, Cell-Free Radio Access Network (CF-RAN), leveraging its distributed coverage and single/multi-antenna Access Point (AP) coordination advantages, has become a key technology for supporting massive Machine-Type Communication (mMTC). However, under the grant-free random access mechanism, this network architecture faces the problem of information freshness degradation due to channel congestion. To address this issue, a joint decoding model based on logical grouping architecture is introduced to analyze the correlation between the successful packet transmission probability and the Peak Age of Information (PAoI) in both single-AP and multi-AP scenarios. On this basis, a global Particle Swarm Optimization (PSO) algorithm is designed to dynamically adjust the channel access probability to minimize the average PAoI across the network. To reduce signaling overhead, a PSO algorithm based on local topology information is further proposed to achieve collaborative optimization among neighboring APs. Simulation results demonstrate that the global PSO algorithm can achieve performance closely approximating the optimum, while the local PSO algorithm maintains similar performance without the need for global information. It is especially suitable for large-scale access scenarios with wide area coverage, providing an efficient solution for optimizing information freshness in CF-RAN. Full article
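The PSO-based tuning of channel access probabilities can be sketched with a generic particle swarm loop. The objective below is a toy stand-in for average PAoI in a slotted random-access model, where a packet gets through with probability p*(1-p)**(n-1) per slot; it is not the paper's exact model, and all parameters are illustrative:

```python
import random

def pso_minimize(objective, dim, bounds=(0.01, 0.99), n_particles=20,
                 n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global particle swarm optimizer (illustrative sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for average PAoI: with n contending devices per AP group,
# delivery succeeds with probability p * (1 - p)**(n - 1), and the mean
# PAoI shrinks as that success probability grows.
def avg_paoi(access_probs, n=10):
    return sum(1.0 / (p * (1 - p) ** (n - 1))
               for p in access_probs) / len(access_probs)

best, best_val = pso_minimize(avg_paoi, dim=3)  # one probability per AP group
```

In this toy objective the per-slot success rate peaks near p = 1/n, so the swarm drives the access probabilities toward small values as the device count grows.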

17 pages, 824 KiB  
Article
Resilient Event-Triggered H∞ Control for a Class of LFC Systems Subject to Deception Attacks
by Yunfan Wang, Zesheng Xi, Bo Zhang, Tao Zhang and Chuan He
Electronics 2025, 14(13), 2713; https://doi.org/10.3390/electronics14132713 - 4 Jul 2025
Viewed by 173
Abstract
This paper explores an event-triggered load frequency control (LFC) strategy for smart grids incorporating electric vehicles (EVs) under random deception attacks. The attack signals are launched over the channels between the sensor and the controller, compromising the integrity of transmitted data and disrupting LFC commands. To address bandwidth constraints, an event-triggered transmission scheme (ETTS) is developed to minimize communication frequency. Additionally, to mitigate the impact of random deception attacks in a public network environment, an integrated networked power grid model is proposed in which the joint impact of the ETTS and deceptive interference is captured within a unified analytical structure. Based on this framework, a sufficient condition for stabilization is established, enabling the concurrent design of the H∞ controller gain and the triggering condition. Finally, two case studies illustrate the effectiveness of the proposed scheme. Full article
(This article belongs to the Special Issue Knowledge Information Extraction Research)

22 pages, 14822 KiB  
Article
Partial Ambiguity Resolution Strategy for Single-Frequency GNSS RTK/INS Tightly Coupled Integration in Urban Environments
by Dashuai Chai, Xiqi Wang, Yipeng Ning and Wengang Sang
Electronics 2025, 14(13), 2712; https://doi.org/10.3390/electronics14132712 - 4 Jul 2025
Viewed by 167
Abstract
Single-frequency global navigation satellite system/inertial navigation system (GNSS/INS) integration has wide application prospects in urban environments; however, resolving the integer ambiguity correctly is the major challenge in GNSS-blocked environments. In this paper, a sequential partial ambiguity resolution (PAR) strategy for GNSS/INS tightly coupled integration is presented, based on the robust posteriori residuals, the elevation angle, and the azimuth in the body frame with INS aiding. First, a satellite is eliminated if the maximum absolute value of its robust posteriori residuals exceeds the set threshold. Otherwise, the satellites with elevation angles less than or equal to 35° are successively eliminated, beginning with the lowest. If all remaining satellites have elevation angles greater than 35°, they are divided into quadrants according to their azimuths calculated in the body frame. The satellite with the maximum azimuth in each quadrant is selected as a candidate; the candidates are eliminated one by one, and the remaining satellites are used to calculate the position dilution of precision (PDOP). Finally, the candidate whose elimination yields the lowest PDOP is removed. Two sets of vehicle-borne data from a low-cost GNSS/INS integrated system are used to analyze the performance of the proposed algorithm. These experiments demonstrate that the proposed algorithm has the highest ambiguity fixing rates among all the compared PAR methods, with fixing rates of 99.40% and 98.74% for the two datasets, respectively. Additionally, among all the methods compared in this paper, the proposed algorithm demonstrates the best positioning performance in GNSS-blocked environments. Full article
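The sequential elimination order described in the abstract can be sketched as a per-round selection routine. Every field name and threshold below is an illustrative assumption rather than the paper's data structure, and the PDOP computation is stubbed out:

```python
def select_satellite_to_drop(sats, residual_threshold=3.0, elev_cutoff=35.0):
    """One round of the elimination order from the abstract:
    1) drop the satellite whose largest robust posteriori residual
       exceeds the threshold;
    2) otherwise drop the lowest-elevation satellite at or below 35 deg;
    3) otherwise pick one candidate per azimuth quadrant (the maximum
       azimuth within the quadrant) and drop the candidate whose
       removal leaves the lowest PDOP (stubbed out below)."""
    worst = max(sats, key=lambda s: abs(s["residual"]))
    if abs(worst["residual"]) > residual_threshold:
        return worst
    low = [s for s in sats if s["elevation"] <= elev_cutoff]
    if low:
        return min(low, key=lambda s: s["elevation"])
    quadrants = {}
    for s in sats:
        q = int(s["azimuth"] % 360.0 // 90)
        if q not in quadrants or s["azimuth"] > quadrants[q]["azimuth"]:
            quadrants[q] = s
    def pdop_without(candidate):
        # Placeholder: a real implementation recomputes PDOP from the
        # satellite geometry with `candidate` removed.
        return 1.0 / max(len(sats) - 1, 1)
    return min(quadrants.values(), key=pdop_without)
```

Each call identifies one satellite to exclude; the outer PAR loop would repeat this until the remaining subset passes the ambiguity validation test.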
