Search Results (165)

Search Parameters:
Keywords = jitter generation

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. 
Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment. This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. 
Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions.
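The laboratory-versus-deployment gap quantified above can be illustrated with a toy resilience check: fit a classifier under clean conditions, then score it on both an idealized and an impaired test set and report the drop in percentage points. The sketch below uses synthetic two-feature data and a nearest-centroid classifier; the feature space, noise levels, and class layout are illustrative assumptions, not the paper's dataset or models.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise):
    """Two synthetic slice classes in a 2-D feature space (illustrative)."""
    X0 = rng.normal([0.0, 0.0], noise, size=(n, 2))
    X1 = rng.normal([3.0, 3.0], noise, size=(n, 2))
    return np.vstack([X0, X1]), np.r_[np.zeros(n), np.ones(n)]

def fit_centroids(X, y):
    """'Train' a nearest-centroid classifier."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(cent, X, y):
    pred = np.argmin(((X[:, None, :] - cent) ** 2).sum(-1), axis=1)
    return float((pred == y).mean())

Xtr, ytr = make_data(500, noise=0.5)       # "laboratory" training data
cent = fit_centroids(Xtr, ytr)

Xlab, ylab = make_data(500, noise=0.5)     # idealized test conditions
Xreal, yreal = make_data(500, noise=2.0)   # realistic impairments: heavy noise

lab_acc = accuracy(cent, Xlab, ylab)
real_acc = accuracy(cent, Xreal, yreal)
drop_pp = 100 * (lab_acc - real_acc)       # degradation in percentage points
print(f"lab={lab_acc:.3f} real={real_acc:.3f} drop={drop_pp:.1f}pp")
```

The same accuracy-drop-in-percentage-points bookkeeping is how the abstract reports CNN's 18.8 pp loss.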

22 pages, 4882 KB  
Article
82.5 GHz Photonic W-Band IM/DD PS-PAM4 Wireless Transmission over 300 m Based on Balanced and Lightweight DNN Equalizer Cascaded with Clustering Algorithm
by Jingtao Ge, Jie Zhang, Sicong Xu, Qihang Wang, Jingwen Lin, Sheng Hu, Xin Lu, Zhihang Ou, Siqi Wang, Tong Wang, Yichen Li, Yuan Ma, Jiali Chen, Tensheng Zhang and Wen Zhou
Sensors 2025, 25(19), 5986; https://doi.org/10.3390/s25195986 - 27 Sep 2025
Abstract
With the rise of 6G, the exponential growth of data traffic, the proliferation of emerging applications, and the ubiquity of smart devices, the demand for spectral resources is unprecedented. Terahertz communication (100 GHz–3 THz) plays a key role in alleviating spectrum scarcity through ultra-broadband transmission. In this study, terahertz optical carrier-based systems are employed: fiber-optic components generate the optical signals, and the signal is recovered via direct detection at the receiver, without relying on fiber-optic transmission. In these systems, deep learning-based equalization effectively compensates for nonlinear distortions, while probabilistic shaping (PS) enhances system capacity under modulation constraints. However, the probability distribution of signals processed by PS varies with amplitude, making it challenging to extract useful information from the minority class, which in turn limits the effectiveness of nonlinear equalization. Furthermore, in IM-DD systems, optical multipath interference (MPI) noise introduces signal-dependent amplitude jitter after direct detection, degrading system performance. To address these challenges, we propose a lightweight neural network equalizer assisted by the Synthetic Minority Oversampling Technique (SMOTE) and a clustering method. Applying SMOTE prior to the equalizer mitigates training difficulties arising from class imbalance, while the low-complexity clustering algorithm after the equalizer identifies edge jitter levels for decision-making. This joint approach compensates for both nonlinear distortion and jitter-related decision errors. Based on this algorithm, we conducted a 3.75 Gbaud W-band PAM4 wireless transmission experiment over 300 m at Fudan University’s Handan campus, achieving a bit error rate of 1.32 × 10⁻³, which corresponds to a 70.7% improvement over conventional schemes.
Compared to traditional equalizers, the proposed equalizer reduces algorithmic complexity by 70.6% and training sequence length by 33%, while achieving the same performance. These advantages highlight its significant potential for future optical carrier-based wireless communication systems.
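The class-rebalancing step can be sketched with a simplified SMOTE-style interpolation (a sketch of the technique's core idea, not the authors' implementation): synthetic minority samples are drawn on the segments between a minority sample and its nearest minority neighbour. The amplitude levels and counts below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def smote_like(X_min, n_new):
    """Simplified SMOTE: place each synthetic sample on the segment between
    a random minority sample and its nearest minority neighbour."""
    d = ((X_min[:, None, :] - X_min[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                        # nearest-neighbour index
    idx = rng.integers(0, len(X_min), n_new)     # seed samples
    gap = rng.random((n_new, 1))                 # interpolation fraction
    return X_min[idx] + gap * (X_min[nn[idx]] - X_min[idx])

# Imbalanced received amplitudes: the outer PAM4 level is the minority class.
X_maj = rng.normal(0.0, 0.05, size=(300, 1))     # majority level (context only)
X_min = rng.normal(1.0, 0.05, size=(30, 1))      # minority level
X_syn = smote_like(X_min, 270)                   # rebalance to 300 vs. 300
balanced = np.vstack([X_min, X_syn])
print(balanced.shape)
```

Because each synthetic point is a convex combination of two real minority points, the oversampled set stays inside the minority class's amplitude range.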
(This article belongs to the Special Issue Recent Advances in Optical Wireless Communications)

22 pages, 8860 KB  
Article
Generating Multi-View Action Data from a Monocular Camera Video by Fusing Human Mesh Recovery and 3D Scene Reconstruction
by Hyunsu Kim and Yunsik Son
Appl. Sci. 2025, 15(19), 10372; https://doi.org/10.3390/app151910372 - 24 Sep 2025
Abstract
Multi-view data, captured from various perspectives, is crucial for training view-invariant human action recognition models, yet its acquisition is hindered by spatio-temporal constraints and high costs. This study aims to develop the Pose Scene EveryWhere (PSEW) framework, which automatically generates temporally consistent, multi-view 3D human action data from a single monocular video. The proposed framework first predicts 3D human parameters from each video frame using a deep learning-based Human Mesh Recovery (HMR) model. Subsequently, it applies tracking, linear interpolation, and Kalman filtering to refine temporal consistency and produce naturalistic motion. The refined human meshes are then reconstructed into a virtual 3D scene by estimating a stable floor plane for alignment, and finally, novel-view videos are rendered using user-defined virtual cameras. As a result, the framework successfully generated multi-view data with realistic, jitter-free motion from a single video input. To assess fidelity to the original motion, we used Root Mean Square Error (RMSE) and Mean Per Joint Position Error (MPJPE) as metrics, achieving low average errors in both 2D (RMSE: 0.172; MPJPE: 0.202) and 3D (RMSE: 0.145; MPJPE: 0.206) space. PSEW provides an efficient, scalable, and low-cost solution that overcomes the limitations of traditional data collection methods, offering a remedy for the scarcity of training data for action recognition models.
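The temporal-consistency step can be illustrated with a constant-velocity Kalman filter applied to one noisy joint coordinate; the process/measurement noise values and the sinusoidal "motion" are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def kalman_smooth_1d(z, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter over one joint coordinate.
    z: noisy per-frame positions; q, r: assumed process/measurement noise."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        y = zk - H @ x                        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ y                         # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
clean = np.sin(2 * np.pi * t)                 # "true" joint motion
noisy = clean + rng.normal(0, 0.1, t.size)    # per-frame HMR jitter
smooth = kalman_smooth_1d(noisy)
print(np.abs(noisy - clean).mean(), np.abs(smooth - clean).mean())
```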
(This article belongs to the Special Issue Advanced Technologies Applied for Object Detection and Tracking)

38 pages, 3071 KB  
Article
A Hybrid Framework for the Sensitivity Analysis of Software-Defined Networking Performance Metrics Using Design of Experiments and Machine Learning Techniques
by Chekwube Ezechi, Mobayode O. Akinsolu, Wilson Sakpere, Abimbola O. Sangodoyin, Uyoata E. Uyoata, Isaac Owusu-Nyarko and Folahanmi T. Akinsolu
Information 2025, 16(9), 783; https://doi.org/10.3390/info16090783 - 9 Sep 2025
Abstract
Software-defined networking (SDN) is a transformative approach for managing modern network architectures, particularly in Internet-of-Things (IoT) applications. However, ensuring optimal SDN performance and security often requires robust sensitivity analysis (SA). To complement existing SA methods, this study proposes a new SA framework that integrates design of experiments (DOE) and machine-learning (ML) techniques. Although existing SA methods have been shown to be effective and scalable, most of them have yet to hybridize anomaly detection and classification (ADC) and data augmentation into a single, unified framework. To fill this gap, a targeted application of well-established techniques is proposed: they are hybridized to undertake a more robust SA of a typified SDN-reliant IoT network. The proposed hybrid framework combines Latin hypercube sampling (LHS)-based DOE and generative adversarial network (GAN)-driven data augmentation to improve SA and support ADC in SDN-reliant IoT networks; hence, it is called DOE-GAN-SA. In DOE-GAN-SA, LHS ensures uniform parameter sampling, while the GAN generates synthetic data to augment data derived from typified real-world SDN-reliant IoT network scenarios. DOE-GAN-SA also employs a classification and regression tree (CART) to validate the GAN-generated synthetic dataset. Through the proposed framework, ADC is implemented, and an artificial neural network (ANN)-driven SA of an SDN-reliant IoT network is carried out. The performance of the SDN-reliant IoT network is analyzed under two conditions: a normal operating scenario and a distributed-denial-of-service (DDoS) flooding attack scenario, using throughput, jitter, and response time as performance metrics. To statistically validate the experimental findings, hypothesis tests are conducted to confirm the significance of all the inferences.
The results demonstrate that integrating LHS and GAN significantly enhances SA, enabling the identification of critical SDN parameters affecting the modeled SDN-reliant IoT network performance. ADC is also better supported, achieving higher DDoS flooding attack detection accuracy through the incorporation of synthetic network observations that emulate real-time traffic. Overall, this work highlights the potential of hybridizing LHS-based DOE, GAN-driven data augmentation, and ANN-assisted SA for robust network behavioral analysis and characterization in a new hybrid framework.
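The LHS stage of such a framework can be sketched with SciPy's quasi-Monte Carlo module: each parameter range is cut into n equal strata and every stratum receives exactly one sample, which is what gives LHS its uniform coverage. The parameter names and ranges below are hypothetical, not the paper's design space.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical SDN parameter ranges to explore (names are illustrative):
# flow-rule timeout (s), controller poll interval (ms), background loss rate.
bounds_low = [1.0, 10.0, 0.0]
bounds_high = [60.0, 500.0, 0.1]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=20)                        # 20 points in [0, 1)^3
design = qmc.scale(unit, bounds_low, bounds_high)  # map onto parameter ranges
print(design.shape)
```

Each of the 20 design points would then drive one simulated network scenario; the stratification guarantees no parameter region is left unsampled.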
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)

15 pages, 1780 KB  
Article
Prosodic Spatio-Temporal Feature Fusion with Attention Mechanisms for Speech Emotion Recognition
by Kristiawan Nugroho, Imam Husni Al Amin, Nina Anggraeni Noviasari and De Rosal Ignatius Moses Setiadi
Computers 2025, 14(9), 361; https://doi.org/10.3390/computers14090361 - 31 Aug 2025
Abstract
Speech Emotion Recognition (SER) plays a vital role in supporting applications such as healthcare, human–computer interaction, and security. However, many existing approaches still face challenges in achieving robust generalization and maintaining high recall, particularly for emotions related to stress and anxiety. This study proposes a dual-stream hybrid model that combines prosodic features with spatio-temporal representations derived from the Multitaper Mel-Frequency Spectrogram (MTMFS) and the Constant-Q Transform Spectrogram (CQTS). Prosodic cues, including pitch, intensity, jitter, shimmer, HNR, pause rate, and speech rate, were processed using dense layers, while MTMFS and CQTS features were encoded with CNN and BiGRU. A Multi-Head Attention mechanism was then applied to adaptively fuse the two feature streams, allowing the model to focus on the most relevant emotional cues. Evaluations conducted on the RAVDESS dataset with subject-independent 5-fold cross-validation demonstrated an accuracy of 97.64% and a macro F1-score of 0.9745. These results confirm that combining prosodic and advanced spectrogram features with attention-based fusion improves precision, recall, and overall robustness, offering a promising framework for more reliable SER systems.
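The prosodic jitter and shimmer cues mentioned above have simple standard definitions: local jitter is the mean absolute difference of consecutive pitch periods relative to the mean period, and local shimmer is the analogous ratio computed on peak amplitudes. A minimal sketch (the period values are illustrative):

```python
import numpy as np

def local_jitter(periods):
    """Local jitter (%): mean absolute difference of consecutive pitch
    periods divided by the mean period (standard prosodic definition)."""
    periods = np.asarray(periods, float)
    return 100 * np.abs(np.diff(periods)).mean() / periods.mean()

def local_shimmer(amps):
    """Local shimmer (%): the same ratio computed on peak amplitudes."""
    amps = np.asarray(amps, float)
    return 100 * np.abs(np.diff(amps)).mean() / amps.mean()

# A perfectly periodic voice gives zero jitter; small cycle-to-cycle
# perturbations give a small, nonzero value.
steady = [8.0, 8.0, 8.0, 8.0]               # pitch periods in ms
perturbed = [8.0, 8.2, 7.9, 8.1]
print(local_jitter(steady), local_jitter(perturbed))
```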
(This article belongs to the Special Issue Multimodal Pattern Recognition of Social Signals in HCI (2nd Edition))

16 pages, 7655 KB  
Article
A Low-Jitter Delay Synchronization System Applied to Ti:sapphire Femtosecond Laser Amplifier
by Mengyao Wu, Guodong Liu, Meixuan He, Wenjun Shu, Yunpeng Jiao, Haojie Li, Weilai Yao and Xindong Liang
Appl. Sci. 2025, 15(17), 9424; https://doi.org/10.3390/app15179424 - 28 Aug 2025
Abstract
Femtosecond lasers have evolved continuously over the past three decades, enabling the transition of research from fundamental studies in atomic and molecular physics to the realm of practical applications. In femtosecond laser amplifiers, the seed laser pulse and the pump laser must be strictly synchronized so that they overlap precisely during amplification; otherwise, pulse amplification efficiency declines and undesired phase noise is generated. To meet this requirement, this study designed a synchronous timing signal generation system based on the combination of an FPGA and analog delay. The system was investigated from three aspects: delay pulse width adjustment within a certain range, precise delay resolution, and external trigger jitter compensation. By using an FPGA digital counter to achieve coarse-delay control over a wide range and combining it with a passive fine-delay method, the system can generate synchronous delay signals with a large delay range, high precision, and multiple channels. To address the asynchronous phase between the external trigger and the internal clock, a jitter compensation circuit was proposed, consisting of an active gated integrator and an output comparator, which compensates for the uncertainty of trigger timing through analog delay. Verification shows that the system operates stably under an external trigger with a repetition frequency of 80 MHz. The output delay range is from 10 ns to 100 μs, the coarse-delay resolution is 10 ns, the fine-delay adjustment step is 1.25 ns, and the pulse jitter is reduced from a maximum of 10 ns to the hundred-picosecond level. This meets the requirements of femtosecond laser amplifiers for synchronous trigger signals and offers essential technical support for the high-power, high-efficiency amplification of Ti:sapphire ultrashort laser pulses.
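The coarse-plus-fine scheme described above amounts to decomposing a requested delay into 10 ns counter ticks plus 1.25 ns analog steps. A minimal sketch of that decomposition (the function name and the rounding policy are assumptions; the step sizes are the ones quoted in the abstract):

```python
def split_delay(target_ns, coarse_step=10.0, fine_step=1.25):
    """Split a requested delay into coarse counter ticks (10 ns) and
    fine analog steps (1.25 ns), mirroring a coarse+fine delay chain."""
    coarse = int(target_ns // coarse_step)          # FPGA counter ticks
    remainder = target_ns - coarse * coarse_step
    fine = round(remainder / fine_step)             # analog fine steps
    achieved = coarse * coarse_step + fine * fine_step
    return coarse, fine, achieved

coarse, fine, achieved = split_delay(1237.5)
print(coarse, fine, achieved)
```

Since 10 ns is an exact multiple of 1.25 ns, the residual quantization error is bounded by half a fine step (0.625 ns) for any requested delay.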

25 pages, 4739 KB  
Article
YOLOv5s-F: An Improved Algorithm for Real-Time Monitoring of Small Targets on Highways
by Jinhao Guo, Guoqing Geng, Liqin Sun and Zhifan Ji
World Electr. Veh. J. 2025, 16(9), 483; https://doi.org/10.3390/wevj16090483 - 25 Aug 2025
Abstract
To address the challenges of real-time monitoring via highway vehicle-mounted cameras—specifically, the difficulty in detecting distant pedestrians and vehicles in real time—this study proposes an enhanced object detection algorithm, YOLOv5s-F. Firstly, the FasterNet network structure is adopted to improve the model’s runtime speed. Secondly, the attention mechanism BRA, which is derived from the Transformer algorithm, and a 160 × 160 small-object detection layer are introduced to enhance small target detection performance. Thirdly, the improved upsampling operator CARAFE is incorporated to boost the localization and classification accuracy of small objects. Finally, Focal EIoU is employed as the localization loss function to accelerate model training convergence. Quantitative experiments on high-speed sequences show that Focal EIoU reduces bounding box jitter by 42.9% and improves tracking stability (consecutive frame overlap) by 11.4% compared to CIoU, while accelerating convergence by 17.6%. Results show that compared with the YOLOv5s baseline network, the proposed algorithm reduces computational complexity and parameter count by 10.1% and 24.6%, respectively, while increasing detection speed and accuracy by 15.4% and 2.1%. Transfer learning experiments on the VisDrone2019 and Highway-100k datasets demonstrate that the algorithm outperforms YOLOv5s in average precision across all target categories. On NVIDIA Jetson Xavier NX, YOLOv5s-F achieves 32 FPS after quantization, meeting the real-time requirements of in-vehicle monitoring. The YOLOv5s-F algorithm not only meets the real-time detection and accuracy requirements for small objects but also exhibits strong generalization capabilities. This study clarifies core challenges in highway small-target detection and achieves accuracy–speed improvements via three key innovations, with all experiments being reproducible.
The code and dataset for this study are available from the authors upon request by email.
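The bounding-box jitter and consecutive-frame-overlap metrics reported above can be sketched as follows: frame-to-frame displacement of box centers as a jitter proxy, and standard IoU for overlap. The three-frame track below is a toy example, not the paper's evaluation protocol.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def center_jitter(boxes):
    """Mean frame-to-frame displacement of box centers (a jitter proxy)."""
    c = np.array([[(b[0] + b[2]) / 2, (b[1] + b[3]) / 2] for b in boxes])
    return np.linalg.norm(np.diff(c, axis=0), axis=1).mean()

track = [(10, 10, 30, 30), (11, 10, 31, 30), (10, 11, 30, 31)]
print(iou(track[0], track[1]), center_jitter(track))
```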
(This article belongs to the Special Issue Recent Advances in Autonomous Vehicles)

52 pages, 3006 KB  
Article
Empirical Performance Analysis of WireGuard vs. OpenVPN in Cloud and Virtualised Environments Under Simulated Network Conditions
by Joel Anyam, Rajiv Ranjan Singh, Hadi Larijani and Anand Philip
Computers 2025, 14(8), 326; https://doi.org/10.3390/computers14080326 - 13 Aug 2025
Abstract
With the rise in cloud computing and virtualisation, secure and efficient VPN solutions are essential for network connectivity. We present a systematic performance comparison of OpenVPN (v2.6.12) and WireGuard (v1.0.20210914) across Azure and VMware environments, evaluating throughput, latency, jitter, packet loss, and resource utilisation. Testing revealed that the protocol performance is highly context dependent. In VMware environments, WireGuard demonstrated a superior TCP throughput (210.64 Mbps vs. 110.34 Mbps) and lower packet loss (12.35% vs. 47.01%). In Azure environments, both protocols achieved a similar baseline throughput (~280–290 Mbps), though OpenVPN performed better under high-latency conditions (120 Mbps vs. 60 Mbps). Resource utilisation showed minimal differences, with WireGuard maintaining slightly better memory efficiency. Security Efficiency Index calculations revealed environment-specific trade-offs: WireGuard showed marginal advantages in Azure, while OpenVPN demonstrated better throughput efficiency in VMware, though WireGuard remained superior for latency-sensitive applications. Our findings indicate protocol selection should be guided by deployment environment and application requirements rather than general superiority claims.
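Jitter in measurements like these is commonly estimated with the RFC 3550 running interarrival-jitter formula, a definition also used by common measurement tools; a minimal sketch with illustrative transit times:

```python
def rtp_jitter(transit_times):
    """RFC 3550 running interarrival-jitter estimate:
    J += (|D| - J) / 16 for each new transit-time difference D."""
    j = 0.0
    for prev, cur in zip(transit_times, transit_times[1:]):
        d = abs(cur - prev)
        j += (d - j) / 16.0
    return j

# Transit times (ms) for a steady flow vs. one with a single delay spike.
steady = [50.0] * 20
spiky = [50.0] * 10 + [80.0] + [50.0] * 9
print(rtp_jitter(steady), rtp_jitter(spiky))
```

The 1/16 smoothing factor makes the estimate a slowly decaying average, so a single spike raises the jitter figure and then fades over subsequent packets.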
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)

27 pages, 3770 KB  
Article
Precision Time Interval Generator Based on CMOS Counters and Integration with IoT Timing Systems
by Nebojša Andrijević, Zoran Lovreković, Vladan Radivojević, Svetlana Živković Radeta and Hadžib Salkić
Electronics 2025, 14(16), 3201; https://doi.org/10.3390/electronics14163201 - 12 Aug 2025
Abstract
Precise time interval generation is a cornerstone of modern measurement, automation, and distributed control systems, particularly within Internet of Things (IoT) architectures. This paper presents the design, implementation, and evaluation of a low-cost, high-precision time interval generator based on Complementary Metal-Oxide Semiconductor (CMOS) logic counters (Integrated Circuits (ICs) 7493 and 4017) and inverter-based crystal oscillators (IC 74LS04). The proposed system enables frequency division from 1 MHz down to 1 Hz through a cascade of binary and Johnson counters, enhanced with digitally controlled multiplexers for output signal selection. Unlike conventional timing systems relying on expensive Field-Programmable Gate Array (FPGA) or Global Navigation Satellite System (GNSS)-based synchronization, this approach offers a robust, locally controlled reference clock suitable for IoT nodes without network access. The hardware is integrated with Arduino and ESP32 microcontrollers via General-Purpose Input/Output (GPIO) level interfacing, supporting real-time timestamping, deterministic task execution, and microsecond-level synchronization. The system was validated through Python-based simulations incorporating Gaussian jitter models, as well as real-time experimental measurements using Arduino’s micros() function. Results demonstrated stable pulse generation with timing deviations consistently below ±3 µs across various frequency modes. A comparative analysis confirms the advantages of this CMOS-based timing solution over Real-Time Clock (RTC), Network Time Protocol (NTP), and Global Positioning System (GPS)-based methods in terms of local autonomy, cost, and integration simplicity. This work provides a practical and scalable time reference architecture for educational, industrial, and distributed applications, establishing a bridge between classical digital circuit design and modern IoT timing requirements.
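The divider cascade and the Gaussian-jitter simulation mentioned above can be sketched together: divide 1 MHz down to 1 Hz through a chain of counter stages, then add Gaussian edge jitter to the ideal output edges. The stage layout (six decade dividers) and the jitter magnitude are assumptions for illustration, not the paper's exact counter topology.

```python
import numpy as np

rng = np.random.default_rng(3)

def divided_pulse_edges(f_in_hz, divisors, n_pulses, jitter_std_s):
    """Ideal output edge times of a counter divider chain, plus Gaussian
    edge jitter (a simplified version of the described simulation)."""
    f_out = f_in_hz
    for d in divisors:
        f_out /= d                       # each counter stage divides the clock
    period = 1.0 / f_out
    ideal = np.arange(n_pulses) * period
    return ideal, ideal + rng.normal(0.0, jitter_std_s, n_pulses)

# Six decade stages: 1 MHz -> 1 Hz, with 1 µs r.m.s. edge jitter (assumed).
ideal, jittered = divided_pulse_edges(1_000_000, [10] * 6, 1000, 1e-6)
dev_us = np.abs(jittered - ideal).max() * 1e6
print(dev_us)
```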
(This article belongs to the Section Circuit and Signal Processing)

25 pages, 22731 KB  
Article
Scalable and Efficient GCL Scheduling for Time-Aware Shaping in Autonomous and Cyber-Physical Systems
by Chengwei Zhang and Yun Wang
Future Internet 2025, 17(8), 321; https://doi.org/10.3390/fi17080321 - 22 Jul 2025
Abstract
The evolution of the internet towards supporting time-critical applications, such as industrial cyber-physical systems (CPSs) and autonomous systems, has created an urgent demand for networks capable of providing deterministic, low-latency communication. Autonomous vehicles represent a particularly challenging use case within this domain, requiring both reliability and determinism for massive data streams—a requirement that traditional Ethernet technologies cannot satisfy. This paper addresses this critical gap by proposing a comprehensive scheduling framework based on Time-Aware Shaping (TAS) within the Time-Sensitive Networking (TSN) standard. The framework features two key contributions: (1) a novel baseline scheduling algorithm that incorporates a sub-flow division mechanism to enhance schedulability for high-bandwidth streams, computing Gate Control Lists (GCLs) via an iterative SMT-based method; (2) a separate heuristic-based computation acceleration algorithm to enable fast, scalable GCL generation for large-scale networks. Through extensive simulations, the proposed baseline algorithm demonstrates a reduction in end-to-end latency of up to 59% compared to standard methods, with jitter controlled at the nanosecond level. The acceleration algorithm is shown to compute schedules for 200 data streams in approximately one second. The framework’s effectiveness is further validated on a real-world TSN hardware testbed, confirming its capability to achieve deterministic transmission with low latency and jitter in a physical environment. This work provides a practical and scalable solution for deploying deterministic communication in complex autonomous and cyber-physical systems.
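GCL computation can be illustrated with a deliberately simple first-fit heuristic that places each stream's transmission window at the earliest non-overlapping offset in the cycle; this is a sketch of the general scheduling idea, not the paper's SMT-based or accelerated algorithm. Stream names and window lengths are hypothetical.

```python
def greedy_gcl(streams, cycle_us):
    """First-fit time-slot assignment for a TAS Gate Control List:
    longest streams first, each placed at the earliest offset that does
    not overlap already-scheduled windows (heuristic sketch only)."""
    schedule = []                            # list of (offset, length, name)
    for name, length in sorted(streams, key=lambda s: -s[1]):
        t = 0
        for off, ln, _ in sorted(schedule):
            if t + length <= off:            # gap before this window fits
                break
            t = max(t, off + ln)             # otherwise skip past it
        if t + length > cycle_us:
            raise ValueError(f"cannot schedule {name}")
        schedule.append((t, length, name))
    return sorted(schedule)

gcl = greedy_gcl([("cam", 30), ("lidar", 50), ("ctrl", 10)], cycle_us=100)
print(gcl)
```

Real TSN scheduling must additionally respect per-hop propagation, guard bands, and deadline constraints, which is why the paper resorts to SMT solving; the sketch only shows the non-overlap core of the problem.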

16 pages, 4741 KB  
Article
Plug-In Repetitive Control for Magnetic Bearings Based on Equivalent-Input-Disturbance
by Gang Huang, Bolong Liu, Songlin Yuan and Xinyi Shi
Eng 2025, 6(7), 141; https://doi.org/10.3390/eng6070141 - 28 Jun 2025
Abstract
The radial magnetic bearing system is an open-loop, unstable, strongly nonlinear system with a high rotor speed, a predisposition to jitter, and poor interference immunity. The system is subjected to the main interference generated by gravity, and rotor imbalance and sensor runout seriously affect the system’s rotor position control performance. A plug-in repetitive control method based on equivalent-input-disturbance (EID) is presented to address the issue of decreased control accuracy of the magnetic bearing system caused by disturbances from gravity, rotor imbalance, and sensor runout. First, a linearized model of the magnetic bearing rotor containing parameter fluctuations due to the eddy current effect and temperature rise effect is established, and a plug-in repetitive controller (PRC) is designed to enhance the rejection of periodic disturbances. Next, an EID system is introduced, and a Luenberger observer is used to estimate the state variables and disturbances of the system. The estimates of the EID are then used for feedforward compensation to address the issue of large overshoot in the system. Finally, simulations are conducted for comparison with the PID and PRC control methods. The method assessed in this paper improves control performance by averages of 87.9% and 57.7%, respectively, and reduces overshoot by an average of 66.5% under various classes of disturbances, which demonstrates the effectiveness of combining a plug-in repetitive controller with the EID theory.
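The core of a repetitive controller is the internal-model update u[k] = u[k-N] + kr * e[k-N], which cancels any disturbance of period N over successive cycles. The sketch below applies it to a static unity plant, so the per-period error shrinks geometrically by a factor (1 - kr); the real system adds plant dynamics and the EID observer, which this illustration omits.

```python
import numpy as np

def repetitive_reject(periods, n, kr=0.5):
    """Plug-in repetitive control against a period-n disturbance on a
    static unity plant: u[k] = u[k-n] + kr * e[k-n]. Returns the mean
    absolute error in each disturbance period."""
    k_total = periods * n
    d = np.sin(2 * np.pi * np.arange(k_total) / n)  # periodic disturbance
    u = np.zeros(k_total)
    e = np.zeros(k_total)
    for k in range(k_total):
        if k >= n:
            u[k] = u[k - n] + kr * e[k - n]         # internal-model update
        e[k] = d[k] - u[k]                          # residual error
    return np.abs(e).reshape(periods, n).mean(axis=1)

errs = repetitive_reject(periods=8, n=40)
print(errs)
```

With kr = 0.5 the residual halves every period, which is the mechanism behind the steadily improving rejection of rotor-synchronous disturbances.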
(This article belongs to the Section Electrical and Electronic Engineering)

19 pages, 4218 KB  
Article
A Multi-Deformable-Mirror 500 Hz Adaptive Optical System for Atmospheric Turbulence Simulation, Real-Time Reconstruction, and Wavefront Correction Using Bimorph and Tip-Tilt Correctors
by Ilya Galaktionov and Vladimir Toporovsky
Photonics 2025, 12(6), 592; https://doi.org/10.3390/photonics12060592 - 9 Jun 2025
Abstract
Atmospheric turbulence introduces distortions to the wavefront of propagating optical radiation. It causes image resolution degradation in astronomical telescopes and significantly reduces the power density of radiation on the target in focusing applications. The impact of turbulence fluctuations on the wavefront can be investigated under laboratory conditions using either a fan heater (roughly tuned), a phase plate, or a deformable mirror (finely tuned) as a turbulence-generation device and a wavefront sensor as a wavefront-distortion measurement device. We designed and developed a software simulator and an experimental setup for the reconstruction of atmospheric turbulence-phase fluctuations, as well as an adaptive optical system for the compensation of induced aberrations. Both systems use two 60 mm, 92-channel, bimorph deformable mirrors and two tip-tilt correctors. The wavefront is measured using a high-speed Shack–Hartmann wavefront sensor based on an industrial CMOS camera. The system achieved a 500 Hz correction frame rate, and the amplitude of aberrations decreased from 2.6 μm to 0.3 μm during the correction procedure. The tip-tilt correctors reduced the focal-spot centroid jitter range 2–3-fold, from ±26.5 μm and ±24 μm to ±11.5 μm and ±5.5 μm.
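The centroid-jitter figures above come down to tracking the intensity-weighted centroid of the focal spot across frames and quoting half its peak-to-peak excursion per axis. A minimal sketch on synthetic Gaussian spots (the grid size, spot width, and jitter magnitude are assumptions, in pixels rather than microns):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid of a focal-spot image, in pixels (x, y)."""
    y, x = np.indices(img.shape)
    s = img.sum()
    return np.array([(x * img).sum() / s, (y * img).sum() / s])

def jitter_range(centroids):
    """Half the peak-to-peak centroid excursion per axis (the quoted ± range)."""
    c = np.asarray(centroids)
    return (c.max(axis=0) - c.min(axis=0)) / 2

rng = np.random.default_rng(4)
y, x = np.indices((32, 32))
frames = []
for _ in range(50):
    cx, cy = 16 + rng.normal(0, 1.0), 16 + rng.normal(0, 1.0)  # spot wander
    spot = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8)        # Gaussian spot
    frames.append(centroid(spot))
rx, ry = jitter_range(frames)
print(rx, ry)
```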
(This article belongs to the Special Issue Optical Sensing Technologies, Devices and Their Data Applications)
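The closed-loop correction scheme described in the abstract — a wavefront sensor measuring residual slopes and a deformable mirror driven by a reconstructor — can be sketched as a simple integrator loop. This is a minimal illustration, not the paper's implementation: the influence matrix `A`, the gain, and the dimensions (92 actuators, matching a 92-channel bimorph mirror) are assumptions for the sketch.

```python
import numpy as np

# Minimal sketch of iterative adaptive-optics correction: a least-squares
# reconstructor maps Shack-Hartmann slopes to actuator commands, and an
# integrator drives the residual wavefront toward zero.
rng = np.random.default_rng(0)
n_slopes, n_act = 128, 92           # illustrative sensor/mirror dimensions
A = rng.normal(size=(n_slopes, n_act))  # influence matrix: slopes per command
R = np.linalg.pinv(A)               # least-squares wavefront reconstructor

true_cmd = rng.normal(size=n_act)   # aberration expressed in actuator space
cmd = np.zeros(n_act)               # current mirror command
gain = 0.5                          # integrator gain of the control loop
for _ in range(20):                 # e.g. 20 frames of a 500 Hz loop
    slopes = A @ (true_cmd - cmd)   # residual wavefront seen by the sensor
    cmd += gain * (R @ slopes)      # integrator update toward correction

residual = np.linalg.norm(true_cmd - cmd) / np.linalg.norm(true_cmd)
```

With an exact reconstructor the residual shrinks geometrically as `(1 - gain)` per iteration, which is why a sub-unity gain is used: it trades convergence speed for robustness to sensor noise.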

22 pages, 18370 KB  
Article
Digital Domain TDI-CMOS Imaging Based on Minimum Search Domain Alignment
by Han Liu, Shuping Tao, Qinping Feng and Zongxuan Li
Sensors 2025, 25(11), 3490; https://doi.org/10.3390/s25113490 - 31 May 2025
Viewed by 872
Abstract
In this study, we propose a digital domain TDI-CMOS dynamic imaging method based on minimum search domain alignment, which consists of five steps: image-motion vector computation, image jitter estimation, feature pair matching, global displacement estimation, and TDI accumulation. To address the challenge of matching feature point pairs in dark and low-contrast images, our method first optimizes the size and position of the search box using an image motion compensation mathematical model and a satellite platform jitter model. Then, the feature point pairs that best match the extracted feature points of the reference frame are identified within the search box of the target frame. Next, a kernel density estimation algorithm is proposed to compute the displacement probability density of each feature point pair and fit the actual displacement between two frames. Finally, we align and superimpose all the frames in the digital domain to generate a delayed integral image. Experimental results show that this method greatly improves the alignment speed and accuracy of dark and low-contrast images during dynamic imaging. It effectively mitigates the effects of image motion and jitter from the spatial camera, keeping the fitted global image-motion error below 0.01 pixels; after compensation, the MTF coefficient of the image-motion and jitter link improves to 0.68, enhancing the TDI imaging quality. Full article
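The kernel-density step above — estimating a global shift from many noisy per-feature displacements — can be illustrated with a small sketch. The data, the Gaussian kernel, and the bandwidth are illustrative assumptions, not the paper's parameters; the point is that the density peak is robust to mismatched outlier pairs, unlike a plain mean.

```python
import numpy as np

# Sketch: kernel density estimation over per-feature displacements picks the
# global inter-frame shift as the mode of the estimated density.
rng = np.random.default_rng(1)
# 1-D displacements of matched pairs: most cluster near 3.2 px, a few outliers
disp = np.concatenate([rng.normal(3.2, 0.05, 80), rng.uniform(-10, 10, 8)])

def kde(x, samples, h=0.1):
    """Gaussian kernel density estimate of `samples`, evaluated at points x."""
    d = (x[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

grid = np.linspace(disp.min(), disp.max(), 2001)
global_shift = grid[np.argmax(kde(grid, disp))]  # mode, robust to outliers
```

A least-squares mean over the same data would be pulled toward the uniform outliers; taking the KDE mode discards them without an explicit outlier-rejection pass.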

19 pages, 3395 KB  
Article
End-to-End Online Video Stitching and Stabilization Method Based on Unsupervised Deep Learning
by Pengyuan Wang, Pinle Qin, Rui Chai, Jianchao Zeng, Pengcheng Zhao, Zuojun Chen and Bingjie Han
Appl. Sci. 2025, 15(11), 5987; https://doi.org/10.3390/app15115987 - 26 May 2025
Viewed by 1088
Abstract
The limited field of view, cumulative inter-frame jitter, and dynamic parallax interference in handheld video stitching often lead to misalignment and distortion. In this paper, we propose an end-to-end, unsupervised deep-learning framework that jointly performs real-time video stabilization and stitching. First, a collaborative optimization architecture allows the stabilization and stitching modules to share parameters and propagate errors through a fully differentiable network, ensuring consistent image alignment. Second, a Markov trajectory smoothing strategy in relative coordinates models inter-frame motion as incremental relationships, effectively reducing cumulative errors. Third, a dynamic attention mask generates spatiotemporal weight maps based on foreground motion prediction, suppressing misalignment caused by dynamic objects. Experimental evaluation on diverse handheld sequences shows that our method achieves higher stitching quality, lower geometric distortion rates, and improved video stability compared to state-of-the-art baselines, while maintaining real-time processing capabilities. Ablation studies validate that relative trajectory modeling substantially mitigates long-term jitter and that the dynamic attention mask enhances stitching accuracy in dynamic scenes. These results demonstrate that the proposed framework provides a robust solution for high-quality, real-time handheld video stitching. Full article
(This article belongs to the Collection Trends and Prospects in Multimedia)
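The relative-coordinate trajectory idea above — keep inter-frame motion as increments, smooth the increments, then re-accumulate — can be sketched in a few lines. This is a simplified 1-D illustration under assumed data and a plain moving-average filter, not the paper's learned Markov smoothing.

```python
import numpy as np

# Sketch: smoothing a camera trajectory in relative (incremental) coordinates.
rng = np.random.default_rng(2)
increments = 0.5 + rng.normal(0, 2.0, 200)  # jittery per-frame camera shifts

win = 15
kernel = np.ones(win) / win
smooth_inc = np.convolve(increments, kernel, mode="same")  # filter increments

raw_path = np.cumsum(increments)     # shaky absolute trajectory
stable_path = np.cumsum(smooth_inc)  # stabilized trajectory

# The per-frame warp applied for stabilization is the path difference.
warp = stable_path - raw_path
```

Filtering the increments rather than the absolute positions is what limits cumulative error: each smoothed step depends only on a local window of relative motions, so drift does not compound over long sequences.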

23 pages, 2959 KB  
Article
Performance Analysis of MPT-GRE Multipath Networks Under Out-of-Order Packet Arrival
by Naseer Al-Imareen and Gábor Lencse
Electronics 2025, 14(11), 2138; https://doi.org/10.3390/electronics14112138 - 24 May 2025
Viewed by 755
Abstract
Network packets may arrive out of their original order due to network delays, transmission speed variations, congestion, or uneven resource distribution. Out-of-order arrival degrades network performance, resulting in jitter, packet loss, and reduced throughput. The Multipath Tunnel-Generic Routing Encapsulation (MPT-GRE) architecture addresses this issue through a packet reordering mechanism designed for multipath GRE tunnels with User Datagram Protocol (UDP) encapsulation. This study analyses the path-specific delays, jitter, and transmission speed constraints to evaluate the influence of out-of-order packets on the MPT-GRE tunnel's throughput aggregation capability. By comparing scenarios with and without the reordering mechanism, the results demonstrate that the reordering mechanism substantially improves traffic throughput in both symmetric and asymmetric channel configurations. Additionally, the study emphasizes the critical role of optimizing the reordering window parameter for effective performance. These findings confirm that packet reordering mechanisms significantly enhance MPT-GRE network performance by reducing the negative effects of delays and out-of-order arrivals. Full article
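The reordering-window mechanism discussed above can be illustrated with a toy buffer: packets are held until their sequence number is next in line, but the buffer never grows beyond a configurable window, at which point the oldest held packet is released anyway. This is a simplified sketch of the general idea, not the actual MPT-GRE implementation or its window semantics.

```python
# Sketch of a fixed-window packet reordering buffer: in-order packets pass
# through immediately; out-of-order packets wait, but only up to `window`
# buffered packets before the mechanism gives up and releases the oldest.
def reorder(packets, window):
    """Release packet sequence numbers in order, buffering at most `window`."""
    buffer, expected, out = {}, 0, []
    for seq in packets:
        buffer[seq] = seq
        # Drain every in-order packet currently held in the buffer.
        while expected in buffer:
            out.append(buffer.pop(expected))
            expected += 1
        # Window exceeded: stop waiting for the gap, release the oldest.
        if len(buffer) > window:
            oldest = min(buffer)
            out.append(buffer.pop(oldest))
            expected = oldest + 1
            while expected in buffer:
                out.append(buffer.pop(expected))
                expected += 1
    out.extend(buffer.pop(k) for k in sorted(buffer))  # flush at end of flow
    return out

print(reorder([0, 2, 1, 4, 3, 5], window=2))  # → [0, 1, 2, 3, 4, 5]
```

The window parameter captures the trade-off the study optimizes: a larger window repairs more reordering at the cost of added latency and buffering, while a window of zero degenerates to pass-through.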
