Search Results (1,237)

Search Parameters:
Keywords = critical data transmission

26 pages, 6711 KB  
Article
A Convolutional Autoencoder-Based Method for Vector Curve Data Compression
by Shuo Zhang, Pengcheng Liu, Hongran Ma and Mingwu Guo
ISPRS Int. J. Geo-Inf. 2026, 15(4), 164; https://doi.org/10.3390/ijgi15040164 (registering DOI) - 11 Apr 2026
Abstract
(1) Background: Curve data compression plays a critical role in efficient storage, transmission, and multi-scale visualization of vector spatial data, especially for complex geographic boundaries. Achieving high compression efficiency while preserving geometric fidelity remains a challenging task. (2) Methods: This study proposes a vector curve compression framework based on a convolutional autoencoder. Curve data are segmented and resampled to unify network input, after which coordinate-difference sequences are encoded into low-dimensional latent vectors through convolutional layers and reconstructed via a symmetric decoder. (3) Results: Experiments conducted on a global island boundary dataset demonstrate that the proposed method achieves effective data reduction with stable reconstruction accuracy. Specifically, compared with the classical Douglas–Peucker (DP) algorithm, Fourier series (FS) methods, and fully connected autoencoders (FCAs), the 1D CAE exhibits superior and more robust reconstruction performance, especially under high compression ratios. It achieves the lowest positional deviation (PD = 42.41) and the highest spatial fidelity (IoU = 0.9991, with a relative area error of only 0.0067%), while maintaining high computational efficiency (57.32 s). Sensitivity analyses reveal that a convolution kernel size of 1 × 7 and a segment length of 25 km yield the optimal trade-off between representational capacity and model stability. (4) Conclusions: The proposed method enables efficient vector curve compression and reliable coastline reconstruction, and is particularly suitable for small- and medium-scale cartographic applications up to a map scale of 1:250 K. Full article
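The preprocessing pipeline described in the abstract (segmenting curves, resampling to a uniform input length, then encoding coordinate-difference sequences) can be sketched as follows; the function names and this NumPy implementation are illustrative assumptions, not code from the paper:

```python
import numpy as np

def resample_curve(points: np.ndarray, n: int) -> np.ndarray:
    """Resample a polyline to n points equally spaced along its arc length."""
    seg = np.diff(points, axis=0)
    dist = np.concatenate([[0.0], np.cumsum(np.hypot(seg[:, 0], seg[:, 1]))])
    t = np.linspace(0.0, dist[-1], n)
    return np.stack([np.interp(t, dist, points[:, 0]),
                     np.interp(t, dist, points[:, 1])], axis=1)

def to_coordinate_differences(points: np.ndarray) -> np.ndarray:
    """Encode a curve as per-step coordinate differences (the CAE's input form)."""
    return np.diff(points, axis=0)

def from_coordinate_differences(start: np.ndarray, diffs: np.ndarray) -> np.ndarray:
    """Invert the difference encoding by cumulative summation from the start point."""
    return np.vstack([start, start + np.cumsum(diffs, axis=0)])
```

The difference encoding is exactly invertible, so any reconstruction error comes from the autoencoder's latent bottleneck rather than from this preprocessing.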

25 pages, 8514 KB  
Article
Fatigue Life Evaluation and Structural Optimization of Rubber Damping Components in Metro Resilient Wheels
by Qiang Zhang, Zhiming Liu, Yiliang Shu, Guangxue Yang and Wenhan Deng
Polymers 2026, 18(8), 915; https://doi.org/10.3390/polym18080915 - 9 Apr 2026
Viewed by 190
Abstract
Resilient wheels are widely employed in metro vehicles to mitigate vibration and noise, in which rubber damping components play a critical role in load transmission and fatigue resistance. However, stress concentration and cyclic loading can significantly compromise their durability and service life. In this study, the structural optimization and fatigue life of rubber damping components in resilient wheels are systematically investigated based on finite element analysis and in-service metro operational data. A three-dimensional finite element model incorporating hyperelastic material behavior is developed to evaluate stress distributions under three representative conditions: press-fit assembly, straight-line operation, and curved-track operation. Based on the resulting stress fields, critical high-stress regions within the rubber component are identified and selected as targets for structural optimization. The Design of Experiments (DOE) methodology, integrated with the Isight 2022 optimization platform, is employed to determine the optimal geometric parameters that minimize the von Mises equivalent stress. Furthermore, a fatigue life prediction framework is established using actual metro service mileage data. Fatigue performance is assessed using Fe-safe 2022 software in conjunction with rubber fatigue crack propagation theory, and the results before and after optimization are systematically compared. This study demonstrates that stress concentrations in resilient wheel rubber damping components predominantly occur at fillet transition regions, governed by load transfer characteristics under press-fitting and service conditions. Through DOE-based structural optimization, the critical geometric parameters are effectively refined, leading to a significant reduction in stress levels in key regions. 
As a result, the proposed approach markedly improves fatigue performance, extending the minimum fatigue life from 1300 days to 24,322 days, thereby substantially enhancing the durability and reliability of the resilient wheel system. Full article
(This article belongs to the Section Polymer Processing and Engineering)

41 pages, 84120 KB  
Article
DDS-over-TSN Framework for Time-Critical Applications in Industrial Metaverses
by Taemin Nam, Seongjin Yun and Won-Tae Kim
Appl. Sci. 2026, 16(8), 3641; https://doi.org/10.3390/app16083641 - 8 Apr 2026
Viewed by 163
Abstract
The industrial metaverse is a digital twin space that integrates the real world with virtual environments through bidirectional synchronization. It supports critical services, such as time-sensitive machine control and large-scale collaboration, which require Time-Sensitive Networking and scalable Data Distribution Services. DDS, developed by the Object Management Group, provides excellent scalability and diverse QoS policies but struggles to guarantee transmission delay and jitter for time-critical applications. TSN, based on IEEE 802.1 standards, addresses these challenges by ensuring time-criticality. However, current research lacks comprehensive integration mechanisms for DDS and TSN, particularly from the viewpoints of semantics and system framework. Additionally, there is no adaptive QoS mapping converting the abstract DDS QoS policies to the sophisticated TSN QoS parameters. This paper presents a novel DDS-over-TSN framework that incorporates three key functions to address these challenges. First, Cross-layer QoS Mapping automates correspondences between DDS and TSN parameters, deriving technical constraints from standard documentation through retrieval-augmented generation. Second, Semantic Priority Estimation extracts substantial priority levels by utilizing language model embedding vectors as high-dimensional feature extractors. Third, Adaptive Resource Allocation performs dynamic bandwidth distribution for each priority level through reinforcement learning. Simulation results reveal over 99% mapping accuracy and 97% consistency in priority extraction. The applied Deep Reinforcement Learning paradigm allocated 99% of required resources to high-priority classes and reduced resource wastage by 15% compared to conventional methods. This methodology meets industrial requirements by ensuring both deterministic real-time performance and efficient resource isolation. Full article
(This article belongs to the Special Issue Digital Twin and IoT, 2nd Edition)

30 pages, 1724 KB  
Article
Real-Time Data Transmission and Drilling Performance: Analyses Including Data Propagation Agility in Boreholes, Drilling Parameters and Information Transmission Through MPT Systems
by Andreas Nascimento, Gustavo Henrique Romeu da Silva, Diunay Zuliani Mantegazini, Matthias Reich and Fernando G. Martins
Data 2026, 11(4), 79; https://doi.org/10.3390/data11040079 - 8 Apr 2026
Viewed by 126
Abstract
This study examines the relevance of mud pulse telemetry (MPT) systems and their intersection with drilling performance, focusing on data transmission signal propagation performance and overall operation under different drilling parameter conditions, with an additional focus on drilling fluid flow rate and downhole pressure conditions. The novelty of this study lies in the investigation of adjustments to drilling operating parameters that could potentially improve the transmission of telemetry signals during drilling, in real time, without requiring mechanical or functional modifications to the MPT system itself. Improvements in transmission performance in situations where the data rate may be limited are also addressed, presenting possible propagation velocity improvements as a counterbalancing alternative. A detailed chronological review of the technical and scientific literature covers the key analyses of pressure pulse propagation velocities relevant to data transmission. A systematic experimental approach was developed and put into practice to evaluate MPT systems with regard to trends in transmission performance, emphasizing pressure pulse propagation velocity. The laboratory-scale experiments were conducted at the Flow-loop Research Facility of the Institute of Drilling Engineering and Fluid Mining (IBF) of the Technical University Bergakademie Freiberg (TUBAF) to assess the impact of fluid flow rate (and subsequent pressure) on data transmission efficiency. Experimental results demonstrate that increasing the flow rate significantly speeds up signal propagation. In the performed experiments, for the mud siren configuration, increasing the flow rate from 15 to 25 m3/h improved the data transmission performance by at least approximately 18%, while for the positive mud pulse system, an increase in flow rate from 11.5 to 14 m3/h resulted in a propagation velocity rise of about 19%. 
The results also showed that higher concentrations of glycerin in the working fluid reduced the propagation velocity, confirming the influence of the fluid’s rheological properties on telemetry performance. In the presented case study, for a 6 bps data rate configuration and the transmission of a 40-bit string, it was demonstrated that the propagation time from downhole to the surface could represent approximately 40% of the total time demanded for transmitting the desired information (generation plus propagation time). It was verified that an increment of 0.02208 m3/s (350 gpm) could shorten eventual surveying procedures by 1–2 s, which corresponds to an equivalent gain of 1.137 bps. This is a relevant outcome, since the data transmission performance could be improved without any physical or functional alteration to the MPT system, an approach not yet analyzed in the literature or applied in industry. These results, together with the detailed literature investigation and interaction with field personnel, indicate that the drilling fluid flow rate is a critical operational parameter affecting both the telemetry signal transmission speed and the overall drilling efficiency. Increasing the flow rate can reduce survey transmission time and decrease operational exposure to drilling hazards, such as drill string sticking. The results provide quantitative information applicable to optimizing measurement-while-drilling telemetry and help support the development of integrated drilling optimization strategies that balance drilling performance with real-time data transmission assurance in deep drilling operations. Full article
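The case-study arithmetic (a 40-bit string at 6 bps, with propagation accounting for roughly 40% of the total transmission time) can be reproduced with a back-of-the-envelope model; the well depth and pulse velocity below are assumed values chosen only to illustrate the order of magnitude, not figures from the paper:

```python
def mpt_times(bits: int, rate_bps: float, depth_m: float, v_prop_ms: float):
    """Return generation time, one-way propagation time, and the
    propagation share of the total time for a mud-pulse telemetry message."""
    t_gen = bits / rate_bps        # time to modulate the bit string downhole
    t_prop = depth_m / v_prop_ms   # time for the pressure pulse to reach surface
    return t_gen, t_prop, t_prop / (t_gen + t_prop)

# 40 bits at 6 bps -> ~6.67 s generation; for an assumed 5000 m well and an
# assumed ~1100 m/s pulse velocity, propagation is ~4.5 s, i.e. ~40% of total.
t_gen, t_prop, frac = mpt_times(40, 6.0, 5000.0, 1100.0)
```

This makes concrete why raising the propagation velocity (e.g., via a higher flow rate) shortens surveys even when the modulation data rate is unchanged.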

25 pages, 4570 KB  
Article
Digital Twin Framework for Structural Health Monitoring of Transmission Towers: Integrating BIM, IoT and FEM for Wind–Flood Multi-Hazard Simulation
by Xiaoqing Qi, Huaichao Wang, Xiaoyu Xiong, Anqi Zhou, Qing Sun and Qiang Zhang
Appl. Sci. 2026, 16(8), 3620; https://doi.org/10.3390/app16083620 - 8 Apr 2026
Viewed by 146
Abstract
Transmission towers, as critical infrastructure in power systems, are frequently threatened by multiple hazards such as strong winds and flood scour. Traditional structural health monitoring methods face limitations in data feedback timeliness and mechanical interpretation, making real-time condition awareness and early warning under disaster scenarios challenging. To address these issues, this paper proposes a digital twin framework for transmission tower structures, integrating Building Information Modeling (BIM), Internet of Things (IoT) technology, and the Finite Element Method (FEM) for structural health monitoring and visual warning under wind loads and flood scour effects. The framework achieves cross-platform collaboration through the FEM Open Application Programming Interface (OAPI) and Python scripts. In the physical domain, fluctuating wind loads are simulated based on the Davenport spectrum, flood scour depth is modeled using the HEC-18 formulation, and foundation constraint degradation is represented through nonlinear spring stiffness reduction. In the FEM domain, dynamic time-history analyses are conducted to obtain structural responses. In the BIM domain, a three-level warning mechanism based on stress change rate (ΔR) is established to achieve intuitive rendering and dynamic feedback of structural damage. A 44.4 m high latticed angle steel tower is employed as the case study for validation. Results demonstrate that the simulated wind spectrum closely matches the theoretical target spectrum, confirming the validity of the load input. A critical scour evolution threshold of 40% is identified, beyond which the first two natural frequencies exhibit nonlinear decay with a maximum reduction of 80.9%. Non-uniform scour induces significant load transfer, with axial forces at leeside nodes increasing from 27 kN to 54 kN. 
During the 0–60 s wind loading process, BIM visualization accurately captures the full stress evolution from the tower base to the upper structure, showing excellent agreement with FEM results. The proposed framework establishes a closed-loop interaction mechanism of “physical sensing–digital simulation–visual warning”, effectively enhancing the timeliness and interpretability of structural health monitoring for transmission towers under multiple hazards, providing an innovative approach for intelligent disaster prevention in power infrastructure. Full article
(This article belongs to the Section Civil Engineering)
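The scour-depth modeling step can be illustrated with the simplified HEC-18 pier-scour relation cited in the abstract; the correction factors are set to 1 and all inputs are hypothetical, so this is a sketch of the formula's shape rather than the paper's calibrated foundation model:

```python
import math

def hec18_scour_depth(y1_m: float, v_ms: float, a_m: float,
                      k1: float = 1.0, k2: float = 1.0, k3: float = 1.0) -> float:
    """Simplified HEC-18 pier scour depth (m):
    ys = 2.0 * K1 * K2 * K3 * y1 * (a / y1)^0.65 * Fr^0.43,
    where y1 is approach-flow depth, v velocity, a pier width."""
    g = 9.81
    fr = v_ms / math.sqrt(g * y1_m)  # approach-flow Froude number
    return 2.0 * k1 * k2 * k3 * y1_m * (a_m / y1_m) ** 0.65 * fr ** 0.43
```

The predicted depth grows with flow velocity, which is the mechanism behind the abstract's finding that deepening scour degrades foundation constraint stiffness and shifts member loads.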

19 pages, 7072 KB  
Article
Research on Tail Rotor Load Test Flight Technology for Helicopters Based on Strain Sensor Measurement
by Shuaike Jiao, Jiahong Zheng, Kang Li and Xiaoqing Hu
Sensors 2026, 26(8), 2287; https://doi.org/10.3390/s26082287 - 8 Apr 2026
Viewed by 143
Abstract
The load characteristics of the helicopter tail rotor system are critical to flight safety and handling performance, and flight testing remains the most direct and reliable means to obtain authentic load data. In this paper, the well-established Wheatstone bridge strain measurement method is adopted to carry out accurate load testing on the helicopter tail rotor system. The tail rotor assembly mainly consists of the tail rotor shaft, pitch link, and tail rotor blades, which undertake different load transfer tasks during flight. Under actual operating conditions, the tail rotor shaft bears significant axial tension as well as combined lateral and vertical bending moments; the pitch link is primarily subjected to alternating axial tension and compression; and the tail rotor blades withstand complex loads including flapping bending, lagwise bending, and torsional moments. According to the distinct stress characteristics and force transmission paths of each component, targeted flight test maneuvers are reasonably designed. These maneuvers include steady-level flight at low, medium, and high speeds, zigzag climbing flight, near-ground side-rear flight, as well as deceleration-to-sprint and obstacle slope maneuvers specified in ADS-33E. Key flight parameters are selected for in-depth analysis to reveal the load distribution and dynamic variation patterns of the tail rotor under typical operating conditions. On this basis, a helicopter load risk test point matrix is established to identify high-risk working conditions and key monitoring positions. This study provides a solid theoretical and data foundation for subsequent flight test monitoring and structural strength verification. It effectively reduces flight test risks, improves monitoring efficiency and accuracy, and helps cut down the human, material, and financial costs associated with flight test monitoring. 
The research results can also provide important references for the design optimization and safety evaluation of helicopter tail rotor systems. Full article
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)

20 pages, 1504 KB  
Article
Decision-Support Framework for Cybersecurity Risk Assessment in EV Charging Infrastructure
by Roberts Grants, Nadezhda Kunicina, Rasa Brūzgienė, Šarūnas Grigaliūnas and Andrejs Romanovs
Energies 2026, 19(8), 1814; https://doi.org/10.3390/en19081814 - 8 Apr 2026
Viewed by 186
Abstract
Rapid expansion of electric vehicle adoption has led to increased dependence on a charging infrastructure that is tightly integrated with energy distribution systems and digital communication networks. As electric vehicle charging stations evolve into complex cyber–physical systems, cybersecurity risks pose a growing threat to grid reliability and user trust. This paper presents a hybrid decision-support framework for cybersecurity risk assessment in EV charging infrastructure that advances beyond prior multi-criteria decision-making approaches by combining interpretability with data-driven validation. Specifically, the framework integrates the Analytic Hierarchy Process (AHP) for expert-driven weighting of cybersecurity attributes with PROMETHEE for flexible threat prioritization, enabling transparent and auditable risk rankings. The framework categorizes cybersecurity criteria across four infrastructure layers—transmission, distribution, consumer, and electric vehicle charging stations—and assigns relative weights through expert-driven pairwise comparisons. PROMETHEE is then applied to rank potential cyber threats based on these weights, allowing for flexible prioritization of cybersecurity interventions. The methodology is validated using the real-world WUSTL-IIoT-2018 SCADA dataset, which includes simulated reconnaissance (network scanning), device identification, and exploitation attacks. While this dataset does not natively include OCPP 2.0 or ISO 15118 protocols, the experimental results demonstrate strong discrimination power (AUC = 0.99, recall = 95%) and provide a basis for extension to modern EVSE communication standards. The results identify critical metrics such as anomalous source packet behavior and encryption reliability as key vulnerability markers, aligning with documented EV charging attack scenarios. 
By bridging expert judgment with empirical traffic data, the proposed framework offers both technical robustness and explainability, supporting grid operators, SOC teams, and infrastructure planners in systematically assessing risks, allocating resources, and enhancing the resilience of EV charging ecosystems against evolving cyber threats. Full article
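The AHP weighting step can be sketched with the standard row geometric-mean approximation of the priority vector; the three criteria and Saaty-scale pairwise judgments below are hypothetical examples, not the paper's expert data:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate the AHP priority vector via the row geometric-mean method:
    each criterion's weight is the normalized geometric mean of its row."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# Hypothetical judgments: transmission-layer risk vs. distribution-layer risk
# vs. EVSE-layer risk (a value of 3 means "moderately more important").
judgments = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
weights = ahp_weights(judgments)
```

The resulting weights would then feed PROMETHEE preference flows to rank threats; the geometric-mean method is a common, auditable stand-in for the full eigenvector computation.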

27 pages, 1073 KB  
Article
An MMSE-Optimized Pre-Rake Receiver with a Comparative Analysis of Channel Estimation Methods for Multipath Channels
by Aoba Morimoto, Jaesang Cha, Incheol Jeong and Chang-Jun Ahn
Electronics 2026, 15(7), 1540; https://doi.org/10.3390/electronics15071540 - 7 Apr 2026
Viewed by 109
Abstract
In Time Division Duplex (TDD) Direct-Sequence Code Division Multiple Access (DS/CDMA) architectures, Pre-Rake filtering serves as a powerful transmitter-side strategy to alleviate receiver hardware constraints by leveraging channel reciprocity. Nevertheless, rapid channel fluctuations induced by high Doppler spreads critically undermine this reciprocity assumption. This failure is primarily driven by the unavoidable latency between uplink reception and downlink transmission, leading to severe performance deterioration. To address these challenges and enhance system robustness in modern high-speed scenarios, we propose an improved hybrid transceiver architecture. This scheme integrates multiplexed Pre-Rake processing with a Matched Filter-based Rake receiver and employs a Minimum Mean Square Error (MMSE) equalizer to suppress the severe Inter-Symbol Interference (ISI) and Multi-User Interference (MUI). Furthermore, we conduct a comparative analysis of channel estimation methods tailored for a 10 Mbps high-speed transmission environment. Our investigation reveals that while complex quadratic interpolation is often prioritized in low-data-rate studies, simple averaging is sufficient and even superior in high-speed communications. This is because the shortened slot duration allows simple averaging to effectively track channel variations while avoiding the noise overfitting associated with higher-order interpolation. The simulation results demonstrate that the proposed MMSE-optimized architecture achieves superior Bit Error Rate (BER) performance, providing a practical and computationally efficient solution for next-generation mobile networks. Full article
(This article belongs to the Section Microwave and Wireless Communications)

18 pages, 835 KB  
Article
Prism-Based Mapping of 6G Use Cases Integrating Technical Requirements and Multidimensional Service Classification
by Sunhye Kim, Yoon Seo, Seung-Hoon Hwang and Byungun Yoon
Systems 2026, 14(4), 404; https://doi.org/10.3390/systems14040404 - 7 Apr 2026
Viewed by 218
Abstract
Purpose: With the advent of sixth-generation (6G) communication technology, systematic mapping of its use cases to associated technical requirements has become essential for accelerating standardization, guiding R&D investment, and informing policy formulation. Methods: This study consolidated 65 use case scenarios from key academic and institutional 6G sources into 21 representative cases. A three-round Delphi-based expert assessment, employing a five-point Likert scale and interquartile-range-based consensus monitoring, was used to assign primary and secondary technical requirements across six core dimensions: immersive communication, massive communication, hyper-reliable low-latency communication, integrated sensing and communication, integrated artificial intelligence and communication (IAAC), and ubiquitous connectivity. A three-dimensional (3D) prism-based visualization framework was subsequently developed to represent the interdependencies among these requirements. Results: IAAC and massive communication emerged as the most critical requirements, each functioning as a primary or secondary driver across most use cases. The prism framework revealed hierarchical and complementary relationships among the six dimensions that conventional 2D wheel diagrams cannot adequately capture. Furthermore, a nine-criterion multidimensional classification framework, encompassing data transmission mode, decision-making mode, communication flow, interaction type, device type, deployment type, human activity innovation, user type, and personalization level, was developed, offering industry-specific guidance for service design. Collectively, the proposed framework supports user-centric design, informs strategic technology planning, and contributes to policy development while acknowledging existing limitations in quantitative mapping and economic analysis. Full article
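The interquartile-range-based consensus monitoring described in the abstract can be sketched as follows; the IQR ≤ 1 cutoff on a five-point Likert scale is a common Delphi convention assumed here, not necessarily the paper's exact rule:

```python
import statistics

def iqr(ratings) -> float:
    """Interquartile range (Q3 - Q1) of a round of Likert ratings."""
    q1, _q2, q3 = statistics.quantiles(sorted(ratings), n=4)
    return q3 - q1

def has_consensus(ratings, threshold: float = 1.0) -> bool:
    """Assumed Delphi stopping rule: consensus when the IQR is at or
    below the threshold; otherwise the item goes to another round."""
    return iqr(ratings) <= threshold
```

Items failing the check are re-circulated with anonymized feedback until the panel converges or the round limit (three rounds in this study) is reached.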

15 pages, 7541 KB  
Article
Two Compact T-Coil-Based Topologies for Wideband Four-Way Power Division in Ka-Band
by Qianran Zhang, Weiqing Wang, Fangkai Wang, Xudong Wang and Pufeng Chen
Electronics 2026, 15(7), 1521; https://doi.org/10.3390/electronics15071521 - 4 Apr 2026
Viewed by 242
Abstract
This paper presents two broadband four-way power dividers based on a novel T-coil topology, operating in the 22–32 GHz band (covering the K/Ka bands). Type I adopts a cascaded power division structure, while Type II employs a direct-feed integrated architecture. The innovation lies in the introduction of isolating capacitors at the input and output ports, which significantly shortens the critical transmission line lengths in both topologies. This effectively reduces the equivalent inductance and raises the self-resonant frequency, achieving wideband response while maintaining structural simplicity, compact size, and ease of integration. Both circuits were fabricated using a standard 45 nm CMOS process. The measured core chip areas (excluding pads) are only 0.125 mm2 for Type I and 0.066 mm2 for Type II, demonstrating excellent integration density. Through even-mode and odd-mode theoretical analysis and full-wave electromagnetic simulation verification, both power dividers exhibit good impedance matching and port isolation across the target frequency band. Measurement results further confirm their performance: across the entire 22–32 GHz band, both power dividers achieve a return loss better than 11 dB and isolation exceeding 15 dB; the insertion loss is 1.1–1.4 dB for Type I and 0.8–1.3 dB for Type II; the amplitude imbalance is below ±0.3 dB and ±0.1 dB, respectively; and the phase imbalance is less than ±5° and ±3°, respectively. All measured data show good agreement with simulation results. In summary, Type I offers advantages in layout flexibility and isolation performance, while Type II excels in insertion loss and chip size. Both provide practical circuit solutions for broadband, high-performance, and compact power division systems. Full article

22 pages, 3947 KB  
Article
A Methodology for Testing the Size and the Location of Battery Energy Storage Systems on Transmission Grids
by Nicola Collura, Fabio Massaro, Enrica Di Mambro, Salvatore Paradiso and Francesco Montana
Electricity 2026, 7(2), 35; https://doi.org/10.3390/electricity7020035 - 4 Apr 2026
Viewed by 200
Abstract
A replicable methodology for testing the size and placement of Battery Energy Storage Systems connected to high-voltage transmission networks is presented in this study. The proposed approach involves the power flow analysis inside a Renewable Energy Zone, namely a high-renewable area prone to grid congestion during peak generation periods, based on time-series hourly analysis over a critical month. The model includes detailed operational descriptions such as lines ampacity, battery state of charge limits, round-trip efficiency, self-discharge behavior, and ramp rate restrictions. The methodology distinguishes itself by its simplicity, flexibility, and use of open-source tools, making it a valuable asset for supporting future transmission planning in high-renewable-energy scenarios. The model was developed in Python (version 3.12) using the open-source Pandapower library, introducing an innovative constraint management criterion, and validated against real data provided by the national Transmission System Operator. The approach was then applied to a portion of the Sicilian grid with massive wind and solar penetration. Results show that strategic allocation of batteries leads to a significant reduction in line overloads (up to 13 GWh mitigated in one month), improves the dispatch of renewable energy generated within the Renewable Energy Zone and allows a more sustainable exercise of the power system. Full article
(This article belongs to the Special Issue Feature Papers to Celebrate the First Impact Factor of Electricity)
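The battery operational constraints listed in the abstract (round-trip efficiency, self-discharge, ramp-rate restrictions, state-of-charge limits) suggest a simple per-step state update; all parameter values and the sign convention below are illustrative assumptions, not the paper's Pandapower model:

```python
def soc_step(soc: float, p_set_kw: float, dt_h: float, cap_kwh: float,
             eta_rt: float = 0.90, self_disc_per_h: float = 0.001,
             p_prev_kw: float = 0.0, ramp_kw_per_h: float = 500.0):
    """Advance battery state of charge by one time step.
    p_set_kw > 0 means charging; SOC is a fraction of capacity."""
    # enforce the ramp-rate restriction relative to the previous setpoint
    p = max(min(p_set_kw, p_prev_kw + ramp_kw_per_h * dt_h),
            p_prev_kw - ramp_kw_per_h * dt_h)
    eta = eta_rt ** 0.5  # split round-trip efficiency evenly between directions
    energy_kwh = p * dt_h * eta if p >= 0 else p * dt_h / eta
    soc = soc * (1.0 - self_disc_per_h * dt_h) + energy_kwh / cap_kwh
    return min(max(soc, 0.0), 1.0), p  # clamp to SOC limits
```

Repeating this update over the hourly time series, subject to line-ampacity checks from the power flow, is the kind of constraint management the methodology automates.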

21 pages, 1172 KB  
Article
An Examination of LPWAN Security in Maritime Applications
by Zachary Larkin and Chuck Easttom
J. Cybersecur. Priv. 2026, 6(2), 65; https://doi.org/10.3390/jcp6020065 - 3 Apr 2026
Viewed by 245
Abstract
LoRaWAN’s role in global maritime logistics has allowed for efficient monitoring of ships and cargo, but it also comes with critical cybersecurity vulnerabilities. Experimental validation of three attack vectors—replay attacks, narrowband jamming, and metadata inference—is conducted using a reproducible digital-twin LoRaWAN dataset reflecting Rotterdam port-like operational patterns (N = 20,000 baseline transmissions). Using controlled simulations and Kolmogorov–Smirnov statistical analysis, we show that: (1) replay attacks are feasible under Activation by Personalization (ABP) configurations lacking enforced frame-counter validation and exhibit no univariate separation from legitimate traffic under Kolmogorov–Smirnov analysis (p > 0.46 for all evaluated radio features); (2) narrowband jamming leads to significant SNR degradation (p = 2.36 × 10−5) on targeted channels without inducing broad distributional anomalies across other radio features; and (3) metadata-only analysis supports elevated metadata-based re-identification susceptibility (median Rd = 0.834), indicating high predictability under passive observation, which can reveal operationally relevant signals even when AES-128 is employed. Our proposed layered mitigation framework consists of mandatory Over-the-Air Activation (OTAA), cryptographic key rotation, channel diversity incorporating Adaptive Data Rate (ADR), gateway hardening, and protocol-level enforcement considerations, customized for maritime LPWAN scenarios. We provide experiment-backed evidence and actionable recommendations to connect academic LPWAN security research to industrial maritime practice. Full article
(This article belongs to the Special Issue Building Community of Good Practice in Cybersecurity)
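The replay weakness described for ABP devices reduces to missing frame-counter enforcement on the receiver side. The following is a simplified stand-alone sketch of that check, not a full LoRaWAN MAC-layer implementation; the class name and frame fields are hypothetical stand-ins.

```python
# Minimal sketch of LoRaWAN-style frame-counter validation. A captured frame
# replayed verbatim carries its old counter value, so enforcing a strictly
# increasing counter per device rejects it even though the AES-128 MIC would
# still verify. Frame format here is a simplified stand-in.

class FrameCounterGuard:
    def __init__(self):
        self._last = {}          # dev_addr -> highest accepted frame counter

    def accept(self, dev_addr: str, fcnt: int) -> bool:
        """Accept a frame only if its counter strictly exceeds the last one."""
        if fcnt <= self._last.get(dev_addr, -1):
            return False         # duplicate or replayed frame
        self._last[dev_addr] = fcnt
        return True

guard = FrameCounterGuard()
fresh  = guard.accept("26011F42", 7)   # first frame with FCnt = 7: accepted
replay = guard.accept("26011F42", 7)   # identical replayed frame: rejected
```

ABP deployments that reset or ignore this counter accept the replayed frame, which is the attack surface the abstract validates; OTAA re-derives session keys and counters on each join, closing it.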

16 pages, 4249 KB  
Article
Analysis Method for the Grid at the Sending End of Renewable Energy Scale Effect Under Typical AC/DC Transmission Scenarios
by Zheng Shi, Yonghao Zhang, Yao Wang, Yan Liang, Jiaojiao Deng and Jie Chen
Electronics 2026, 15(7), 1382; https://doi.org/10.3390/electronics15071382 - 26 Mar 2026
Abstract
In the context of the coordinated development of high-proportion renewable energy integration and alternating current/direct current (AC/DC) hybrid transmission, the sending-end power grid faces challenges such as decreased system strength, contracted stability boundaries, and difficulties in covering high-risk operating conditions. This paper proposes a new renewable energy scale impact analysis method that integrates typical scenario construction, scale-ladder comparison, and prediction-driven time-series injection in response to the operational constraints of AC/DC transmission. In terms of method implementation, firstly, a two-layer typical scenario system is constructed under unified transmission constraints and fixed grid boundaries: a regular benchmark scenario covers the main operating range, and a set of high-risk scenarios near the boundaries is obtained through multi-objective intelligent search, which is then refined through clustering to form a computable stress-test scenario library. Here, the boundary scenarios are generated by a multi-objective search that simultaneously drives multiple key section load rates towards their limits, subject to AC power-flow feasibility and operational constraints, and the resulting Pareto candidates are reduced into a compact stress-test library by clustering. Secondly, a ladder scenario with increasing renewable energy scale is constructed, and cross-scale comparisons are carried out within the same scenario system to extract the scale effect and critical laws of key safety indicators. Finally, data resampling and Gated Recurrent Unit multi-step prediction are introduced to generate wind power output time series, enabling the temporal mapping of prediction results to scenario injection quantities and constructing a closed-loop input interface of “prediction–scenario–grid indicators”.
The results demonstrate that the proposed hierarchical framework, under unified AC/DC export constraints, can effectively construct a compact stress-test scenario library with enhanced boundary-risk coverage and can reveal how transient voltage security evolves across renewable expansion scales. By coupling boundary-oriented scenario construction, cross-scale comparable assessment, and forecasting-driven time-series injection, the framework improves engineering interpretability and practical applicability compared with conventional scenario sampling/reduction workflows. For the forecasting module, the Gated Recurrent Unit (GRU) model achieves MAPE = 8.58% and RMSE = 104.32 kW on the test set, outperforming Linear Regression (LR), Random Forest (RF), and Support Vector Regression (SVR) in multi-step-ahead prediction. Full article
(This article belongs to the Special Issue Applications of Computational Intelligence, 3rd Edition)
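The forecasting metrics quoted for the GRU model (MAPE in percent, RMSE in kW) follow standard definitions, which can be computed directly. The sketch below uses toy wind-power values for illustration; the series and results are hypothetical, not the study's data.

```python
import math

def mape(actual, predicted):
    """Mean absolute percentage error, in percent (zero actuals excluded)."""
    terms = [abs((a - p) / a) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(terms) / len(terms)

def rmse(actual, predicted):
    """Root mean squared error, in the units of the series (kW here)."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

# Toy hourly wind-power series (kW) and a hypothetical multi-step forecast.
y_true = [1200.0, 1350.0, 900.0, 1100.0]
y_pred = [1150.0, 1400.0, 950.0, 1050.0]
print(round(mape(y_true, y_pred), 2), round(rmse(y_true, y_pred), 2))
```

Note that MAPE weights each hour by its relative error, so low-output hours dominate it, while RMSE penalizes large absolute misses; reporting both, as the abstract does, covers both failure modes.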

30 pages, 322 KB  
Article
Resource Misallocation, Digital Economy and the Sustainability of Innovation Capacity: Mechanisms, Empirical Tests and China’s Experience
by Jia Guo, Ying-Kai Yin and Xiong-Wei He
Sustainability 2026, 18(7), 3232; https://doi.org/10.3390/su18073232 - 26 Mar 2026
Abstract
Against the backdrop of the United Nations Sustainable Development Goals (SDGs), innovation-driven development serves as the core engine of long-term sustainable economic development, while resource misallocation has emerged as a critical bottleneck constraining sustainable innovation and coordinated regional development. Grounded in the neoclassical theory of factor allocation, this paper incorporates capital misallocation, labor misallocation, and the digital economy into a unified analytical framework. Using China’s provincial panel data spanning 2001 to 2024, we systematically investigate the effects, underlying transmission mechanisms, and regional heterogeneity of resource misallocation and the digital economy on scientific and technological innovation through benchmark regression, robustness tests, and heterogeneity analysis. The results show that resource misallocation exerts a significant and robust inhibitory effect on technological innovation, with the inhibitory effect of capital misallocation being more pronounced than that of labor misallocation. The digital economy has a significant positive driving effect on technological innovation, and it can also indirectly boost technological innovation by alleviating resource misallocation; its mitigating effect on resource misallocation presents obvious structural differences, with a stronger correction effect on capital misallocation than on labor misallocation. Economic growth and technological innovation form a mutually reinforcing “growth–innovation” virtuous cycle. In addition, the innovation effects of both resource misallocation and the digital economy exhibit significant regional heterogeneity, with the digital economy’s innovation-driven effect and misallocation-mitigating effect notably stronger in eastern China than in the central and western regions.
The theoretical contribution of this paper lies in constructing a transmission mechanism framework of “digital economy to resource misallocation to technological innovation”, which enriches the connotations of factor allocation and innovation theories. Its practical value is to provide policymakers with differentiated development paths for the digital economy and optimization strategies for factor allocation, thus facilitating the effective implementation of the innovation-driven development strategy. Full article
26 pages, 19175 KB  
Article
Molecular Phylogeny of the Genus Cymbosellaphora (Bacillariophyceae, Cymbellales): Evolutionary Significance of Areolae Morphology vs. Structure of Pore Occlusions
by Andrei Mironov, Anton Glushchenko, Natalia Tseplik, Yevhen Maltsev, Sergei Genkal and Maxim Kulikovskiy
Phycology 2026, 6(2), 34; https://doi.org/10.3390/phycology6020034 - 25 Mar 2026
Abstract
This study investigates the molecular phylogeny and morphology of the genus Cymbosellaphora (Bacillariophyceae, Cymbellales). A strain of Cymbosellaphora geisslerae isolated from the Plotnikova River (Kamchatka Territory, Russia) was studied using light, scanning, and transmission electron microscopy, as well as molecular methods. Phylogenetic analysis based on 18S rDNA and rbcL gene sequences revealed that Cymbosellaphora geisslerae belongs to the order Cymbellales and forms an alliance with representatives of the genera Gomphonella and Reimeria. The results of the molecular study are supported by morphology. In the course of the molecular analysis, we discuss the diversity of valve morphology across Cymbosellaphora, Gomphonella, Reimeria, and related genera. As a result, a new type of pore occlusion, typical for Cymbosellaphora, is proposed; the diagnoses of the genus Cymbosellaphora and the species Cymbosellaphora geisslerae are emended; and this species is epitypified. Most importantly, our data indicate that the concepts of areolae morphology and pore-occlusion structure in the order Cymbellales might require critical evaluation. Full article
