Search Results (116)

Search Parameters:
Keywords = stochastic block model

15 pages, 6914 KB  
Article
Deep Learning-Based Inverse Design of Stochastic-Topology Metamaterials for Radar Cross Section Reduction
by Chao Zhang, Chunrong Zou, Shaojun Guo, Yanwen Zhao and Tongsheng Shen
Materials 2025, 18(21), 4841; https://doi.org/10.3390/ma18214841 - 23 Oct 2025
Viewed by 18
Abstract
Electromagnetic (EM) metamaterials have a wide range of applications due to their unique properties, but their design is often based on specific topological structures, which come with certain limitations. Designing with stochastic topologies can provide more diverse EM properties. However, this requires experienced designers to search and optimise in a vast design space, which is time-consuming and requires substantial computational resources. In this paper, we employ a deep learning surrogate model to replace time-consuming full-wave simulations and quickly establish the mapping relationship between the metamaterial structure and its electromagnetic response. The proposed framework integrates a Convolutional Block Attention Module-enhanced Variational Autoencoder (CBAM-VAE) with a Transformer-based predictor. Incorporating CBAM into the VAE architecture significantly enhances the model’s capacity to extract and reconstruct critical structural features of metamaterials. The Transformer predictor utilises an encoder-only configuration that leverages the sequential data characteristics, enabling accurate prediction of electromagnetic responses from latent variables while significantly enhancing computational efficiency. The dataset is randomly generated based on the filling rate of unit cells, requiring only a small fraction of samples compared to the full design space for training. We employ the trained model for the inverse design of metamaterials, enabling the rapid generation of two cells for 1-bit coding metamaterials. Compared to a similarly sized metallic plate, the designed coding metamaterial reduces the radar cross-section (RCS) by over 10 dB from 6 to 18 GHz. Simulation and experimental measurement results validate the reliability of this design approach, providing a novel perspective for the design of EM metamaterials. Full article
(This article belongs to the Section Materials Simulation and Design)
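
As background, the Convolutional Block Attention Module named in the abstract is a standard published building block. The sketch below is a minimal PyTorch rendering of such a CBAM block (channel attention followed by spatial attention) of the sort that could sit inside a VAE encoder; it is a generic illustration, not the authors' CBAM-VAE, and the tensor sizes and reduction ratio are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""
    def __init__(self, channels, reduction=8, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1, bias=False),
        )
        # Spatial attention: 7x7 conv over the channel-wise average and maximum maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = self.mlp(torch.mean(x, dim=(2, 3), keepdim=True))
        mx = self.mlp(torch.amax(x, dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)                      # channel attention
        spatial_in = torch.cat([x.mean(dim=1, keepdim=True),
                                x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(spatial_in))   # spatial attention

# Toy usage: refine feature maps of a metamaterial unit-cell image inside an encoder.
feats = torch.randn(4, 32, 16, 16)   # (batch, channels, H, W), illustrative values
print(CBAM(32)(feats).shape)         # torch.Size([4, 32, 16, 16])
```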

10 pages, 635 KB  
Article
Impact of Homophily in Adherence to Anti-Epidemic Measures on the Spread of Infectious Diseases in Social Networks
by Piotr Bentkowski and Tomasz Gubiec
Entropy 2025, 27(10), 1071; https://doi.org/10.3390/e27101071 - 15 Oct 2025
Viewed by 205
Abstract
We investigate how homophily in adherence to anti-epidemic measures affects the final size of epidemics in social networks. Using a modified SIR model, we divide agents into two behavioral groups—compliant and non-compliant—and introduce transmission probabilities that depend asymmetrically on the behavior of both the infected and susceptible individuals. We simulate epidemic dynamics on two types of synthetic networks with tunable inter-group connection probability: stochastic block models (SBM) and networks with triadic closure (TC) that better capture local clustering. Our main result reveals a counterintuitive effect: under conditions where compliant infected agents significantly reduce transmission, increasing the separation between groups may lead to a higher fraction of infections in the compliant population. This paradoxical outcome emerges only in networks with clustering (TC), not in SBM, suggesting that local network structure plays a crucial role. These findings highlight that increasing group separation does not always confer protection, especially when behavioral traits amplify within-group transmission. Full article
(This article belongs to the Special Issue Spreading Dynamics in Complex Networks)
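
A minimal sketch of the setup the abstract describes, assuming a two-block stochastic block model in networkx and a discrete-time SIR process whose transmission probability depends on the compliance group of both the infected and the susceptible node. The group sizes, connection probabilities, and the transmission matrix are illustrative values, not the paper's, and the triadic-closure variant is omitted.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)

# Two behavioural groups: 0 = compliant, 1 = non-compliant.
sizes = [500, 500]
p_in, p_out = 0.02, 0.002                      # lower p_out = stronger group separation
G = nx.stochastic_block_model(sizes, [[p_in, p_out], [p_out, p_in]], seed=1)
group = np.array([0] * sizes[0] + [1] * sizes[1])

# Transmission probability beta[g_infected, g_susceptible]: compliant infected
# nodes transmit much less (illustrative numbers).
beta = np.array([[0.02, 0.02],
                 [0.10, 0.10]])
gamma = 0.2                                    # recovery probability per time step

state = np.zeros(G.number_of_nodes(), dtype=int)     # 0 = S, 1 = I, 2 = R
state[rng.choice(G.number_of_nodes(), 10, replace=False)] = 1

while (state == 1).any():
    new_state = state.copy()
    for i in np.flatnonzero(state == 1):
        for j in G.neighbors(i):
            if state[j] == 0 and rng.random() < beta[group[i], group[j]]:
                new_state[j] = 1               # infection along an edge
        if rng.random() < gamma:
            new_state[i] = 2                   # recovery
    state = new_state

for g in (0, 1):
    print(f"final infected fraction, group {g}: {(state[group == g] == 2).mean():.3f}")
```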

22 pages, 4003 KB  
Article
Numerical Modelling of Rock Fragmentation in Landslide Propagation: A Test Case
by Claudia Zito, Massimo Mangifesta, Mirko Francioni, Luigi Guerriero, Diego Di Martire, Domenico Calcaterra, Corrado Cencetti, Antonio Pasculli and Nicola Sciarra
Geosciences 2025, 15(9), 354; https://doi.org/10.3390/geosciences15090354 - 7 Sep 2025
Viewed by 559
Abstract
Landslides and rockfalls can negatively impact human activities and cause radical changes to the surrounding environment. For example, they can destroy entire buildings and roadway infrastructure, block waterways and create sudden dams, resulting in upstream flooding and increased flood risk downstream. In extreme cases, they can even cause loss of life. External factors such as weathering, vegetation and mechanical stress alterations play a decisive role in their evolution. These actions can reduce rock strength, adversely affecting the slope’s ability to withstand failure. For rockfalls, this process also affects fragmentation, creating variations in the size, shape and volume of detached blocks, which influences propagation and impact on the slope. In this context, the Morino-Rendinara landslide is a clear example of rockfall propagation influenced by fragmentation. In this case, fragmentation results from tectonic stresses acting on the materials as well as specific climatic conditions affecting rock mass properties. This study explores how different fragmentation scales influence both velocity and landslide propagation along the slope. Using numerical models based on a lumped-mass approach and stochastic analyses, various scenarios of rock material fracturing were examined and their impact on runout was assessed. The scenarios differed only in the fragmentation degree and the random seed sets used at the start of the simulations, which were carried out using the Rock-GIS tool. The results suggest that rock masses with high fracturing show reduced cohesion along joints and cracks, which significantly lowers their shear strength and makes them more prone to failure. Increased fragmentation further decreases the bonding between rock blocks, thereby accelerating landslide propagation. Conversely, less fragmented rocks retain higher resistance, which limits the extent of movement. These processes are influenced by uncertainties related to the distribution and impact of different alteration grades, resulting from variable tectonic stresses and/or atmospheric weathering. Therefore, a stochastic distribution model was developed to integrate the results of all simulations and to reconstruct both the landslide propagation and the evolution of its deposits. This study emphasizes the critical role of fragmentation and the volume involved in rockfalls and their runout behaviour. Furthermore, the method provides a framework for enhancing risk assessment in complex geological environments and for developing mitigation strategies, particularly regarding runout distance and block size. Full article
(This article belongs to the Section Natural Hazards)

23 pages, 8957 KB  
Article
Geometallurgical Cluster Creation in a Niobium Deposit Using Dual-Space Clustering and Hierarchical Indicator Kriging with Trends
by João Felipe C. L. Costa, Fernanda G. F. Niquini, Claudio L. Schneider, Rodrigo M. Alcântara, Luciano N. Capponi and Rafael S. Rodrigues
Minerals 2025, 15(7), 755; https://doi.org/10.3390/min15070755 - 19 Jul 2025
Viewed by 610
Abstract
Alkaline carbonatite complexes are formed by magmatic, hydrothermal, and weathering geological events, which modify the minerals present in the rocks, resulting in ores with varied metallurgical behavior. To better spatially distinguish ores with distinct plant responses, creating a 3D geometallurgical block model was necessary. To establish the clusters, four different algorithms were tested: K-Means, Hierarchical Agglomerative Clustering, dual-space clustering (DSC), and clustering by autocorrelation statistics. The chosen method was DSC, which can consider the multivariate and spatial aspects of data simultaneously. To better understand each cluster’s mineralogy, an XRD analysis was conducted, shedding light on why each cluster performs differently in the plant: cluster 0 contains high magnetite content, explaining its strong magnetic yield; cluster 3 has low pyrochlore, resulting in reduced flotation yield; cluster 2 shows high pyrochlore and low gangue minerals, leading to the best overall performance; cluster 1 contains significant quartz and monazite, indicating relevance for rare earth elements. A hierarchical indicator kriging workflow incorporating a stochastic partial differential equation (SPDE) trend model was applied to spatially map these domains. This improved the deposit’s circular geometry reproduction and better represented the lithological distribution. The elaborated model allowed the identification of four geometallurgical zones with distinct mineralogical profiles and processing behaviors, leading to a more robust model for operational decision-making. Full article
(This article belongs to the Special Issue Geostatistical Methods and Practices for Specific Ore Deposits)

21 pages, 447 KB  
Article
Aerodynamic Design of Wind Turbine Blades Using Multi-Fidelity Analysis and Surrogate Models
by Rosalba Cardamone, Riccardo Broglia, Francesco Papi, Franco Rispoli, Alessandro Corsini, Alessandro Bianchini and Alessio Castorrini
Int. J. Turbomach. Propuls. Power 2025, 10(3), 16; https://doi.org/10.3390/ijtpp10030016 - 16 Jul 2025
Viewed by 962
Abstract
A standard approach to design begins with scaling up state-of-the-art machines to new target dimensions, moving towards larger rotors with lower specific energy to maximize revenue and enable power production in lower wind speed areas. This trend is particularly crucial in floating offshore wind in the Mediterranean Sea, where the high levelized cost of energy poses significant risks to the sustainability of investments in new projects. In this context, the conventional approach of scaling up machines designed for fixed foundations and strong offshore winds may not be optimal. Additionally, modern large-scale wind turbines for offshore applications face challenges in achieving high aerodynamic performance in thick root regions. This study proposes a holistic optimization framework that combines multi-fidelity analyses and tools to address the new challenges in wind turbine rotor design, accounting for the novel demands of this application. The method is based on a modular optimization framework for the aerodynamic design of a new wind turbine rotor, where the cost function block is defined with the aid of a model reduction strategy. The link between the full-order model required to evaluate the target rotor’s performance, the physical aspects of blade aerodynamics, and the optimization algorithm that needs several evaluations of the cost function is provided by the definition of a surrogate model (SM). An intelligent SM definition strategy is adopted to minimize the computational effort required to build a reliable model of the cost function. The strategy is based on a self-adaptive, automatic refinement of the training space, while the particular SM is defined by the use of stochastic radial basis functions. The goal of this paper is to describe the new aerodynamic design strategy, its performance, and results, presenting a case study of 15 MW wind turbine blades optimized for specific deepwater sites in the Mediterranean Sea. Full article
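
A toy sketch of the surrogate-modelling idea outlined above: a radial-basis-function interpolant stands in for an expensive full-order cost evaluation, and the training set is refined adaptively. The 1-D cost function and the simple infill rule below are placeholders for the paper's blade-design cost function and its self-adaptive strategy, not a reproduction of them.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_cost(x):
    """Stand-in for a full-order aerodynamic evaluation of a candidate design."""
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = np.linspace(-2, 2, 5)[:, None]          # small initial design of experiments
y = expensive_cost(X).ravel()

for _ in range(15):
    sm = RBFInterpolator(X, y, kernel="thin_plate_spline")   # surrogate of the cost
    cand = rng.uniform(-2, 2, (200, 1))                      # cheap candidate designs
    dist = np.abs(cand - X.T).min(axis=1)                    # distance to nearest sample
    # Placeholder infill rule: favour low predicted cost and unexplored regions,
    # then evaluate the true cost only at the chosen point.
    score = sm(cand) - 0.5 * dist
    x_new = cand[[np.argmin(score)]]
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_cost(x_new).ravel())

sm = RBFInterpolator(X, y, kernel="thin_plate_spline")
grid = np.linspace(-2, 2, 400)[:, None]
best = grid[np.argmin(sm(grid))][0]
print(f"surrogate minimizer x = {best:.3f}, true cost there = {expensive_cost(best):.3f}")
```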

20 pages, 5700 KB  
Article
Multimodal Personality Recognition Using Self-Attention-Based Fusion of Audio, Visual, and Text Features
by Hyeonuk Bhin and Jongsuk Choi
Electronics 2025, 14(14), 2837; https://doi.org/10.3390/electronics14142837 - 15 Jul 2025
Viewed by 1397
Abstract
Personality is a fundamental psychological trait that exerts a long-term influence on human behavior patterns and social interactions. Automatic personality recognition (APR) has exhibited increasing importance across various domains, including Human–Robot Interaction (HRI), personalized services, and psychological assessments. In this study, we propose a multimodal personality recognition model that classifies the Big Five personality traits by extracting features from three heterogeneous sources: audio processed using Wav2Vec2, video represented as Skeleton Landmark time series, and text encoded through Bidirectional Encoder Representations from Transformers (BERT) and Doc2Vec embeddings. Each modality is handled through an independent Self-Attention block that highlights salient temporal information, and these representations are then summarized and integrated using a late fusion approach to effectively reflect both the inter-modal complementarity and cross-modal interactions. Compared to traditional recurrent neural network (RNN)-based multimodal models and unimodal classifiers, the proposed model achieves an improvement of up to 12 percent in the F1-score. It also maintains a high prediction accuracy and robustness under limited input conditions. Furthermore, a visualization based on t-distributed Stochastic Neighbor Embedding (t-SNE) demonstrates clear distributional separation across the personality classes, enhancing the interpretability of the model and providing insights into the structural characteristics of its latent representations. To support real-time deployment, a lightweight thread-based processing architecture is implemented, ensuring computational efficiency. By leveraging deep learning-based feature extraction and the Self-Attention mechanism, we present a novel personality recognition framework that balances performance with interpretability. The proposed approach establishes a strong foundation for practical applications in HRI, counseling, education, and other interactive systems that require personalized adaptation. Full article
(This article belongs to the Special Issue Explainable Machine Learning and Data Mining)
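
A minimal PyTorch sketch of the fusion pattern described in the abstract: each modality passes through its own self-attention block, the attended sequences are pooled over time, and the pooled vectors are concatenated for a late-fusion classifier. The feature dimensions, sequence lengths, and single-layer head are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ModalityAttention(nn.Module):
    """Independent self-attention block that summarizes one modality's sequence."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                             # x: (batch, time, dim)
        attended, _ = self.attn(x, x, x)
        return self.norm(x + attended).mean(dim=1)    # temporal pooling

class LateFusionAPR(nn.Module):
    """Late fusion of audio, visual, and text features for Big Five classification."""
    def __init__(self, dims=(64, 32, 96), n_traits=5):
        super().__init__()
        self.blocks = nn.ModuleList([ModalityAttention(d) for d in dims])
        self.head = nn.Linear(sum(dims), n_traits)

    def forward(self, audio, video, text):
        pooled = [blk(x) for blk, x in zip(self.blocks, (audio, video, text))]
        return self.head(torch.cat(pooled, dim=-1))   # one logit per trait

model = LateFusionAPR()
logits = model(torch.randn(2, 50, 64), torch.randn(2, 30, 32), torch.randn(2, 20, 96))
print(logits.shape)   # torch.Size([2, 5])
```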

34 pages, 3299 KB  
Project Report
On Control Synthesis of Hydraulic Servomechanisms in Flight Controls Applications
by Ioan Ursu, Daniela Enciu and Adrian Toader
Actuators 2025, 14(7), 346; https://doi.org/10.3390/act14070346 - 14 Jul 2025
Viewed by 568
Abstract
This paper presents some of the most significant findings in the design of a hydraulic servomechanism for flight controls, which were primarily achieved by the first author during his activity in an aviation institute. These results are grouped into four main topics. The first outlines a classical theory, from the 1950s–1970s, for the analysis of nonlinear automatic systems, namely the issue of absolute stability. The uninformed public may be misled by the adjective “absolute”. This is not a “maximalist” solution of stability but rather highlights in the system of equations a nonlinear function that describes, for the case of hydraulic servomechanisms, the flow-control dependence in the distributor spool. This function is odd, and it is therefore located in quadrants 1 and 3. The decision regarding stability is made within the so-called Lurie problem and is materialized by a matrix inequality, called the Lefschetz condition, which must be satisfied by the parameters of the electrohydraulic servomechanism and also by the components of the control feedback vector. Another approach starts from a classical theorem of V. M. Popov, extended in a stochastic framework by T. Morozan and I. Ursu, which ends with the description of the local and global spool valve flow-control characteristics that ensure stability in the large with respect to bounded perturbations for the mechano-hydraulic servomechanism. We add that a conjecture regarding the more pronounced flexibility of mathematical models in relation to mathematical instruments (theories) was used. The second topic emphasizes the importance of the impedance characteristic of the mechano-hydraulic servomechanism in preventing flutter of the flight controls. Impedance, also called dynamic stiffness, is defined as the ratio, in a dynamic regime, between the output exerted force (at the actuator rod of the servomechanism) and the displacement induced by this force under the assumption of a blocked input. It is demonstrated in the paper that there are two forms of the impedance function: one that favors the appearance of flutter and another that allows for flutter damping. It is interesting to note that these theoretical considerations were established in the institute’s reports some time before their introduction in the Aviation Regulation AvP.970. However, it was precisely the absence of the impedance criterion in the regulation at the appropriate time that ultimately led, by chance or not, to a disaster: the crash of a prototype due to tailplane flutter. The third topic shows how an important problem in the theory of automatic systems of the 1970s–1980s, namely the robust synthesis of the servomechanism, is formulated, applied and solved in the case of an electrohydraulic servomechanism. In general, the solution of a robust servomechanism problem consists of two distinct components: a servo-compensator, in fact an internal model of the exogenous dynamics, and a stabilizing compensator. These components are adapted in the case of an electrohydraulic servomechanism. In addition to the classical case mentioned above, a synthesis problem of an anti-windup (anti-saturation) compensator is formulated and solved. The fourth topic, and the last one presented in detail, is the synthesis of a fuzzy supervised neurocontrol (FSNC) for the position tracking of an electrohydraulic servomechanism, with experimental validation, in the laboratory, of this control law. The neurocontrol module is designed using a single-layered perceptron architecture. Neurocontrol is in principle optimal, but it is not free from saturation. To counteract saturation, a Mamdani-type fuzzy logic was developed, which takes control when the neurocontrol has saturated and hands control back once normal operation is restored, that is, once saturation is eliminated. What distinguishes this FSNC law is its simplicity and efficiency and, notwithstanding quite a few opponents in the field, the fact that it still works very well on quite complicated physical systems. Finally, a brief section reviews some recent works by the authors, in which current approaches to hydraulic servomechanisms are presented: the backstepping control synthesis technique, input delay treated with Lyapunov–Krasovskii functionals, and critical stability treated with Lyapunov–Malkin theory. Full article
(This article belongs to the Special Issue Advanced Technologies in Actuators for Control Systems)

18 pages, 5615 KB  
Article
Experimental Investigation on Icebreaking Resistance and Ice Load Distribution for Comparison of Icebreaker Bows
by Xuhao Gang, Yukui Tian, Chaoge Yu, Ying Kou and Weihang Zhao
J. Mar. Sci. Eng. 2025, 13(6), 1190; https://doi.org/10.3390/jmse13061190 - 18 Jun 2025
Viewed by 3692
Abstract
During icebreaker navigation in ice-covered waters, icebreaking resistance and dynamic ice loads acting on the bow critically determine the vessel’s icebreaking performance. Quantitative characterization of the icebreaking resistance behavior and ice load distribution on the bow is essential for elucidating ship-ice interaction mechanisms, assessing icebreaking capability, and optimizing structural design. This study conducted comparative icebreaking tests on two icebreaker bow models with distinct geometries in the small ice model basin of China Ship Scientific Research Center (CSSRC SIMB). Systematic measurements were performed to quantify icebreaking resistance, capture spatiotemporal ice load distributions, and document ice failure patterns under level ice conditions. The analysis reveals that bow geometry profoundly influences icebreaking efficiency: the stem angle governs the proportion of bending failure during vertical ice penetration, while the flare angle modulates circumferential failure modes along the hull-ice interface. Notably, the sunken keel configuration enhances ice clearance by mechanically expelling fractured ice blocks. Ice load distributions exhibit pronounced nonlinearity, with localized pressure concentrations and stochastic load center migration driven by ice fracture dynamics. Furthermore, icebreaking patterns—such as fractured ice dimensions and kinematic behavior during ship-ice interaction—are quantitatively correlated with the bow designs. These experimentally validated findings provide critical insights into ice-structure interaction physics, offering an empirical foundation for performance prediction and bow-form optimization in the preliminary design of icebreakers. Full article

25 pages, 2538 KB  
Article
Multi-Skilled Project Scheduling for High-End Equipment Development Considering Newcomer Cultivation and Duration Uncertainty
by Yaohui Liu, Ronggui Ding, Shanshan Liu and Lei Wang
Systems 2025, 13(6), 448; https://doi.org/10.3390/systems13060448 - 6 Jun 2025
Viewed by 557
Abstract
Traditional off-the-job training is becoming ineffective in high-end equipment research and development (R&D) projects due to the contradiction between rapid technological progress and the slow growth of newcomers, calling for “on-the-job mentoring” to enable synchronized advancement of project execution and newcomer cultivation. For this, we propose the multi-skilled project scheduling problem with newcomer cultivation under uncertain durations (MSPSP-NCU) and abstract it as a stochastic programming model. The model aims to minimize the expected makespan and maximize newcomers’ skill efficiency by optimizing a workforce assignment that enables experienced workers to mentor newcomers while simultaneously optimizing task scheduling. Solving the model is complicated by the inherently NP-hard nature of the project scheduling problem and the stochasticity of the durations. Therefore, we put forward an adaptive simulation–optimization approach with two key components: a simulation module capable of dynamically adjusting sample sizes based on convergence feedback and evaluating solutions with improved efficiency and stable accuracy; and a tailored non-dominated sorting genetic algorithm II (NSGA-II) with adaptive evolutionary operators that enhance search effectiveness and ensure the identification of a well-distributed Pareto front. Using data from an aerospace component R&D project, the proposed approach is validated for its performance in identifying Pareto-optimal solutions. Several personalized rules are designed by integrating workforce development strategies into the selection process, providing actionable guidelines for cultivating newcomers in technology-intensive projects. Full article
(This article belongs to the Section Systems Practice in Social Science)
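
The tailored NSGA-II mentioned in the abstract rests on non-dominated sorting of candidate schedules. The sketch below shows that sorting step for a generic two-objective minimization (for example, expected makespan versus negative skill efficiency); it is a textbook routine, not the authors' adaptive algorithm, and the toy population values are invented.

```python
import numpy as np

def non_dominated_sort(objs):
    """Fast non-dominated sorting for minimization; objs has shape (n, m).
    Returns a list of fronts, each a list of solution indices."""
    n = len(objs)
    dominates = lambda a, b: np.all(objs[a] <= objs[b]) and np.any(objs[a] < objs[b])
    dominated_by = [set() for _ in range(n)]   # solutions that each solution dominates
    counts = np.zeros(n, dtype=int)            # how many solutions dominate i
    for a in range(n):
        for b in range(n):
            if a != b and dominates(a, b):
                dominated_by[a].add(b)
                counts[b] += 1
    fronts, current = [], [i for i in range(n) if counts[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for a in current:
            for b in dominated_by[a]:
                counts[b] -= 1
                if counts[b] == 0:
                    nxt.append(b)
        current = nxt
    return fronts

# Toy population: column 0 = expected makespan, column 1 = negative skill efficiency.
pop = np.array([[10.0, -0.6], [12.0, -0.9], [9.0, -0.4], [11.0, -0.7], [10.0, -0.9]])
print(non_dominated_sort(pop))   # first front contains the Pareto-optimal schedules
```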

19 pages, 788 KB  
Article
Age of Information Minimization in Multicarrier-Based Wireless Powered Sensor Networks
by Juan Sun, Jingjie Xia, Shubin Zhang and Xinjie Yu
Entropy 2025, 27(6), 603; https://doi.org/10.3390/e27060603 - 5 Jun 2025
Viewed by 693
Abstract
This study investigates the challenge of ensuring timely information delivery in wireless powered sensor networks (WPSNs), where multiple sensors forward status-update packets to a base station (BS). Time is partitioned into multiple time blocks, with each time block dedicated to either data packet transmission or energy transfer. Our objective is to minimize the long-term average weighted sum of the Age of Information (WAoI) for physical processes monitored by sensors. We formulate this optimization problem as a multi-stage stochastic optimization program. To tackle this intricate problem, we propose a novel approach that leverages Lyapunov optimization to transform the complex original problem into a sequence of per-time-block deterministic problems. These deterministic problems are then solved using model-free deep reinforcement learning (DRL). Simulation results demonstrate that our proposed algorithm achieves significantly lower WAoI compared to the DQN, AoI-based greedy, and energy-based greedy algorithms. Furthermore, our method effectively mitigates the issue of excessive instantaneous AoI experienced by individual sensors compared to the DQN. Full article
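
An illustrative simulation of the Age-of-Information bookkeeping the abstract relies on: in each time block either one charged sensor transmits or wireless energy is transferred, and a weighted-AoI-greedy scheduler is compared with a random one. The weights, delivery probabilities, and energy-harvesting rule are assumptions; the paper's Lyapunov/DRL scheme is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 20_000
weights = np.array([0.4, 0.3, 0.2, 0.1])         # importance of each monitored process
p_success = np.array([0.9, 0.8, 0.7, 0.6])       # per-sensor delivery probability

def run(policy):
    aoi = np.ones(N)
    energy = np.zeros(N)                          # one unit of energy per transmission
    total = 0.0
    for _ in range(T):
        if energy.max() < 1:                      # no sensor charged: energy-transfer block
            energy += rng.uniform(0.3, 0.7, N)    # harvested energy (assumed model)
        else:                                     # data block: one ready sensor transmits
            ready = np.flatnonzero(energy >= 1)
            i = policy(aoi, ready)
            energy[i] -= 1
            if rng.random() < p_success[i]:       # status update delivered
                aoi[i] = 0
        aoi += 1                                  # every age grows by one block
        total += weights @ aoi
    return total / T

greedy = lambda aoi, ready: ready[np.argmax((weights * aoi)[ready])]
random_pick = lambda aoi, ready: rng.choice(ready)
print("weighted AoI  greedy:", round(run(greedy), 2), " random:", round(run(random_pick), 2))
```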

25 pages, 10227 KB  
Article
Integrating Stochastic Geological Modeling and Injection–Production Optimization in Aquifer Underground Gas Storage: A Case Study of the Qianjiang Basin
by Yifan Xu, Zhixue Sun, Wei Chen, Beibei Yu, Jiqin Liu, Zhongxin Ren, Yueying Wang, Chenyao Guo, Ruidong Wu and Yufeng Jiang
Processes 2025, 13(6), 1728; https://doi.org/10.3390/pr13061728 - 31 May 2025
Cited by 1 | Viewed by 623
Abstract
Addressing the critical challenges of sealing integrity and operational optimization in aquifer gas storage (AGS), this study focuses on a block within the Qianjiang Basin to systematically investigate geological modeling and injection–production strategies. Utilizing 3D seismic interpretation, drilling, and logging data, a stochastic geological modeling approach was employed to construct a high-resolution 3D reservoir model, elucidating the distribution of reservoir properties and trap configurations. Numerical simulations optimized the gas storage parameters, yielding an injection rate of 160 MMSCF/day (40 MMSCF/well/day) over 6-month-long hot seasons and a production rate of 175 MMSCF/day during 5-month-long cold seasons. Interval theory was innovatively applied to assess fault stability under parameter uncertainty, determining a maximum safe operating pressure of 23.5 MPa—12.3% lower than conventional deterministic results. The non-probabilistic reliability analysis of caprock integrity showed a maximum 11.1% deviation from Monte Carlo simulations, validating the method’s robustness. These findings establish a quantitative framework for site selection, sealing system evaluation, and operational parameter design in AGS projects, offering critical insights to ensure safe and efficient gas storage operations. This work bridges theoretical modeling with practical engineering applications, providing actionable guidelines for large-scale AGS deployment. Full article

15 pages, 689 KB  
Article
Approximation of General Functions Using Stochastic Computing
by Adriel Kind
Electronics 2025, 14(9), 1845; https://doi.org/10.3390/electronics14091845 - 30 Apr 2025
Viewed by 802
Abstract
Stochastic computing (SC) is a computational paradigm that represents numbers using Bernoulli processes. It is attractive due to its low power, low hardware complexity, and excellent noise tolerance properties. An SC algorithm that approximates arbitrary functions on [−1, 1] using a partial sum of Chebyshev polynomials is presented. The new method displays several advantages over existing SC methods for function approximation, including the accurate modelling of complicated functions with relatively few terms, the fact that an output sample is produced for every input sample, and the extension of the domain and range to negative numbers. As a building block of the complete algorithm, an efficient method of computing the dot product of two vectors is presented, which has a broad utility beyond the application presented here. Full article
(This article belongs to the Special Issue Stochastic Computing and Its Application)
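
A hedged sketch of the bipolar stochastic-computing primitives behind the abstract: values in [−1, 1] are encoded as Bernoulli bit streams, XNOR implements multiplication, and a uniformly selected multiplexer yields a scaled dot product, the building block the abstract highlights. The Chebyshev fit of sin(x) is a stand-in target and the stream length is an arbitrary choice; this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def bipolar_stream(x, n_bits):
    """Encode x in [-1, 1] as a Bernoulli bit stream with P(1) = (x + 1) / 2."""
    return rng.random(n_bits) < (x + 1) / 2

def decode(stream):
    """Recover the bipolar value represented by a bit stream."""
    return 2 * stream.mean() - 1

def sc_dot(xs, ys, n_bits=200_000):
    """Stochastic dot product: XNOR multiplies element streams and a uniformly
    selected multiplexer averages them, estimating (1/n) * sum(x_i * y_i)."""
    n = len(xs)
    prods = np.stack([~(bipolar_stream(x, n_bits) ^ bipolar_stream(y, n_bits))
                      for x, y in zip(xs, ys)])        # XNOR = bipolar multiply
    select = rng.integers(0, n, n_bits)                # MUX select line
    mux_out = prods[select, np.arange(n_bits)]
    return n * decode(mux_out)                         # undo the 1/n MUX scaling

# Stand-in use: evaluate a degree-5 Chebyshev partial sum of sin(x) at x = 0.4.
grid = np.linspace(-1, 1, 64)
coeffs = np.polynomial.chebyshev.chebfit(grid, np.sin(grid), deg=5)
basis = np.polynomial.chebyshev.chebvander(np.array([0.4]), 5)[0]   # T_0..T_5 at x
print("stochastic:", sc_dot(coeffs, basis), "exact:", float(coeffs @ basis))
```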

23 pages, 3481 KB  
Article
Evaluating QoS in Dynamic Virtual Machine Migration: A Multi-Class Queuing Model for Edge-Cloud Systems
by Anna Kushchazli, Kseniia Leonteva, Irina Kochetkova and Abdukodir Khakimov
J. Sens. Actuator Netw. 2025, 14(3), 47; https://doi.org/10.3390/jsan14030047 - 25 Apr 2025
Viewed by 1284
Abstract
The efficient migration of virtual machines (VMs) is critical for optimizing resource management, ensuring service continuity, and enhancing resiliency in cloud and edge computing environments, particularly as 6G networks demand higher reliability and lower latency. This study addresses the challenges of dynamically balancing server loads while minimizing downtime and migration costs under stochastic task arrivals and variable processing times. We propose a queuing theory-based model employing continuous-time Markov chains (CTMCs) to capture the interplay between VM migration decisions, server resource constraints, and task processing dynamics. The model incorporates two migration policies—one minimizing projected post-migration server utilization and another prioritizing current utilization—to evaluate their impact on system performance. The numerical results show that the blocking probability for the first VM is 2.1% lower under Policy 1 than under Policy 2, and the corresponding reduction for the second VM is 4.7%. The average server resource utilization increased by up to 11.96%. The framework’s adaptability to diverse server–VM configurations and stochastic demands demonstrates its applicability to real-world cloud systems. These results highlight predictive resource allocation’s role in dynamic environments. Furthermore, the study lays the groundwork for extending this framework to multi-access edge computing (MEC) environments, which are integral to 6G networks. Full article
(This article belongs to the Section Communications and Networking)
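
A generic illustration of the CTMC machinery the abstract builds on: a small birth-death generator matrix models the occupied VM slots of one server, the stationary distribution is obtained from pi Q = 0, and the blocking probability is read off as the probability of the full state. The capacity and rates are placeholders, not the paper's multi-class, two-policy migration model.

```python
import numpy as np

capacity = 5        # VM slots on the server (assumed)
arrival = 2.0       # VM placement requests per unit time (assumed)
service = 1.0       # per-VM completion rate (assumed)

# Generator matrix Q of a birth-death CTMC on states 0..capacity occupied slots.
states = capacity + 1
Q = np.zeros((states, states))
for k in range(states):
    if k < capacity:
        Q[k, k + 1] = arrival          # a new VM is admitted
    if k > 0:
        Q[k, k - 1] = k * service      # one of the k running VMs finishes
    Q[k, k] = -Q[k].sum()

# Stationary distribution: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(states)])
b = np.zeros(states + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", np.round(pi, 4))
print("blocking probability (all slots busy):", round(pi[-1], 4))
```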

21 pages, 5421 KB  
Article
Probabilistic Characterization and Machine Learning-Based Modeling of Conducted Emissions of Programmable Microcontrollers
by Aishwarya Gavai, Dipanjan Gope, Vivek Dhoot and Jan Hansen
Electronics 2025, 14(8), 1511; https://doi.org/10.3390/electronics14081511 - 9 Apr 2025
Cited by 2 | Viewed by 794
Abstract
To evaluate the system-level electromagnetic emissions of an automobile, in a simulation environment, individual sources of electromagnetic interference (EMI) must be modeled. Modeling the emission behavior of electronic controller units (ECUs) in automobiles requires the characterization of the main microcontroller as an emission source. The conducted emissions (CEs) of the microcontroller depend on the programs running inside the core and the functional blocks utilized for those programs. We propose a measurement-based stochastic model to characterize such program dependency on the conducted emissions of a microcontroller. From board-level CE measurements and the functional blocks utilized for the programs, we obtain the upper and lower bounds of the emissions within which the microcontroller’s conducted emissions are expected to lie with a 95% confidence interval. Further, we propose a method to estimate the emission peaks of the microcontroller when a new program runs in its core with a ±4 dB average error. The functional blocks used for this analysis involve the RAM, timers, and GPIOs. The method works satisfactorily at frequencies up to 500 MHz and is tested on an STM32 general-purpose microcontroller board. Full article

25 pages, 4732 KB  
Article
Analysis of Core–Periphery Structure Based on Clustering Aggregation in the NFT Transfer Network
by Zijuan Chen, Jianyong Yu, Yulong Wang and Jinfang Xie
Entropy 2025, 27(4), 342; https://doi.org/10.3390/e27040342 - 26 Mar 2025
Viewed by 797
Abstract
With the rise of blockchain technology and the Ethereum platform, non-fungible tokens (NFTs) have emerged as a new class of digital assets. The NFT transfer network exhibits core–periphery structures derived from different partitioning methods, leading to local discrepancies and global diversity. We propose a core–periphery structure characterization method based on Bayesian and stochastic block models (SBMs). This method incorporates prior knowledge to improve the fit of core–periphery structures obtained from various partitioning methods. Additionally, we introduce a locally weighted core–periphery structure aggregation (LWCSA) scheme, which determines local aggregation weights using the minimum description length (MDL) principle. This approach results in a more accurate and representative core–periphery structure. The experimental results indicate that core nodes in the NFT transfer network constitute approximately 2.3–5% of all nodes. Compared to baseline methods, our approach improves the normalized mutual information (NMI) index by 6–10%, demonstrating enhanced structural representation. This study provides a theoretical foundation for further analysis of the NFT market. Full article
(This article belongs to the Special Issue Entropy, Econophysics, and Complexity)
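
A toy baseline for the SBM-based core–periphery partitioning the abstract starts from: nodes receive core or periphery labels, and labels are greedily flipped to increase the two-block SBM log-likelihood on a synthetic network with a planted dense core. This is a plain maximum-likelihood sketch, not the Bayesian, MDL-weighted LWCSA method proposed in the paper; the network sizes and block probabilities are invented.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

def sbm_loglik(A, z):
    """Log-likelihood of an undirected two-block Bernoulli SBM with labels z in {0, 1}."""
    ll = 0.0
    for r in (0, 1):
        for s in (0, 1):
            if s < r:
                continue                         # count each block pair once
            rows, cols = np.flatnonzero(z == r), np.flatnonzero(z == s)
            sub = A[np.ix_(rows, cols)]
            if r == s:
                edges = sub.sum() / 2
                pairs = len(rows) * (len(rows) - 1) / 2
            else:
                edges = sub.sum()
                pairs = len(rows) * len(cols)
            if pairs == 0:
                continue
            p = np.clip(edges / pairs, 1e-9, 1 - 1e-9)
            ll += edges * np.log(p) + (pairs - edges) * np.log(1 - p)
    return ll

# Synthetic network with a planted dense core and a sparse periphery.
G = nx.stochastic_block_model([20, 80], [[0.6, 0.15], [0.15, 0.02]], seed=0)
A = nx.to_numpy_array(G)

z = rng.integers(0, 2, A.shape[0])               # random initial core/periphery labels
improved = True
while improved:                                  # greedy single-node label flips
    improved = False
    for i in range(A.shape[0]):
        base = sbm_loglik(A, z)
        z[i] ^= 1
        if sbm_loglik(A, z) > base:
            improved = True
        else:
            z[i] ^= 1                            # revert the flip

hub = np.argmax(A.sum(axis=0))                   # the highest-degree node sits in the core
print("recovered core size:", int((z == z[hub]).sum()), "of", A.shape[0], "nodes")
```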
