Search Results (1,265)

Search Parameters:
Keywords = stochastic processing times

25 pages, 4961 KB  
Article
Automation and Genetic Algorithm Optimization for Seismic Modeling and Analysis of Tall RC Buildings
by Piero A. Cabrera, Gianella M. Medina and Rick M. Delgadillo
Buildings 2025, 15(19), 3618; https://doi.org/10.3390/buildings15193618 - 9 Oct 2025
Abstract
This article presents an innovative approach to optimizing the seismic modeling and analysis of high-rise buildings by automating the process with Python 3.13 and the ETABS 22.1.0 API. The process begins with the collection of information on the base building, a structure of seventeen regular levels, which includes data on structural elements, material properties, geometric configuration, and seismic and gravitational loads. These data are organized in an Excel file for further processing. From this information, a Python script is developed that automates the structural modeling in ETABS through its API. This script defines the sections, materials, boundary conditions, and loads, and models the elements according to their coordinates. The resulting base model is used as a starting point to generate an optimal solution using a genetic algorithm. The genetic algorithm adjusts column and beam sections using an approach that includes crossover and controlled mutation operations. Each solution is evaluated by the maximum displacement of the structure, with fitness calculated as the inverse of this displacement, favoring solutions with less deformation. The process is repeated across generations, selecting and crossing the best solutions. Finally, the model that produces the smallest displacement is saved as the optimal solution. Once the optimal solution has been obtained, a second Python script is implemented to perform static and dynamic seismic analysis. The key results, such as displacements, drifts, internal forces, and base shear, are processed and verified in accordance with the Peruvian Technical Standard E.030. The automated model with the API shows a significant improvement in accuracy and efficiency compared to traditional methods, achieving R² = 0.995 in the static analysis, indicating an almost perfect fit, and RMSE = 1.93261 × 10⁻⁵, reflecting a near-zero error.
In the dynamic drift analysis, the automated model reaches R² = 0.9385 and RMSE = 5.21742 × 10⁻⁵, demonstrating its high precision. As for execution time, the automated model completed the process in 13.2 min, a 99.5% reduction compared with the traditional method, which takes 3 h. On the other hand, the genetic algorithm had a run time of 191 min due to its stochastic nature and iterative process. The performance of the genetic algorithm shows that although the improvement between Generation 1 and Generation 2 is significant, it stabilizes in the following generations, with a slight decrease in Generation 5, suggesting that the algorithm has reached a point of convergence. Full article
(This article belongs to the Special Issue Building Safety Assessment and Structural Analysis)
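The optimization loop this abstract describes (fitness as the inverse of the maximum displacement, single-point crossover, controlled mutation, elitist selection) can be sketched as below. The `max_displacement` surrogate, the four-gene section encoding, and all numeric parameters are illustrative stand-ins for the authors' ETABS-API evaluation, not their actual code.

```python
import random

def max_displacement(sections):
    # Hypothetical surrogate: stiffer (larger) member sections give a smaller
    # peak displacement. In the paper this value comes from an ETABS API run.
    return 100.0 / sum(sections)

def fitness(sections):
    # The abstract's criterion: fitness is the inverse of the maximum displacement.
    return 1.0 / max_displacement(sections)

def crossover(a, b):
    cut = random.randrange(1, len(a))          # single-point crossover
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.1, step=5):
    # Controlled mutation: nudge a few section sizes by a fixed step,
    # clamped to a minimum feasible size.
    return [max(10, g + random.choice([-step, step])) if random.random() < rate else g
            for g in ind]

def evolve(pop, generations=20):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]          # elitism: keep the best half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)
population = [[random.randint(30, 80) for _ in range(4)] for _ in range(10)]
best = evolve(population)
print(best, max_displacement(best))
```

Because the best half of each generation survives unchanged, the smallest displacement found can never worsen across generations, which matches the convergence behavior the abstract reports.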

29 pages, 3821 KB  
Article
Mathematical Framework for Digital Risk Twins in Safety-Critical Systems
by Igor Kabashkin
Mathematics 2025, 13(19), 3222; https://doi.org/10.3390/math13193222 - 8 Oct 2025
Abstract
This paper introduces a formal mathematical framework for Digital Risk Twins (DRTs) as an extension of traditional digital twin (DT) architectures, explicitly tailored to the needs of safety-critical systems. While conventional DTs enable real-time monitoring and simulation of physical assets, they often lack structured mechanisms to model stochastic failure processes; evaluate dynamic risk; or support resilient, risk-aware decision-making. The proposed DRT framework addresses these limitations by embedding probabilistic hazard modeling, reliability theory, and coherent risk measures into a modular and mathematically interpretable structure. The DT to DRT transformation is formalized as a composition of operators that project system trajectories onto risk-relevant features, compute failure intensities, and evaluate risk metrics under uncertainty. The framework supports layered integration of simulation, feature extraction, hazard dynamics, and decision-oriented evaluation, providing traceability, scalability, and explainability. Its utility is demonstrated through a case study involving an aircraft brake system, showcasing early warning detection, inspection schedule optimization, and visual risk interpretation. The results confirm that the DRT enables modular, explainable, and domain-agnostic integration of reliability logic into digital twin systems, enhancing their value in safety-critical applications. Full article

19 pages, 1035 KB  
Article
Spectral Bounds and Exit Times for a Stochastic Model of Corruption
by José Villa-Morales
Math. Comput. Appl. 2025, 30(5), 111; https://doi.org/10.3390/mca30050111 - 8 Oct 2025
Abstract
We study a stochastic differential model for the dynamics of institutional corruption, extending a deterministic three-variable system—corruption perception, proportion of sanctioned acts, and policy laxity—by incorporating Gaussian perturbations into key parameters. We prove global existence and uniqueness of solutions in the physically relevant domain, and we analyze the linearization around the asymptotically stable equilibrium of the deterministic system. Explicit mean square bounds for the linearized process are derived in terms of the spectral properties of a symmetric matrix, providing insight into the temporal validity of the linear approximation. To investigate global behavior, we relate the first exit time from the domain of interest to backward Kolmogorov equations and numerically solve the associated elliptic and parabolic PDEs with FreeFEM, obtaining estimates of expectations and survival probabilities. An application to the case of Mexico highlights nontrivial effects: while the spectral structure governs local stability, institutional volatility can non-monotonically accelerate global exit, showing that highly reactive interventions without effective sanctions increase uncertainty. Policy implications and possible extensions are discussed. Full article
(This article belongs to the Section Social Sciences)
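As a complement to the PDE route described in this abstract (backward Kolmogorov equations solved with FreeFEM), mean first-exit times can be cross-checked by direct Monte Carlo simulation of the SDE. The sketch below uses a generic one-dimensional diffusion with the Euler–Maruyama scheme and a toy Brownian-motion test case; it is not the paper's three-variable corruption model.

```python
import math, random

def mean_exit_time(x0, mu, sigma, low, high, dt=1e-3, n_paths=2000, t_max=50.0):
    # Monte Carlo estimate of the mean first-exit time from (low, high) for
    # dX = mu(X) dt + sigma(X) dW, simulated with the Euler-Maruyama scheme.
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while low < x < high and t < t_max:
            x += mu(x) * dt + sigma(x) * math.sqrt(dt) * random.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths

random.seed(1)
# Sanity check on standard Brownian motion started at 0 on (-1, 1):
# the exact mean exit time is 1 - x0**2 = 1.
est = mean_exit_time(0.0, mu=lambda x: 0.0, sigma=lambda x: 1.0, low=-1.0, high=1.0)
print(est)
```

For the full model, `mu` and `sigma` would become the drift and diffusion of each coordinate and the interval an exit region in three dimensions; the Monte Carlo answer should then agree with the FreeFEM solution of the elliptic Kolmogorov problem.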

28 pages, 1332 KB  
Article
A Scalable Two-Level Deep Reinforcement Learning Framework for Joint WIP Control and Job Sequencing in Flow Shops
by Maria Grazia Marchesano, Guido Guizzi, Valentina Popolo and Anastasiia Rozhok
Appl. Sci. 2025, 15(19), 10705; https://doi.org/10.3390/app151910705 - 3 Oct 2025
Abstract
Effective production control requires aligning strategic planning with real-time execution under dynamic and stochastic conditions. This study proposes a scalable dual-agent Deep Reinforcement Learning (DRL) framework for the joint optimisation of Work-In-Process (WIP) control and job sequencing in flow-shop environments. A strategic DQN agent regulates global WIP to meet throughput targets, while a tactical DQN agent adaptively selects dispatching rules at the machine level on an event-driven basis. Parameter sharing in the tactical agent ensures inherent scalability, overcoming the combinatorial complexity of multi-machine scheduling. The agents coordinate indirectly via a shared simulation environment, learning to balance global stability with local responsiveness. The framework is validated through a discrete-event simulation integrating agent-based modelling, demonstrating consistent performance across multiple production scales (5–15 machines) and process time variabilities. Results show that the approach matches or surpasses analytical benchmarks and outperforms static rule-based strategies, highlighting its robustness, adaptability, and potential as a foundation for future Hierarchical Reinforcement Learning applications in manufacturing. Full article
(This article belongs to the Special Issue Intelligent Manufacturing and Production)

33 pages, 752 KB  
Article
Flux and First-Passage Time Distributions in One-Dimensional Integrated Stochastic Processes with Arbitrary Temporal Correlation and Drift
by Holger Nobach and Stephan Eule
Mathematics 2025, 13(19), 3163; https://doi.org/10.3390/math13193163 - 2 Oct 2025
Abstract
The arrival of tracers at boundaries with defined distances from the origin of their motion in stochastically fluctuating advection processes is investigated. The advection model is a stationary one-dimensional integrated stochastic process with an arbitrary a priori known correlation and with possible mean drift. The current (direction-sensitive), the total flux (direction-insensitive) of tracers through a non-absorbing boundary, and the first-passage times of the tracers at an absorbing boundary are derived depending on the correlation function of the carrying flow velocity. While the general derivations are universal with respect to the distribution function of the advection’s increments, the current and the total flux are explicitly derived for a Gaussian distribution. The first-passage time is derived implicitly through an integral that is solved numerically in the present study. No approximations or restrictions to special cases of the advection process are used. One application is one-dimensional Gaussian turbulence, where the one-dimensional random velocity carries tracer particles through space. Finally, subdiffusive or superdiffusive behavior can temporarily be reached by such a stochastic process with an adequately designed correlation function. Full article

18 pages, 382 KB  
Article
Self-Organized Criticality and Quantum Coherence in Tubulin Networks Under the Orch-OR Theory
by José Luis Díaz Palencia
AppliedMath 2025, 5(4), 132; https://doi.org/10.3390/appliedmath5040132 - 2 Oct 2025
Abstract
We present a theoretical model to explain how tubulin dimers in neuronal microtubules might achieve collective quantum coherence, resulting in wavefunction collapses that manifest as avalanches within a self-organized criticality (SOC) framework. Using the Orchestrated Objective Reduction (Orch-OR) theory as inspiration, we propose that microtubule subunits (tubulins) become transiently entangled via dipole–dipole couplings, forming coherent domains susceptible to sudden self-collapse. We model a network of tubulin-like nodes with scale-free (Barabási–Albert) connectivity, each evolving via local coupling and stochastic noise. Near criticality, the system exhibits power-law avalanches—abrupt collective state changes that we identify with instantaneous quantum wavefunction collapse events. Using the Diósi–Penrose gravitational self-energy formula, we estimate objective reduction times T_OR = ħ/E_G for these events in the 10–200 ms range, consistent with the Orch-OR conscious moment timescale. Our results demonstrate that quantum coherence at the tubulin level can be amplified by scale-free critical dynamics, providing a possible bridge between sub-neuronal quantum processes and large-scale neural activity. Full article
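The reduction-time estimate above is a one-line computation from the Diósi–Penrose relation T_OR = ħ/E_G. A minimal numeric check, with illustrative gravitational self-energies chosen to span the quoted window (they are assumptions, not values from the paper):

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s

def orch_or_time(e_g):
    # Diosi-Penrose objective reduction time: T_OR = hbar / E_G, in seconds.
    return hbar / e_g

# Illustrative self-energies spanning the quoted 10-200 ms window:
for e_g in (1.0e-32, 5.3e-34):                   # joules (assumed values)
    print(f"E_G = {e_g:.1e} J  ->  T_OR = {orch_or_time(e_g) * 1e3:.1f} ms")
```

Larger coherent domains have larger gravitational self-energy E_G and therefore collapse sooner, which is why the avalanche size distribution maps onto a distribution of reduction times.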

16 pages, 548 KB  
Article
Zonotope-Based State Estimation for Boost Converter System with Markov Jump Process
by Chaoxu Guan, You Li, Zhenyu Wang and Weizhong Chen
Micromachines 2025, 16(10), 1099; https://doi.org/10.3390/mi16101099 - 27 Sep 2025
Abstract
This article investigates zonotope-based state estimation for a boost converter system with a Markov jump process. DC-DC boost converters are pivotal in modern power electronics, enabling renewable energy integration, electric vehicle charging, and microgrid operations by elevating low input voltages from sources like photovoltaics to stable high outputs. However, their nonlinear dynamics and sensitivity to uncertainties and disturbances degrade control precision, driving research into robust state estimation. To address these challenges, the boost converter is modeled as a Markov jump system to characterize stochastic switching, with time delays, disturbances, and noises integrated into a generalized discrete-time model. An adaptive event-triggered mechanism is adopted to govern data transmission and conserve communication resources. A zonotopic set-membership estimation design is proposed, which involves designing an observer for the augmented system to ensure H∞ performance and developing an algorithm to construct zonotopes that enclose all system states. Finally, numerical simulations are performed to verify the effectiveness of the proposed approach. Full article
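The zonotopic enclosure step underlying such designs has a simple closed form: if the state lies in a zonotope with centre c and generators G, then under x⁺ = A x + w, with w bounded by generators W, the image is again a zonotope with centre A c and generators [A G, W]. A dependency-free sketch with illustrative 2-state matrices (not the converter model from the paper):

```python
def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def propagate(c, G, A, W):
    # Image of the zonotope <c, G> under x+ = A x + w, with w in <0, W>:
    # new centre A c; new generators are A g for each g in G, plus W.
    return mat_vec(A, c), [mat_vec(A, g) for g in G] + W

def interval_hull(c, G):
    # Tight axis-aligned box: centre +/- sum of |generator| components.
    r = [sum(abs(g[i]) for g in G) for i in range(len(c))]
    return ([ci - ri for ci, ri in zip(c, r)],
            [ci + ri for ci, ri in zip(c, r)])

# Illustrative stable 2-state system (not the boost-converter matrices).
A = [[0.9, 0.1], [0.0, 0.8]]
c, G = [1.0, 0.0], [[0.1, 0.0], [0.0, 0.1]]    # initial box of radius 0.1
W = [[0.05, 0.0], [0.0, 0.05]]                 # bounded disturbance
for _ in range(3):
    c, G = propagate(c, G, A, W)
lo, hi = interval_hull(c, G)
print(c, lo, hi)
```

In practice the generator count grows by the disturbance dimension at every step (here 2 → 8 after three steps), so set-membership observers periodically apply an order-reduction step; that refinement is omitted from this sketch.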

52 pages, 6335 KB  
Article
On Sampling-Times-Independent Identification of Relaxation Time and Frequency Spectra Models of Viscoelastic Materials Using Stress Relaxation Experiment Data
by Anna Stankiewicz, Sławomir Juściński and Marzena Błażewicz-Woźniak
Materials 2025, 18(18), 4403; https://doi.org/10.3390/ma18184403 - 21 Sep 2025
Abstract
Viscoelastic relaxation time and frequency spectra are useful for describing, analyzing, comparing, and improving the mechanical properties of materials. The spectra are typically obtained using stress relaxation or oscillatory shear measurements. Over the last 80 years, dozens of mathematical models and algorithms have been proposed to identify relaxation spectra models using different analytical and numerical tools. Some models and identification algorithms are intended for specific materials, while others are general and can be applied to an arbitrary rheological material. The identified relaxation spectrum model always depends on the identification method applied and on the specific measurements used in the identification process. The stress relaxation experiment data consist of the sampling times used in the experiment and the noise-corrupted relaxation modulus measurements. The aim of this paper is to build a model of the spectrum that asymptotically does not depend on the sampling times used in the experiment as the number of measurements tends to infinity. Broad model classes, determined by a finite series of various basis functions, are assumed for the relaxation spectra approximation. Both orthogonal series expansions based on the Legendre, Laguerre, and Chebyshev functions and non-orthogonal basis functions, like power exponential and modified Bessel functions of the second kind, are considered. It is proved that, even when the true spectrum description is entirely unknown, approximate sampling-times-independent optimal spectra models can be determined using modulus measurements for appropriately randomly selected sampling times. The recovered spectra models are strongly consistent estimates of the desired models corresponding to the relaxation modulus models, being optimal for the deterministic integral weighted square error.
A complete identification algorithm leading to the relaxation spectra models is presented; it requires solving a sequence of weighted least-squares relaxation modulus approximation problems and a random selection of the sampling times. The problems of relaxation spectra identification are ill-posed; solution stability is ensured by applying Tikhonov regularization. Stochastic convergence analysis is conducted, and convergence at an exponential rate is demonstrated. Simulation studies are presented for the Kohlrausch–Williams–Watts spectrum with short relaxation times, the uni- and double-mode Gauss-like spectra with intermediate relaxation times, and the Baumgaertel–Schausberger–Winter spectrum with long relaxation times. Models using spectrum expansions on different basis series are applied. These studies have shown that sampling-time randomization provides a sequence of optimal spectra models that asymptotically converge to sampling-times-independent models. The noise robustness of the identified model was shown both analytically and in numerical studies. Full article

24 pages, 1881 KB  
Article
Multiscale Stochastic Models for Bitcoin: Fractional Brownian Motion and Duration-Based Approaches
by Arthur Rodrigues Pereira de Carvalho, Felipe Quintino, Helton Saulo, Luan C. S. M. Ozelim, Tiago A. da Fonseca and Pushpa N. Rathie
FinTech 2025, 4(3), 51; https://doi.org/10.3390/fintech4030051 - 19 Sep 2025
Abstract
This study introduces and evaluates stochastic models to describe Bitcoin price dynamics at different time scales, using daily data from January 2019 to December 2024 and intraday data from 20 January 2025. In the daily analysis, models based on fractional Brownian motion (fBm) are introduced to capture long memory, paired with both constant-volatility (CONST) and stochastic-volatility specifications via the Cox–Ingersoll–Ross (CIR) process. The novel family of models is based on Generalized Ornstein–Uhlenbeck processes with a fluctuating exponential trend (GOU-FE), which are modified to account for multiplicative fBm noise. Geometric fractional Brownian motion (GFBM) processes with either constant or stochastic volatilities are employed as benchmarks for comparative analysis, bringing the total number of evaluated models to four: the GFBM-CONST, GFBM-CIR, GOUFE-CONST, and GOUFE-CIR models. Estimation by numerical optimization and evaluation through error metrics, information criteria (AIC, BIC, and EDC), and 95% Expected Shortfall (ES95) indicated a better fit for the stochastic-volatility models (GOUFE-CIR and GFBM-CIR) and the lowest tail risk for GOUFE-CIR, although residual analysis revealed heteroscedasticity and non-normality. For intraday data, Exponential, Weibull, and Generalized Gamma Autoregressive Conditional Duration (ACD) models, with adjustments for intraday patterns, were applied to model the time between transactions. Results showed that the ACD models effectively capture duration clustering, with the Generalized Gamma version exhibiting a superior fit according to the Cox–Snell residual-based analysis and other metrics (AIC, BIC, and mean-squared error). Overall, this work advances the modeling of Bitcoin prices by rigorously applying and comparing stochastic frameworks across temporal scales, highlighting the critical roles of long memory, stochastic volatility, and intraday dynamics in understanding the behavior of this digital asset. Full article
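The long-memory driver behind these models, fractional Brownian motion, can be sampled exactly from the Cholesky factor of its covariance Cov(B_H(s), B_H(t)) = ½(s^{2H} + t^{2H} − |t − s|^{2H}). A small dependency-free sketch (O(n³), fine for short grids; the grid size and Hurst index are illustrative, not fitted values from the paper):

```python
import math, random

def fbm_sample(n, H, T=1.0, seed=0):
    # Exact fractional Brownian motion on n grid points via the Cholesky
    # factor of the fBm covariance; O(n^3), intended for small n.
    rng = random.Random(seed)
    t = [T * (i + 1) / n for i in range(n)]
    K = [[0.5 * (t[i] ** (2 * H) + t[j] ** (2 * H) - abs(t[i] - t[j]) ** (2 * H))
          for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                 # Cholesky decomposition K = L L^T
        for j in range(i + 1):
            s = K[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

path = fbm_sample(16, H=0.7)   # H > 0.5: persistent, long-memory regime
print(path)
```

For daily-resolution series over several years, production code would switch to an O(n log n) circulant-embedding (Davies–Harte) generator; the Cholesky route shown here is the simplest exact method.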

22 pages, 373 KB  
Article
Translation Theorem for Conditional Function Space Integrals and Applications
by Sang Kil Shim and Jae Gil Choi
Mathematics 2025, 13(18), 3022; https://doi.org/10.3390/math13183022 - 18 Sep 2025
Abstract
The conditional Feynman integral provides solutions to integral equations equivalent to heat and Schrödinger equations. The Cameron–Martin translation theorem illustrates how the Wiener measure changes under translation via Cameron–Martin space elements in abstract Wiener space. Translation theorems for analytic Feynman integrals have been established in many research articles. This study aims to present a translation theorem for the conditional function space integral of functionals on the generalized Wiener space C_{a,b}[0,T] induced by a generalized Brownian motion process determined by continuous functions a(t) and b(t). As an application, we establish a translation theorem for the conditional generalized analytic Feynman integral of functionals on C_{a,b}[0,T]. We then provide explicit examples of functionals on C_{a,b}[0,T] to which the conditional translation theorem on C_{a,b}[0,T] can be applied. Our formulas and results are more complicated than the corresponding formulas and results in previous research on the Wiener space C_0[0,T] because the generalized Brownian motion process used in this study is neither stationary in time nor centered. In this study, the stochastic process used is subject to a drift function. Full article
(This article belongs to the Special Issue Advanced Research in Functional Analysis and Operator Theory)

19 pages, 1124 KB  
Article
A Comparative Study on COVID-19 Dynamics: Mathematical Modeling, Predictions, and Resource Allocation Strategies in Romania, Italy, and Switzerland
by Cristina-Maria Stăncioi, Iulia Adina Ștefan, Violeta Briciu, Vlad Mureșan, Iulia Clitan, Mihail Abrudean, Mihaela-Ligia Ungureșan, Radu Miron, Ecaterina Stativă, Roxana Carmen Cordoș, Adriana Topan and Ioana Nanu
Bioengineering 2025, 12(9), 991; https://doi.org/10.3390/bioengineering12090991 - 18 Sep 2025
Abstract
This research provides valuable insights into the application of mathematical modeling to real-world scenarios, as exemplified by the COVID-19 pandemic. After data collection, the preparation stage included exploratory analysis, standardization and normalization, computation, and validation. A mathematical model initially developed for COVID-19 dynamics in Romania was subsequently applied to data from Italy and Switzerland during the same time interval. The model is structured as a multiple-input single-output (MISO) system, where the inputs underwent a neural network-based training stage to address inconsistencies in the acquired data. In parallel, an ARMAX model was employed to capture the stochastic nature of the epidemic process. Results demonstrate that the Romanian-based model generalized effectively across the three countries, achieving a strong predictive accuracy (forecast accuracy > 98.59%). Importantly, the model maintained robust performance despite significant cross-country differences in testing strategies, policy measures, timing of initial cases, and imported infections. This work contributes a novel perspective by showing that a unified data-driven modeling framework can be transferable across heterogeneous contexts. More broadly, it underscores the potential of integrating mathematical modeling with predictive analytics to support evidence-based decision-making and strengthen preparedness for future global health crises. Full article
(This article belongs to the Special Issue Data Modeling and Algorithms in Biomedical Applications)

26 pages, 1253 KB  
Article
Integrated Production, EWMA Scheme, and Maintenance Policy for Imperfect Manufacturing Systems of Bolt-On Vibroseis Equipment Considering Quality and Inventory Constraints
by Nuan Xia, Zilin Lu, Yuting Zhang and Jundong Fu
Axioms 2025, 14(9), 703; https://doi.org/10.3390/axioms14090703 - 17 Sep 2025
Abstract
In recent years, the synergistic effect among production, maintenance, and quality control within manufacturing systems has garnered increasing attention in academic and industrial circles. In high-quality production settings, the real-time identification of minute process deviations is critical for ensuring product quality. Traditional approaches, such as routine quality inspections or Shewhart control charts, exhibit limitations in sensitivity and response speed, rendering them inadequate for the stringent requirements of high-precision quality control. To address this issue, this paper presents an integrated framework that combines stochastic process modeling, dynamic optimization, and quality monitoring. For quality monitoring, an exponentially weighted moving average (EWMA) control chart is employed to monitor the production process. The statistic derived from this chart forms a Markov process, enabling more sensitive detection of minor shifts in the process mean. Regarding maintenance strategies, a state-dependent preventive maintenance (PM) and corrective maintenance (CM) mechanism is introduced. Specifically, preventive maintenance is initiated when the system is in a statistically controlled state and the inventory level falls below a predefined threshold. Conversely, corrective maintenance is triggered when the EWMA control chart generates an out-of-control (OOC) signal. To allow production to continue during maintenance activities, an inventory buffer mechanism is incorporated into the model. Building upon this foundation, a joint optimization model is formulated, with system states, including equipment degradation state, inventory level, and quality state, serving as decision variables and the minimization of the expected total cost (ETC) per unit time as the objective. This problem is formalized as a constrained dynamic optimization problem and is solved using a genetic algorithm (GA).
Finally, through a case study of the production process of vibroseis equipment, the superiority of the proposed model in terms of cost savings and system performance enhancement is empirically verified. Full article
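The EWMA statistic at the heart of the monitoring layer is z_t = λ x_t + (1 − λ) z_{t−1}, signalling out-of-control when |z_t − μ₀| exceeds L·σ·sqrt(λ/(2 − λ)·(1 − (1 − λ)^{2t})). A self-contained sketch with textbook parameters (λ = 0.2, L = 3) and a synthetic, noise-free mean shift; none of the numbers come from the paper:

```python
import math

def ewma_first_signal(samples, lam=0.2, mu0=0.0, sigma=1.0, L=3.0):
    # EWMA recursion z_t = lam*x_t + (1-lam)*z_{t-1}; returns the index of the
    # first out-of-control (OOC) signal, or None if the chart stays in control.
    z = mu0
    for t, x in enumerate(samples, start=1):
        z = lam * x + (1 - lam) * z
        limit = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        if abs(z - mu0) > limit:
            return t
    return None

# 20 on-target samples, then a sustained 1.5-sigma mean shift (deterministic
# for reproducibility): the EWMA accumulates the shift and signals quickly.
data = [0.0] * 20 + [1.5] * 60
print(ewma_first_signal(data))   # signals 5 samples after the shift
```

A shift this small would often slip past a Shewhart chart's 3σ limits for many samples, which is the sensitivity argument the abstract makes; in the paper's model, the OOC signal is what triggers corrective maintenance.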

42 pages, 8013 KB  
Article
Adaptive Neural Network System for Detecting Unauthorised Intrusions Based on Real-Time Traffic Analysis
by Serhii Vladov, Victoria Vysotska, Vasyl Lytvyn, Anatolii Komziuk, Oleksandr Prokudin and Andrii Ostapiuk
Computation 2025, 13(9), 221; https://doi.org/10.3390/computation13090221 - 11 Sep 2025
Abstract
This article addresses the operational detection of anomalies in network traffic for cyber police units by developing an adaptive neural network platform that combines a variational autoencoder with continuous stochastic dynamics of the latent space (integrated according to the Euler–Maruyama scheme), a continuous–discrete Kalman filter for latent state estimation, and Hotelling's T² statistical criterion for deviation detection. The paper implements an online ("on the fly") learning mechanism via an Euler-type Euclidean gradient step. Verification includes variational autoencoder training and validation, ROC/PR and confusion matrix analysis, latent representation projections (PCA), and latency measurements during streaming processing. The model's stable convergence and precise anomaly detection were demonstrated experimentally, with precision ≈0.83, recall ≈0.83, an F1-score ≈0.83, and an end-to-end delay of 1.5–6.5 ms under a load of 100–1000 sessions/s. The computational estimate for typical model parameters is ≈5152 operations for a forward pass and ≈38,944 operations when batch updating is taken into account. At the same time, the main bottleneck, the O(m³) term in the Kalman step, was identified. The practical significance of the results lies in the possibility of integrating the developed adaptive neural network platform into cyber police units (integration with Kafka, Spark, or Flink; exporting incidents to SIEM or SOAR; monitoring via Prometheus or Grafana) and in the proposed applied optimisation paths for embedded and high-load systems. Full article
(This article belongs to the Section Computational Engineering)
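The Hotelling criterion used for deviation detection flags a latent estimate x as anomalous when T² = (x − μ)ᵀ S⁻¹ (x − μ) exceeds a chi-square quantile. A minimal 2-D sketch with an illustrative covariance; in the paper, the latent dimension and the covariance come from the Kalman step, not from these assumed values:

```python
def hotelling_t2(x, mu, S):
    # T^2 = (x - mu)^T S^{-1} (x - mu), with the 2x2 inverse written out.
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    v = [x[0] - mu[0], x[1] - mu[1]]
    w = [inv[0][0] * v[0] + inv[0][1] * v[1],
         inv[1][0] * v[0] + inv[1][1] * v[1]]
    return v[0] * w[0] + v[1] * w[1]

mu, S = [0.0, 0.0], [[1.0, 0.2], [0.2, 1.0]]   # illustrative latent statistics
threshold = 5.991                               # chi-square(2), 95th percentile
print(hotelling_t2([0.5, -0.3], mu, S))         # typical point: below threshold
print(hotelling_t2([3.0, -3.0], mu, S))         # anomaly: well above threshold
```

For latent dimension m the S⁻¹ solve is the O(m³) term the abstract identifies as the bottleneck, which is why the authors discuss optimisation paths for high-load deployments.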

22 pages, 4352 KB  
Article
Risk-Based Analysis of Manufacturing Lead Time in Production Lines
by Oleh Pihnastyi, Anna Burduk and Dagmara Łapczyńska
Appl. Sci. 2025, 15(18), 9917; https://doi.org/10.3390/app15189917 - 10 Sep 2025
Viewed by 362
Abstract
The paper proposes a method for assessing production risks related to potential exceedances of the agreed production lead time for batches of parts in small and medium-sized enterprises. The study focuses on a linear production system composed of sequential technological operations, analyzed within the broader context of production and logistics processes. A stochastic model of the production flow is developed, using dimensionless parameters to describe the state and trajectory of a product in a multidimensional technological space. Internal and external risk factors that affect the duration of operations are taken into account, including equipment failures, delays in material deliveries, and labor availability. Analytical expressions are derived that enable quantitative assessment of the risk of production deadline violations and the resulting losses. The proposed method was validated on a production line manufacturing wooden single-leaf windows. The results indicate that the presence of inter-operational reserves significantly reduces the probability of exceeding production deadlines and enhances the stability of the production process under stochastic disturbances. In most cases, the use of inter-operational buffers reduced the processing time of experimental batches by 18–25% while reducing the production risk severalfold, which confirms the effectiveness of the proposed approach and its practical significance for increasing the sustainability of production systems. Full article
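The qualitative effect reported above (inter-operational buffers lowering the probability of exceeding the agreed lead time) can be reproduced with a small Monte Carlo sketch. The operation-time distribution, the buffer model, and the deadline below are hypothetical illustrations, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_ops, n_runs = 5, 100_000
nominal = 10.0    # planned duration of each operation (assumed units)
deadline = 60.0   # agreed production lead time (assumed)

# Stochastic operation times: gamma-distributed around the nominal duration,
# representing disturbances such as equipment failures or delivery delays.
times = rng.gamma(shape=4.0, scale=2.5, size=(n_runs, n_ops))  # mean 10, std 5

delays = np.maximum(times - nominal, 0.0)
lead_no_buffer = times.sum(axis=1)
# Toy buffer model: inter-operational reserves absorb half of each delay.
lead_with_buffer = (times - 0.5 * delays).sum(axis=1)

risk_no_buffer = (lead_no_buffer > deadline).mean()
risk_with_buffer = (lead_with_buffer > deadline).mean()
```

Since the buffered lead time never exceeds the unbuffered one in this toy model, the estimated risk of a deadline violation is guaranteed to drop, mirroring the direction (though not the magnitude) of the paper's result.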
(This article belongs to the Special Issue Advances in Intelligent Logistics System and Supply Chain Management)

24 pages, 5495 KB  
Article
Self-Organization in Metal Plasticity: An ILG Update
by Avraam Konstantinidis, Konstantinos Spiliotis, Amit Chattopadhyay and Elias C. Aifantis
Metals 2025, 15(9), 1006; https://doi.org/10.3390/met15091006 - 10 Sep 2025
Viewed by 282
Abstract
In a 1987 article dedicated to the memory of Aris Phillips of Yale, a pioneer of classical plasticity, the last author outlined three examples of self-organization during plastic deformation in metals: persistent slip bands (PSBs), shear bands (SBs), and Portevin–Le Chatelier (PLC) bands. All three had long been observed and analyzed experimentally, but no theory captured their spatial characteristics and evolution during deformation. By introducing the Laplacian of dislocation density and strain into the standard constitutive equations for these phenomena, corresponding mathematical models and nonlinear partial differential equations (PDEs) for the governing variable were generated, whose solutions provided for the first time estimates of the wavelength of the ladder structure of PSBs in Cu single crystals, the thickness of stationary SBs in metals, and the spacing of traveling PLC bands in Al-Mg alloys. The present article builds upon the 1987 results for these three examples of self-organization in plasticity within a unifying internal length gradient (ILG) framework and expands them in two major ways: (i) by introducing the effect of stochasticity, and (ii) by capturing statistical characteristics when PDEs are not available for describing experimental observations. The discussion focuses on metallic systems, but the modeling approaches can be used to interpret experimental observations in a variety of materials. Full article
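The gradient modification behind these wavelength and thickness estimates can be sketched schematically in the standard ILG form; the symbols and signs below follow the usual gradient-plasticity convention, not a specific equation from this article:

```latex
% Gradient-enhanced flow stress: a Laplacian term added to the local law
\tau = \kappa(\gamma) - c\,\nabla^{2}\gamma .
% Linearizing about a uniform state \gamma_{0} with a perturbation
% \delta\gamma \sim e^{iqx} shows that spatial modulation is admitted in the
% softening regime (\kappa'(\gamma_{0}) < 0), with an internal length
\ell = \sqrt{\frac{c}{\lvert \kappa'(\gamma_{0})\rvert}} ,
% which sets the scale of the PSB ladder spacing, shear-band thickness,
% and PLC band spacing discussed above.
```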
(This article belongs to the Special Issue Self-Organization in Plasticity of Metals and Alloys)
