Search Results (341)

Search Parameters:
Keywords = breaking probability

15 pages, 1217 KB  
Article
Detecting Phase Transitions from Data Using Generative Learning
by Xiyu Zhou, Yan Mi and Pan Zhang
Entropy 2026, 28(4), 406; https://doi.org/10.3390/e28040406 - 3 Apr 2026
Viewed by 194
Abstract
Identifying phase transitions in complex many-body systems traditionally necessitates the definition of specific order parameters, a task often requiring prior knowledge of the statistical model and the symmetry-breaking mechanism. In this work, we propose a framework for detecting phase transitions directly from raw (experimental) data without requiring knowledge of the underlying model Hamiltonian, parameters, or pre-defined labels. Inspired by generative modeling in machine learning, our method utilizes autoregressive networks to estimate the normalized probability distribution of the system from raw configuration data. We then quantify the intrinsic sensitivity of this learned distribution to control parameters (such as temperature) to construct a robust indicator of phase transitions. This indicator is based on the expectation of the change in absolute logarithmic probability, derived entirely from the raw data. Our approach is purely data-driven: it takes raw data across varying control parameters as input and outputs the most likely estimate of the phase transition point. To validate our approach, we conduct extensive numerical experiments on the 2D Ising model on both triangular and square lattices, and on the Sherrington–Kirkpatrick (SK) model utilizing raw data generated via Markov Chain Monte Carlo and Tensor Network methods. The results demonstrate that our generative approach accurately identifies phase transitions using only raw data. Our framework provides a general tool for exploring critical phenomena in model systems, with the potential to be extended to realistic experimental data where theoretical descriptions remain incomplete. Full article
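The indicator described in this abstract (the expected change in absolute log-probability across control parameters) can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's code: the array names, the Gaussian surrogate data, and the tanh profile standing in for a learned distribution are all assumptions.

```python
import numpy as np

# Illustrative sketch (names and toy data are assumptions, not the paper's code):
# logp[j, i] = log-probability of configuration sample i under a generative model
# trained at control parameter temps[j]. The transition estimate is the parameter
# value where the learned distribution changes fastest.

def transition_indicator(logp, temps):
    """Indicator D(T_j) ~ E_x[ |log p_{T_{j+1}}(x) - log p_{T_j}(x)| ] / dT."""
    dT = np.diff(temps)
    D = np.mean(np.abs(np.diff(logp, axis=0)), axis=1) / dT
    j = int(np.argmax(D))                        # interval where sensitivity peaks
    return D, 0.5 * (temps[j] + temps[j + 1])    # midpoint = estimated transition

# Toy check: a Gaussian whose mean shifts fastest around T = 2.0.
temps = np.linspace(1.0, 3.0, 41)
x = np.random.default_rng(0).normal(size=500)
logp = np.array([-0.5 * (x - np.tanh(8.0 * (T - 2.0))) ** 2 for T in temps])
D, T_c = transition_indicator(logp, temps)       # T_c lands near 2.0
```

The estimate needs no labels or order parameter, only log-probabilities across a sweep of the control parameter, which is the point the abstract makes.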

32 pages, 8572 KB  
Article
Crisis-Regime Dynamic Volatility Spillovers in U.S. Commodity Markets: A Bayesian Mixture-Identified SVAR Approach
by Xinyan Deng, Kentaka Aruga and Chaofeng Tang
Risks 2026, 14(4), 75; https://doi.org/10.3390/risks14040075 - 31 Mar 2026
Viewed by 162
Abstract
Conventional VAR-based volatility spillover measures rely on homoskedasticity and single-Gaussian assumptions, limiting their ability to capture structural breaks and heterogeneous shocks during crises. This study develops a flexible framework to analyze volatility transmission in U.S. commodity markets under multiple crisis regimes. We propose a Bayesian Structural Vector Autoregressive Mixture Normal (BSVAR-MIX) model that embeds finite normal mixtures within a mixture-based heteroskedastic structural VAR framework. The model combines generalized forecast error variance decomposition with posterior-probability weighting. Daily data for eight U.S. benchmark commodities across food, energy, and precious metals markets are examined over the 2008–2016 global financial crisis and the 2017–2025 multi-crisis period, including COVID-19 and the Russia–Ukraine conflict. The BSVAR-MIX framework provides a flexible descriptive setting for capturing multimodal shocks, heteroskedastic volatility states, and regime-dependent spillover patterns in commodity markets. Empirically, gold and oil dominate systemic volatility transmission and soybeans amplify food–energy spillovers, while coal and wheat exhibit rising fragility under policy and geopolitical shocks. Assets commonly viewed as safe havens may contribute to systemic stress during extreme events. Overall, the framework offers a robust tool for structural shock identification and cross-commodity risk monitoring relevant to U.S. macroprudential policy. Full article
(This article belongs to the Special Issue Advances in Volatility Modeling and Risk in Markets)

43 pages, 41548 KB  
Article
Spatiotemporal Evolution and Dynamic Driving Mechanisms of Synergistic Rural Revitalization in Topographically Complex Regions: A Case Study of the Qinba Mountains, China
by Haozhe Yu, Jie Wu, Ning Cao, Lijuan Li, Lei Shi and Zhehao Su
Sustainability 2026, 18(7), 3307; https://doi.org/10.3390/su18073307 - 28 Mar 2026
Viewed by 312
Abstract
In ecologically fragile and geomorphologically complex mountainous regions, ensuring a smooth transition from poverty alleviation to multidimensional sustainable rural development remains a key issue in regional governance. Focusing on the Qinba Mountains, a typical former contiguous poverty-stricken region in China covering 18 prefecture-level cities in six provinces, this study uses 2009–2023 prefecture-level panel data to examine the spatiotemporal evolution and driving mechanisms of coordinated rural revitalization. An integrated framework of “multi-dimensional evaluation–spatiotemporal tracking–attribution diagnosis” is developed by combining the improved AHP–entropy-weight TOPSIS method, the Coupling Coordination Degree (CCD) model, spatial Markov chains, spatial autocorrelation, and the Geodetector. The results show pronounced subsystem asynchrony. Livelihood and Well-being Security (U5) improves steadily, while Level of Industrial Development (U1), Civic Virtues and Cultural Vibrancy (U3), and Rural Governance (U4) also rise but with clear spatial differentiation; by contrast, Quality of Human Settlements (U2) fluctuates in stages under ecological fragility. Overall, the coupling coordination level advances from the Verge of Imbalance to Intermediate Coordination, yet the regional pattern remains uneven, with eastern basin cities leading and western deep mountainous cities lagging. State transitions display both policy responsiveness and path dependence: the probability of retaining the original state ranges from 50.0% to 90.5%; low-level neighborhoods reduce the upward transition probability to 25%, whereas medium-to-high-level neighborhoods raise the upward transition probability of low-level cities from 36.36% to 53.33%. 
Spatial dependence is also evident, with Global Moran’s I increasing, with fluctuations, from 0.331 in 2009 to 0.536 in 2023; high-value clusters extend along the Guanzhong Plain–Han River Valley corridor, while low-value clusters remain relatively locked in mountainous border areas. Driving mechanisms show clear stage-wise succession. At the single-factor level, the explanatory power of Road Network Density (F6) declines from 0.639 to 0.287, whereas Terrain Relief Amplitude (F1) becomes the dominant background constraint in the later stage (q = 0.772). Multi-factor interactions are generally enhanced. In particular, the traditional infrastructure-led pathway weakens markedly, with F1 ∩ F6 = 0.055 in 2023, while the interaction between terrain and consumer market vitality becomes dominant, with F1 ∩ F7 = 0.987 in 2023. On this basis, three major pathways are identified: government fiscal intervention and transportation accessibility improvement, capital agglomeration and market demand stimulation, and human–earth system adaptation and ecological value realization. These findings provide quantitative evidence for breaking spatial lock-in and improving cross-regional resource allocation in ecologically constrained mountainous regions. Full article
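The Coupling Coordination Degree (CCD) model this study applies has a widely used closed form. The sketch below is the generic textbook formulation with equal weights assumed; the paper's improved AHP–entropy-weight TOPSIS weighting is not reproduced here.

```python
import math

def coupling_coordination(U, w=None):
    """Generic n-subsystem coupling coordination degree (CCD).

    C: coupling degree (1 when all subsystem scores are equal),
    T: comprehensive development index, D: coupling coordination degree.
    Standard textbook form; weights w default to equal (an assumption).
    """
    n = len(U)
    w = w or [1.0 / n] * n
    C = n * math.prod(U) ** (1.0 / n) / sum(U)       # geometric/arithmetic mean ratio
    T = sum(wi * ui for wi, ui in zip(w, U))          # weighted development level
    return C, T, math.sqrt(C * T)

# Balanced subsystem scores maximize coupling; imbalance lowers C and hence D.
C1, T1, D1 = coupling_coordination([0.6, 0.6, 0.6, 0.6, 0.6])
C2, T2, D2 = coupling_coordination([0.9, 0.2, 0.6, 0.6, 0.7])
```

The comparison shows the subsystem asynchrony the abstract emphasizes: two regions with the same average development level get different D when one has a lagging subsystem.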
(This article belongs to the Section Sustainable Urban and Rural Development)

27 pages, 1237 KB  
Article
Constraint, Asymmetry, and Meaning: A Cybernetic Reinterpretation of Probabilistic Emergence Across Complex Systems
by Ezra N. S. Lockhart
Symmetry 2026, 18(3), 518; https://doi.org/10.3390/sym18030518 - 18 Mar 2026
Viewed by 303
Abstract
This study develops a Constraint-Driven Model of Intelligence to explain the emergence of structured meaning in complex systems, reconciling probability and cybernetics. It applies a conceptual–analytic procedure, conducted entirely through logical reasoning and theoretical analysis, without empirical measurement, data acquisition, experimental manipulation, or statistical testing, and is therefore methodologically separate from empirical artificial intelligence research. Phenomena such as model collapse are cited as theoretical instances for epistemic argumentation, without asserting empirical verification. Building on Émile Borel’s Infinite Monkey Theorem, which demonstrates the theoretical inevitability of order in unbounded stochastic processes, and Gregory Bateson’s principle of negative explanation, which defines structure as the result of systematically eliminated alternatives, the analysis formalizes how constraints break ergodicity and generate asymmetry. Shannon’s entropy quantifies the informational effects of constraints, while Simon’s bounded rationality and Turing’s algorithmic limits show how cognitive and computational boundaries produce tractable outcomes. Applied to modern AI, the model accounts for model collapse in recursive training, showing that the loss of asymmetric constraints produces low-entropy, repetitive outputs, demonstrating the epistemic necessity of constraint regulation. Comparing probabilistic and cybernetic accounts of emergence, the study shows that structured intelligence arises not from stochastic exploration alone, but from bounded, recursive, selective processes. This model is transdisciplinary, formalizing how constraints from socioeconomic pressures to subcultural circulation shape diversity, innovation, and functional asymmetry, establishing a generalizable cybernetic epistemology for the generation of structured intelligence and meaning across domains. 
By formalizing these concepts through set-theoretic derivations and integrative synthesis, this non-empirical model advances a cybernetic epistemology, separate from quantitative AI evaluations or experimental designs. Full article
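The link the abstract draws between eliminated alternatives and structure can be made concrete with Shannon entropy. The toy below is purely illustrative (the alternative counts are invented for the example, not taken from the paper):

```python
import math

# Bateson's "negative explanation" in Shannon terms (toy illustration): a
# constraint that eliminates alternatives lowers the entropy of the uniform
# distribution over what remains. Counts below are invented for the example.
def uniform_entropy_bits(n_alternatives):
    return math.log2(n_alternatives)

unconstrained = uniform_entropy_bits(26 ** 3)   # all 3-letter strings
constrained = uniform_entropy_bits(600)         # only "admissible" strings (assumed)
bits_removed = unconstrained - constrained      # structure = eliminated alternatives
```

In this framing, "meaning" accrues exactly in the bits removed by the constraint, which is the informational effect the abstract attributes to constraint-driven asymmetry.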

19 pages, 1360 KB  
Article
Workload-Aware Adaptive Duplex Mode Selection for Mobile Ad Hoc Networks: A Workload Zone Estimation Approach
by Zhipeng Feng, Changhao Du and Hongru Zhang
Electronics 2026, 15(6), 1143; https://doi.org/10.3390/electronics15061143 - 10 Mar 2026
Viewed by 241
Abstract
Full-duplex (FD) technology holds great promise for enhancing the spectral efficiency of Mobile Ad Hoc Networks (MANETs) and Wireless Sensor Networks (WSNs). However, the practical performance gain of FD over Half-Duplex (HD) is highly sensitive to the dynamic nature of traffic loads and residual self-interference. Existing Optimal Dynamic Selection Strategies (ODSS) often rely on static workload assumptions within a single time window, failing to capture long-term traffic fluctuations. Consequently, applying instantaneous switching strategies in highly bursty environments necessitates excessively frequent mode switching (e.g., the switching frequency can approach the total number of time windows), incurring prohibitive signaling overhead and unignorable MAC-layer adaptation delays. To overcome these concrete bottlenecks, this paper proposes a comprehensive traffic-aware adaptive duplex mode selection framework. First, we model the multi-scale dynamic workload using Dynamic Activated Probability in Short-term (DAPS) and Long-term (DAPL), effectively characterizing both bursty traffic (via Beta distribution) and Markov-modulated stable traffic. Second, by integrating physical layer performance analysis, we define the Break-even Workload Point (BWP) to partition traffic into Oversaturated (OZ) and Unsaturated (UZ) Workload Zones (WZs). Furthermore, to handle unknown future traffic with low complexity, we propose the Pre-scheduling Duplex selection based on the Workload zone Estimation (PDWE) algorithm. PDWE leverages a Hidden Markov Model (HMM) combined with a Rollout algorithm to estimate hidden traffic states and adaptively pre-schedule duplex modes. Simulation results demonstrate that the proposed strategy achieves near-optimal throughput (approximately 91% of the ideal ODSS) while reducing the duplex switching frequency by two orders of magnitude compared to instantaneous switching strategies. 
This approach offers a robust cross-layer solution for next-generation self-organizing networks. Full article
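The Break-even Workload Point (BWP) idea can be illustrated with a deliberately simplified throughput model. The model below is an assumption for illustration, not the paper's physical-layer analysis: FD serves both directions at rate r·(1 − si) each (si = residual self-interference penalty), HD serves one direction at rate r, and p is the bidirectional-activation probability.

```python
# Toy stand-in for the Break-even Workload Point (BWP); the throughput model
# is an illustrative assumption, not the paper's physical-layer analysis.
#   E[FD] = r*(1 - si)*(1 + p),  E[HD] = r
#   Setting them equal gives the break-even point p* = si / (1 - si).

def break_even_workload(si):
    """Workload p above which FD out-performs HD under the toy model."""
    return si / (1.0 - si)

def select_mode(p, si):
    """Pick FD in the oversaturated zone (p above BWP), else HD."""
    return "FD" if p > break_even_workload(si) else "HD"
```

For example, with a 20% self-interference penalty the break-even workload is 0.25, so zone-level (rather than per-window) estimates of p suffice to pre-schedule the duplex mode, which is the rationale for the workload-zone approach.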
(This article belongs to the Special Issue Technology of Mobile Ad Hoc Networks)

19 pages, 2607 KB  
Article
Non-Hermitian Dynamics in Three-Level Systems: A Perturbative Approach for Time-Dependent Hamiltonians
by Guixiang La, Yexin Li and Gongping Zheng
Entropy 2026, 28(3), 268; https://doi.org/10.3390/e28030268 - 28 Feb 2026
Viewed by 366
Abstract
The conventional time-dependent perturbation theory in quantum mechanics is established within the framework of Hermitian Hamiltonians, applicable for describing quantum transitions and associated energy level responses in such systems. However, this theory has fundamental limitations when applied to non-Hermitian systems. Consequently, researchers have systematically extended time-dependent perturbation theory to non-Hermitian systems, establishing a corresponding mature framework. Building on this foundation, this study extends the theory to investigate the transition dynamics induced by non-Hermitian interactions in non-Hermitian Hamiltonian systems. We employ a biorthogonal basis representation for a three-level non-Hermitian system. This work investigates a system comprising an unperturbed static non-Hermitian Hamiltonian and a periodically driven time-dependent perturbation Hamiltonian. Taking the three-level system as a concrete example, we combine analytical methods with numerical simulations to solve and analyze its dynamical evolution equations. These complementary approaches reveal that when system parameters complete a full cycle around an exceptional point, the transitional behavior exhibits specific evolutionary patterns. In this system, quantum transition probabilities exhibit significant asymmetry and non-conservation that depend on the initial and final states, revealing inherent directional characteristics in the dynamical process. Furthermore, for a three-level, periodically driven non-Hermitian system with time-dependent perturbations, this asymmetry is even more pronounced, manifesting as a distinct disparity between forward and reverse transition probabilities. The periodic driving actively amplifies the asymmetry in the transition process. By designing the perturbation spectrum, selective manipulation of specific quantum states can be achieved. 
Moreover, transition probabilities can be significantly enhanced under resonance conditions, while non-Hermiticity further breaks the system’s inherent symmetry, leading to substantial amplification of transitions in a single direction. By precisely tuning the drive frequency, interactions between specific coupling channels can be selectively enhanced or suppressed. The amplification of channel asymmetry by non-Hermitian properties provides a novel mechanism for directional control of quantum states and opens new pathways for realizing related quantum technologies. Full article

18 pages, 3503 KB  
Article
Numerical Simulation of Air-Water-Mineral Three-Phase Flow in a Flotation Column for Graphite
by Zhineng Liu, Jun Wang, Dongfang Lu, Hongchang Liu, Baojun Yang, Rui Liao, Lianjun Wu and Guanzhou Qiu
Minerals 2026, 16(3), 254; https://doi.org/10.3390/min16030254 - 28 Feb 2026
Viewed by 248
Abstract
This study aims to clarify the influence mechanism of air–water–mineral three-phase flow behavior on separation efficiency in a graphite flotation column, addressing the issues of over-breaking of coarse graphite flakes and low recovery of fine particles caused by mismatched flow fields and operating parameters in traditional flotation columns. Using CFD numerical simulations based on the Eulerian multiphase flow model, the standard k-ε turbulence model, and scalable wall functions, the effects of feed velocity (0.8–2.4 m/s) and aeration velocity (1–5 m/s) on the flow field structure, gas holdup distribution, and weighted average bubble–particle collision probability inside the column were systematically analyzed. Key quantitative results show that under the synergistic condition of a feed velocity of 2 m/s and an aeration velocity of 3 m/s, an internal circulation flow field conducive to particle retention is formed. Under these conditions, the gas holdup in the collection zone reaches an optimal range (0.26–0.27), and the weighted average collision probability increases by approximately 22% compared to the baseline condition. Aeration velocity shows a significant positive correlation with gas holdup in the collection zone (~0.235 at 1 m/s, rising to ~0.285 at 5 m/s). While an increase in feed velocity reduces the overall gas volume fraction, it enhances turbulence, promotes uniform bubble dispersion, and shifts the regions of high collision probability from the upper part to the upper–middle part of the column, improving the uniformity of their distribution. The novelty of this study lies in being the first to quantitatively reveal, through CFD simulation, the coupled regulatory effects of feed velocity and aeration velocity on the stratified flow field structure and mineralization probability in a flotation column and to identify the key optimization threshold of “2 m/s feed velocity”.
The practical significance is that it provides a clear theoretical basis and operational window for energy saving, consumption reduction, and process intensification in industrial flotation columns. It offers directly applicable parameter optimization strategies for the efficient recovery of fine-flake graphite and the protection of coarse flakes. Full article

23 pages, 694 KB  
Article
Statistical Applications of the Ujlayan–Dixit Fractional Lomax Probability Distribution
by Nesreen M. Al-Olaimat, Mohammad A. Amleh, Baha’ Abughazaleh, Rania Saadeh and Mohamed Hafez
Fractal Fract. 2026, 10(3), 155; https://doi.org/10.3390/fractalfract10030155 - 27 Feb 2026
Viewed by 295
Abstract
The Ujlayan–Dixit (UD) fractional calculus provides a powerful fractional extension of the Lomax distribution, offering a suitable framework for representing complex behaviors beyond classical approaches. In this paper, we adopt the UD fractional Lomax distribution and establish its statistical theory. Based on the adopted density, we derive closed-form expressions for the cumulative distribution, survival, and hazard functions, as well as the mode. Several UD fractional statistical measures of the Lomax random variable are derived, including the fractional moments, fractional information-theoretic measures (UD fractional Shannon and Tsallis entropies), and the probability density function of the kth order statistic under the UD fractional framework. Finally, a real data application concerning the breakdown time of an insulating fluid is used to illustrate the usefulness of the proposed distribution in modeling real data. The fitting performance of the suggested model is compared with several extensions of the Lomax distribution. The comparative results show that the UD fractional Lomax distribution outperforms several well-known extensions of the Lomax distribution. This framework provides researchers with robust tools for advanced reliability assessment, uncertainty quantification, and risk modeling, offering insights into phenomena not captured by the classical Lomax distribution. Moreover, when the fractional parameter q → 1, the proposed approach converges to the classical Lomax results, bridging fractional and classical perspectives. Full article
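For orientation, the classical Lomax limit the abstract mentions has simple closed forms. The sketch below covers only that classical case (with the usual shape α and scale λ, which may not match the paper's notation); the UD fractional versions themselves are not reproduced here.

```python
# Classical Lomax distribution (the q -> 1 limit mentioned in the abstract);
# alpha = shape, lam = scale. The UD fractional forms are not reproduced here.
def lomax_pdf(x, alpha, lam):
    return (alpha / lam) * (1.0 + x / lam) ** (-(alpha + 1.0))

def lomax_sf(x, alpha, lam):
    """Survival function S(x) = (1 + x/lam)^(-alpha)."""
    return (1.0 + x / lam) ** (-alpha)

def lomax_hazard(x, alpha, lam):
    """Hazard = pdf / sf, which simplifies to alpha / (lam + x)."""
    return lomax_pdf(x, alpha, lam) / lomax_sf(x, alpha, lam)
```

The monotonically decreasing hazard α/(λ + x) is what makes the Lomax family a natural baseline for heavy-tailed breakdown-time data such as the insulating-fluid application.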
(This article belongs to the Section Probability and Statistics)

27 pages, 2051 KB  
Review
Environmental Substances Associated with Neurodegeneration: An Overview of Parkinson’s Disease and Related Genotoxic Endpoints
by Mohammad Shoeb, Breanna Alman, Harpriya Kaur, Moon Han, Fahim Atif, William Wu Kim, Siddhi Desai, Patricia Ruiz and Gregory M. Zarus
Genes 2026, 17(2), 236; https://doi.org/10.3390/genes17020236 - 13 Feb 2026
Viewed by 1112
Abstract
Parkinson’s disease (PD) is a complex neurodegenerative disorder influenced by age, genetic predispositions, and environmental exposures, with a growing global incidence. This review aims to summarize findings from ATSDR Toxicological Profiles, EPA Risk Assessments, and other sources of peer-reviewed literature to examine the potential associations between PD and select metals, pesticides, and chlorinated organic compounds. Additionally, it explores using computational toxicology methods to elucidate the interactions between specific chemicals, associated genes, and their possible roles in PD. A total of 29 substances were identified as neurotoxic with a direct or probable association with PD. Risk of disease onset or symptom exacerbation of PD has been linked to exposures to neurodegenerative metals, pesticides, chlorinated organic compounds, and other environmental toxicants, alongside intrinsic factors such as genetic predisposition and aging. Supporting evidence from neurotoxicological studies directly or possibly associated with PD is summarized in referenced toxicological profiles and EPA risk assessments. Genotoxic endpoints evaluated in exposure-induced neurodegeneration, including oxidative stress, DNA strand breaks, mitochondrial dysfunction, impaired DNA repair, and telomere alterations, may play a critical role in linking environmental exposures to PD pathogenesis. Although these endpoints represent important data gaps between environmental and genetic risk factors for PD, isolating individual substances may not be necessary for prevention, as many co-occur at contaminated sites or within certain occupations. Further research is needed to clarify causal relationships between environmental exposures and the genotoxic endpoints seen in neurodegenerative processes, including PD, to inform the development of preventive and therapeutic strategies. Full article
(This article belongs to the Section Neurogenomics)

24 pages, 4307 KB  
Article
Stochastic Neuromorphic Computing Architecture Based on Voltage-Controlled Probabilistic Switching Magnetic Tunnel Junction (MTJ) Devices
by Liang Gao, Chenxi Wang and Yanfeng Jiang
Micromachines 2026, 17(2), 216; https://doi.org/10.3390/mi17020216 - 5 Feb 2026
Viewed by 487
Abstract
As integrated circuits face increasingly stringent demands regarding power consumption, area, and stability, integrating novel spintronic devices with computing architectures has become a crucial direction for breaking through traditional computing paradigms. In this paper, the switching mechanism of Magnetic Tunnel Junctions (MTJs) under the synergistic effect of Voltage-Controlled Magnetic Anisotropy (VCMA) and the Spin Hall Effect (SHE) is investigated. A VCMA-assisted switching SHE-MTJ device is adopted, and a macrospin approximation model is established based on the Landau-Lifshitz-Gilbert (LLG) equation to systematically analyze its dynamic characteristics. The research demonstrates that applying VCMA voltage pulses with appropriate amplitude and width can significantly reduce the spin Hall current density and pulse width required for switching, thereby effectively minimizing ohmic losses and Joule heating. Furthermore, by incorporating a thermal fluctuation field, a voltage-controlled SHE-MTJ device with stochastic switching behavior can be constructed, yielding an approximately sigmoidal voltage-probability response curve. This provides an ideal physical foundation for stochastic computing and neuromorphic computing. Building on this finding, an in-memory computing architecture supporting binarized Convolutional Neural Networks (CNNs) is proposed and designed. Combined with the lightweight network SqueezeNet, this architecture achieves a Top-1 recognition accuracy of 72.49% on the CIFAR-10 dataset, with a parameter count of only 1.25 × 10⁶. This work offers a feasible spintronic implementation scheme for low-power, high-energy-efficiency edge-side intelligent chips. Full article
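The sigmoidal voltage-probability response described here is the usual abstraction for probabilistic-bit devices. A minimal sketch of sampling from such a curve follows; the threshold voltage v0 and transition width w are illustrative parameters, not device values from the paper.

```python
import math
import random

# Toy sketch of a sigmoidal voltage-probability switching curve; v0 and w are
# illustrative assumptions, not device parameters from the paper.
def switch_probability(v, v0=1.0, w=0.1):
    """P(switch) = sigmoid((v - v0) / w); exactly 0.5 at the threshold v0."""
    return 1.0 / (1.0 + math.exp(-(v - v0) / w))

def stochastic_bit(v, rng, v0=1.0, w=0.1):
    """One probabilistic switching event at applied voltage v."""
    return rng.random() < switch_probability(v, v0, w)

rng = random.Random(42)
p_mid = switch_probability(1.0)   # 0.5 at threshold: maximally stochastic
p_hi = switch_probability(2.0)    # deep in the near-deterministic regime
```

Biasing the device at the threshold yields an unbiased random bit, while biasing far from it recovers deterministic writes, which is what makes one device serve both stochastic and conventional in-memory computing.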

29 pages, 5239 KB  
Article
Density Functional Theory Study of the Photocatalytic Degradation of Penicillin by Nanocrystalline TiO2
by Corneliu I. Oprea, Robert M. Solomon and Mihai A. Gîrțu
Catalysts 2026, 16(2), 171; https://doi.org/10.3390/catal16020171 - 5 Feb 2026
Viewed by 810
Abstract
A promising route for removing antibiotics such as penicillin from wastewater is photocatalytic degradation under UV irradiation using TiO2 nanoparticles. However, the microscopic mechanisms governing the initial degradation steps remain poorly understood. In particular, it is still unclear whether degradation preferentially occurs in solution or upon adsorption on the oxide surface, and which molecular sites are most vulnerable to attack in solution compared to those activated on the catalyst. In this work, we introduce a unified density functional theory approach that treats penicillin V (phenoxymethylpenicillin) consistently, both isolated in solution and adsorbed on an anatase TiO2 nanocluster, enabling a direct comparison between solution-phase and surface-mediated degradation pathways. Within this framework, we analyze the adsorption configurations, energy-level alignment, charge-transfer pathways, UV-Vis absorption properties, local reactivity descriptors, and the initial steps leading to bond breaking. The results show that the direct photoexcitation of PenV followed by electron transfer to the oxide is less likely, due to the high energy of the pollutant’s excited states. In contrast, degradation initiated by the transfer of photogenerated holes from the catalyst to the adsorbed antibiotic appears more probable, driven by the smaller energetic offset and by the hybridization between molecular and oxide states. Overall, adsorption on the oxide surface appears to be more conducive to degradation, with the carbon atom in the β-lactam ring consistently identified as a susceptible site for attack across different environments. Full article
(This article belongs to the Special Issue Advances in Photocatalytic Degradation, 2nd Edition)

37 pages, 5937 KB  
Article
A Multi-Task Service Composition Method Considering Inter-Task Fairness in Cloud Manufacturing
by Zhou Fang, Yanmeng Ying, Qian Cao, Dongsheng Fang and Daijun Lu
Symmetry 2026, 18(2), 238; https://doi.org/10.3390/sym18020238 - 29 Jan 2026
Viewed by 383
Abstract
Within the cloud manufacturing paradigm, Cloud Manufacturing Service Composition (CMSC) is a core technology for intelligent resource orchestration in Cloud Manufacturing Platforms (CMP). However, existing research faces critical limitations in real-world CMP operations: single-task-centric optimization ignores resource sharing and competition among coexisting manufacturing tasks (MTs), causing performance degradation and resource “starvation”; traditional heuristics require full re-execution for new scenarios and therefore cannot support real-time online decision-making; and single-agent reinforcement learning (RL) lacks mechanisms to balance global efficiency against inter-task fairness. To address these challenges, this paper proposes a fairness-aware multi-task CMSC method based on Multi-Agent Reinforcement Learning (MARL) under the Centralized Training with Decentralized Execution (CTDE) framework, targeting the symmetry-breaking issue of uneven resource allocation among MTs and aiming to restore relative balance in resource acquisition. The method constructs a multi-task CMSC model that captures real-world resource sharing and competition among concurrent MTs, and integrates a centralized global coordination agent into the MARL framework (with an independent task agent per MT) to dynamically regulate resource-selection probabilities, overcoming single-agent fairness defects while preserving distributed autonomy. Additionally, a two-layer attention mechanism is introduced: task-level self-attention captures intra-task subtask correlations, and global-state self-attention highlights critical resource features, enabling precise synergy between local task characteristics and global resource states.
Experiments verify that the proposed method significantly enhances inter-task fairness while maintaining superior global Quality of Service (QoS), demonstrating its effectiveness in balancing efficiency and fairness for dynamic multi-task CMSC.
(This article belongs to the Section Computer)
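The coordination idea in the abstract, a central agent damping the resource-selection probabilities of tasks that are already well served so that under-served tasks regain access, can be sketched in a few lines. Everything below (the softmax policies, the share-weighted congestion penalty, the `beta` knob, the function names) is an illustrative assumption for intuition, not the paper's actual mechanism:

```python
import math

def softmax(logits):
    # numerically stable softmax over a list of logits
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def coordinate(task_logits, task_shares, beta=1.0):
    """Toy global coordinator (hypothetical, not from the paper).
    Each task agent proposes selection logits over shared resources;
    the coordinator penalizes congested resources, with the penalty
    scaled by the task's accumulated resource share, so over-served
    tasks back off first and under-served tasks keep access.
    beta = 0 disables coordination entirely."""
    own = [softmax(l) for l in task_logits]
    n_res = len(task_logits[0])
    # expected congestion per resource under the uncoordinated policies
    load = [sum(p[r] for p in own) for r in range(n_res)]
    adjusted = []
    for logits, share in zip(task_logits, task_shares):
        adjusted.append(softmax([l - beta * share * load[r]
                                 for r, l in enumerate(logits)]))
    return adjusted
```

With two tasks that prefer the same resource, the task with the larger accumulated share ends up with the lower probability of grabbing it, which is the "symmetry restoration" effect described above.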
21 pages, 9088 KB  
Article
GMM-Enhanced Mixture-of-Experts Deep Learning for Impulsive Dam-Break Overtopping at Dikes
by Hanze Li, Yazhou Fan, Luqi Wang, Xinhai Zhang, Xian Liu and Liang Wang
Water 2026, 18(3), 311; https://doi.org/10.3390/w18030311 - 26 Jan 2026
Abstract
Impulsive overtopping generated by dam-break surges is a critical hazard for dikes and flood-protection embankments, especially in reservoirs and mountainous catchments. Unlike classical coastal wave overtopping, which is governed by long, irregular wave trains and usually characterized by the mean overtopping discharge over many waves, these dam-break-type events are dominated by one or a few strongly nonlinear bores with highly transient overtopping heights. Accurately predicting the resulting overtopping levels under such impulsive flows is therefore important for flood-risk assessment and emergency planning. Conventional cluster-then-predict approaches proposed in recent years first partition data into subgroups and then train separate models for each cluster; however, they suffer from rigid cluster boundaries and discard the uncertainty information contained in the clustering results. To overcome these limitations, we propose a GMM+MoE framework that integrates Gaussian Mixture Model (GMM) soft clustering with a Mixture-of-Experts (MoE) predictor. The GMM provides posterior probabilities of regime membership, which the MoE gating mechanism uses to adaptively assign expert models. Using SPH-simulated overtopping data with physically interpretable dimensionless parameters, the framework is benchmarked against XGBoost, GMM+XGBoost, MoE, and Random Forest. Results show that GMM+MoE achieves the highest accuracy (R² = 0.9638 on the testing dataset) and the most concentrated residual distribution, confirming its robustness. Furthermore, SHAP-based feature attribution reveals that relative propagation distance and wave height are the dominant drivers of overtopping, providing physically consistent explanations.
This demonstrates that combining soft clustering with adaptive expert allocation not only improves accuracy but also enhances interpretability, offering a practical tool for dike safety assessment and flood-risk management in reservoirs and mountain river valleys.
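The core of the GMM+MoE gating described above, weighting expert predictions by posterior cluster probabilities instead of a hard assignment, reduces to y(x) = Σ_k P(k|x) f_k(x). A minimal one-dimensional sketch follows; the component parameters and the constant experts are placeholders, not fitted values from the paper:

```python
import math

def gaussian_pdf(x, mu, sigma):
    # density of N(mu, sigma^2) at x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def gmm_posteriors(x, weights, mus, sigmas):
    """Posterior P(k | x) for each mixture component, i.e. the soft
    cluster membership that replaces a hard cluster label."""
    joint = [w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
    z = sum(joint)
    return [j / z for j in joint]

def moe_predict(x, gmm_params, experts):
    """Gate expert predictions by the GMM posterior:
    y(x) = sum_k P(k|x) * f_k(x)."""
    post = gmm_posteriors(x, *gmm_params)
    return sum(p * f(x) for p, f in zip(post, experts))
```

At a point equidistant from two equally weighted components the posteriors are 0.5 each, so the prediction is the plain average of the two experts; away from the boundary one expert smoothly dominates, which is exactly what a rigid cluster boundary cannot do.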
25 pages, 2812 KB  
Article
Field-Scale Techno-Economic Assessment and Real Options Valuation of Carbon Capture Utilization and Storage—Enhanced Oil Recovery Project Under Market Uncertainty
by Chang Liu, Cai-Shuai Li and Xiao-Qiang Zheng
Sustainability 2026, 18(2), 805; https://doi.org/10.3390/su18020805 - 13 Jan 2026
Abstract
This study develops a field-based techno-economic model and decision framework for a CO₂-enhanced oil recovery and storage project under joint market uncertainty. Historical drilling and completion expenditures calibrate the investment cost functions, and three years of production data are fitted with segmented hyperbolic Arps decline curves to forecast 20 years of oil output. Markov-chain models jointly generate internally consistent price pathways for crude oil, ETA, and purchased CO₂, which are embedded in a Monte Carlo valuation. The framework outputs probability distributions of NPV and of the deferral option value; under the mid scenario, their mean values are USD 18.1M and USD 2.0M, respectively. PRCC-based global sensitivity analysis identifies the dominant value drivers as oil price, CO₂ price, utilization factor, oil density, pipeline length, and injection volume. Techno-economic boundary maps in the joint oil and CO₂ price space then delineate feasible regions and break-even thresholds for key design parameters. Results indicate that CCUS-EOR viability cannot be inferred from the oil price or any single cost factor alone; it requires coordinated consideration of subsurface constraints, engineering configuration, and multi-market dynamics, including the value of waiting in unfavorable regimes, thereby contributing to low-carbon development and sustainable energy-transition objectives.
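A toy version of the valuation pipeline above can be sketched in a few lines: a hyperbolic Arps decline forecasts output, a Markov chain drives the price, and Monte Carlo averages the discounted cash flows. All numbers, names, and the single-commodity simplification are illustrative assumptions; the paper's model couples several price processes with calibrated cost functions:

```python
import random

def arps_hyperbolic(qi, di, b, t):
    """Arps hyperbolic decline: q(t) = qi / (1 + b*di*t)^(1/b),
    with initial rate qi, initial decline di, and b-exponent 0 < b < 1."""
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def simulate_npv(n_paths, years, qi, di, b, price_states, transition, r, opex, seed=0):
    """Monte Carlo mean NPV with a Markov-chain price (toy setup):
    each year the price state advances one step of the chain, annual cash
    flow is production times (price - opex), discounted at rate r."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_paths):
        state, npv = 0, 0.0
        for t in range(years):
            # advance the price chain one step
            u, cum = rng.random(), 0.0
            for j, p in enumerate(transition[state]):
                cum += p
                if u < cum:
                    state = j
                    break
            cash = arps_hyperbolic(qi, di, b, t) * (price_states[state] - opex)
            npv += cash / (1.0 + r) ** (t + 1)
        npvs.append(npv)
    return sum(npvs) / n_paths
```

Running many paths yields the NPV distribution; the deferral option value in the paper is, loosely, the gain from being allowed to wait and only invest on favorable price paths rather than committing immediately.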

20 pages, 2775 KB  
Article
Enhancing Statistical Modeling with the Marshall–Olkin Unit-Exponentiated-Half-Logistic Distribution: Theoretical Developments and Real-World Applications
by Ömer Özbilen
Symmetry 2025, 17(12), 2084; https://doi.org/10.3390/sym17122084 - 4 Dec 2025
Abstract
This paper introduces the Marshall–Olkin unit-exponentiated-half-logistic (MO-UEHL) distribution, a novel three-parameter model designed to enhance the flexibility of the unit-exponentiated-half-logistic distribution through the incorporation of the Marshall–Olkin transformation. Defined on the unit interval (0,1), the MO-UEHL distribution is well-suited for modeling proportional data exhibiting asymmetry. The Marshall–Olkin tilt parameter α explicitly controls the degree and direction of asymmetry, enabling the density to range from highly right-skewed to nearly symmetric unimodal forms, and even to left-skewed configurations for certain parameter values, thereby offering a direct mathematical representation of symmetry breaking in bounded proportional data. The resulting model achieves this versatility without relying on exponential terms or special functions, thus simplifying computational procedures. We derive its key mathematical properties, including the probability density function, cumulative distribution function, survival function, hazard rate function, quantile function, moments, and information-theoretic measures such as the Shannon and residual entropy. Parameter estimation is explored using maximum likelihood, maximum product spacing, ordinary and weighted least-squares, and Cramér–von Mises methods, with simulation studies evaluating their performance across varying sample sizes and parameter sets. The practical utility of the MO-UEHL distribution is demonstrated through applications to four real datasets from environmental and engineering contexts. The results highlight the MO-UEHL distribution’s potential as a valuable tool in reliability analysis, environmental modeling, and related fields.
(This article belongs to the Section Mathematics)
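The Marshall–Olkin transformation itself is compact: given any base CDF F, the tilted CDF is G(x) = F(x) / (1 − (1 − α)(1 − F(x))), with α > 0 and α = 1 recovering F. Since the abstract does not state the UEHL CDF, the sketch below applies the transform to a stand-in power CDF x^θ on (0, 1), which is an assumption for illustration only:

```python
def mo_cdf(base_cdf, alpha, x):
    """Marshall-Olkin transform of a base CDF F:
        G(x) = F(x) / (1 - (1 - alpha) * (1 - F(x))),  alpha > 0.
    alpha = 1 recovers F exactly; other values tilt the distribution,
    which is the role of the tilt parameter alpha in the abstract."""
    F = base_cdf(x)
    return F / (1.0 - (1.0 - alpha) * (1.0 - F))

def power_cdf(x, theta=2.0):
    # stand-in base CDF on (0, 1); the paper's UEHL CDF is not given here
    return x ** theta
```

Whatever base CDF is plugged in, G inherits the endpoints G(0) = 0, G(1) = 1 and monotonicity, so the transform always yields a valid distribution on the same support.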
