Search Results (99)

Search Parameters:
Keywords = Pareto distribution type I

23 pages, 490 KB  
Article
Statistical Analysis of NO2 Emissions from Eskom’s Majuba Coal-Fired Power Station in Mpumalanga, South Africa
by Mpendulo Wiseman Mamba and Delson Chikobvu
Atmosphere 2026, 17(4), 415; https://doi.org/10.3390/atmos17040415 - 19 Apr 2026
Viewed by 64
Abstract
Gaseous emissions from coal combustion during electricity generation remain a challenge in South Africa. To meet regulatory limits, it is crucial to understand the statistical distribution of such emissions from power-generating plants. This paper characterises nitrogen dioxide (NO2) emissions from Eskom's Majuba coal-fired power station using quantile–quantile (QQ) plots and derivative plots of three statistical parent distributions: the Weibull, Lognormal, and Pareto. These distributions are fitted and compared according to their tail heaviness, as they cater for data whose tails may be lighter or heavier than that of the Exponential distribution. Of the three distributions evaluated, the Lognormal gave the best fit to the full body of the data according to the QQ and derivative plots and the goodness-of-fit tools (bootstrap Kolmogorov–Smirnov (KS), Anderson–Darling (AD), Akaike Information Criterion (AIC), Schwarz's Bayesian Information Criterion (BIC), and the BIC-corrected Vuong test for non-nested distributions). The Lognormal distribution also gave the best fit to the overall upper tail, while the six largest NO2 emission observations exhibited a Pareto-type tail. The practical implication of a heavy tail such as the Pareto is that it assigns higher probability to large NO2 emissions than lighter tails such as the Weibull and Lognormal do. The methods used in this study provide a framework for modelling NO2 emissions from a coal-fired power station with statistical parent distributions while also accounting for the distribution of the data in the tails, which is usually ignored when fitting parent distributions. Understanding the distribution of the upper tail is especially important, since high, rare emissions are of the most concern and are dangerous to human health and the environment.
(This article belongs to the Special Issue Modeling and Monitoring of Air Quality: From Data to Predictions)
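
For readers who want to reproduce this kind of comparison, the core step (maximum-likelihood fitting of candidate parent distributions and ranking by AIC/BIC) can be sketched with SciPy. The synthetic data, fixed zero location, and variable names below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
emissions = rng.lognormal(mean=3.0, sigma=0.5, size=500)  # stand-in for NO2 data

candidates = {
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "pareto": stats.pareto,
}

for name, dist in candidates.items():
    params = dist.fit(emissions, floc=0)           # ML fit, location pinned at 0
    loglik = np.sum(dist.logpdf(emissions, *params))
    k = len(params) - 1                            # free parameters (loc is fixed)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(len(emissions)) - 2 * loglik
    print(f"{name:9s}  AIC={aic:10.1f}  BIC={bic:10.1f}")
```

Lower AIC/BIC indicates the better-fitting parent distribution; the paper's QQ and derivative plots then probe whether that ranking still holds in the upper tail.
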
17 pages, 592 KB  
Article
Modelling Extreme Losses in JSE Life Insurance Price Index Growth Rates Using the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD)
by Delson Chikobvu, Tendai Makoni and Frans Frederik Koning
Data 2026, 11(4), 86; https://doi.org/10.3390/data11040086 - 16 Apr 2026
Viewed by 194
Abstract
The life insurance sector plays a critical role in financial system stability but is inherently exposed to extreme market fluctuations due to long-term liabilities and asset–liability mismatches. This study investigates extreme losses in the growth rates of the JSE Life Insurance Price Index (LIPI) using the Generalised Extreme Value Distribution (GEVD) and the Generalised Pareto Distribution (GPD) under the Extreme Value Theory (EVT) framework. Monthly data from January 2000 to October 2023 were transformed into a loss series, and extreme events were captured using quarterly block maxima and a POT threshold at the 95th percentile. Model parameters were estimated through Maximum Likelihood Estimation, and downside risk was assessed using return levels, Value-at-Risk (VaR), and Tail Value-at-Risk (tVaR). The GEVD model produced a negative shape parameter, consistent with a bounded Weibull-type tail, while the GPD indicated a heavy-tailed distribution. Return level estimates show escalating loss magnitudes and widening uncertainty over longer horizons, reflecting the challenges of projecting rare events. Kupiec backtesting confirms the adequacy and reliability of the GEVD-based VaR across all confidence levels, whereas the GPD underestimates risk at lower thresholds. These findings indicate significant tail risk within the South African life insurance equity segment and underscore the importance of EVT-based risk measures for capital planning and regulatory oversight. The study contributes to financial risk modelling in the life insurance sector and offers practical insights for strengthening solvency assessment and enterprise risk management frameworks.
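
The peaks-over-threshold step described above can be sketched with scipy.stats.genpareto; the simulated loss series, seed, and the standard POT Value-at-Risk formula below are illustrative assumptions, not the paper's estimates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = stats.t.rvs(df=4, size=286, random_state=rng)  # stand-in monthly losses

u = np.quantile(losses, 0.95)                  # POT threshold at the 95th percentile
exceed = losses[losses > u] - u
xi, _, sigma = stats.genpareto.fit(exceed, floc=0)  # ML shape (xi) and scale (sigma)

def var_gpd(q, n=len(losses), nu=len(exceed)):
    """VaR at level q from the fitted GPD tail (standard POT formula)."""
    return u + sigma / xi * ((n / nu * (1 - q)) ** (-xi) - 1)

for q in (0.95, 0.99, 0.999):
    print(f"VaR_{q}: {var_gpd(q):.3f}")
```
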

16 pages, 1800 KB  
Article
Navigating Extreme Market Fluctuations: Asset Allocation Strategies in Developed vs. Emerging Economies
by Lumengo Bonga-Bonga
Econometrics 2026, 14(1), 16; https://doi.org/10.3390/econometrics14010016 - 17 Mar 2026
Viewed by 458
Abstract
This paper examines how assets from emerging and developed stock markets can be efficiently allocated during periods of financial crisis by integrating traditional portfolio theory with Extreme Value Theory (EVT), using the Generalized Pareto Distribution (GPD) and Generalized Extreme Value (GEV) approaches to model tail risks. This study evaluates mean-variance portfolios constructed under each EVT framework and finds that portfolios based on GPD estimates consistently favour emerging market assets, which outperform both developed market and internationally diversified portfolios during extreme market conditions. In contrast, GEV-based portfolios indicate superior performance for developed market assets, highlighting the distinct behaviour of returns in the upper and lower tails of the distribution. These contrasting results reveal the unique nature of safe-haven characteristics associated with developed economies, the assets of which demonstrate greater stability and resilience during episodes of financial stress. By showing how tail-risk modelling alters optimal portfolio weights across market types, this paper contributes new evidence to the literature on crisis-informed asset allocation and offers practical insights for investors seeking robust diversification strategies under extreme market fluctuations.
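
As background for the mean-variance construction mentioned above, a minimal sketch of global minimum-variance weights is shown below; the simulated returns are a stand-in, and the EVT-based tail estimates that drive the paper's results are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in daily returns: 3 developed-market and 2 emerging-market assets
returns = rng.normal(0.0004, 0.01, size=(1000, 5))

cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()            # minimum-variance weights: cov^{-1} 1 / (1' cov^{-1} 1)
print(np.round(w, 3))
```
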

36 pages, 997 KB  
Article
Genetic Algorithms for Pareto Optimization in Bayesian Cournot Games Under Incomplete Cost Information
by David Carfí, Alessia Donato and Emanuele Perrone
Mathematics 2026, 14(5), 762; https://doi.org/10.3390/math14050762 - 25 Feb 2026
Viewed by 429
Abstract
This paper develops a practical computational framework for the Bayesian Cournot model with bilateral incomplete cost information, where each player is uncertain about the opponent's marginal cost, drawn from a continuous compact interval [c_*, c^*] with 0 < c_* < c^* < ∞. The infinite dimensionality of the functional strategy spaces (mappings from types to production quantities) renders analytical closed-form solutions infeasible in this continuous-type setting. To overcome this challenge, we restrict the strategy spaces to finite-dimensional differentiable sub-manifolds—specifically, one-parameter families of oscillatory functions (cosine, sine, and mixed forms). After suitable affine Q-rescaling to map the oscillatory range into the production interval [0, Q], and with parameter ranges satisfying α, β > (π/2)/c_*, these curves ensure near-exhaustivity: the joint production map (α, β) ↦ (x_α(s), y_β(t)) covers [0, Q]^2 densely for every fixed cost pair (s, t), thereby recovering (up to density and closure) the full ex-post payoff space. We introduce the ex-post payoff mapping Φ(s, t, x, y) = (e_s(x, y)(t), f_t(x, y)(s)), which collects every realizable payoff pair once nature draws the types and players select their strategies. The image of Φ defines the general payoff space of the game, and its non-dominated points constitute the general ex-post Pareto frontier—all efficient realized outcomes across type-strategy realizations, without dependence on private probability measures over types. Using multi-objective genetic algorithms, we numerically approximate this frontier (and selected collusive compromises) within the restricted but representative sub-manifolds. The resulting frontiers are computationally accessible, robust to parameter variations, and validated through hypervolume convergence, sensitivity analysis, and comparisons with NSGA-II, PSO, and scalarization methods. The findings are significant because they provide decision-makers in oligopolistic markets (e.g., electric vehicles) with viable, implementable production policies that explore efficient trade-offs under genuine cost uncertainty, without requiring explicit forecasts of the opponent's type distribution—a limitation of traditional expected-utility approaches. By focusing on ex-post efficiency, the method reveals belief-independent compromise solutions that may guide tacit coordination or collusive outcomes in real-world strategic settings.
(This article belongs to the Special Issue AI in Game Theory: Theory and Applications)
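
The building block of any such multi-objective search is extracting the non-dominated (Pareto) subset of candidate payoff pairs. A minimal maximisation version, with random stand-in payoffs rather than the paper's Cournot payoffs, might look like this:

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated subset of 2-D payoff pairs (maximisation)."""
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some point is >= in both payoffs and > in at least one
        dominated = np.any(np.all(pts >= p, axis=1) & np.any(pts > p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

rng = np.random.default_rng(3)
payoffs = rng.uniform(size=(200, 2))   # stand-in realized payoff pairs (e, f)
front = pareto_front(payoffs)
print(f"{len(front)} non-dominated points out of {len(payoffs)}")
```
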

23 pages, 3492 KB  
Article
Multi-Objective Reinforcement Learning for Virtual Impedance Scheduling in Grid-Forming Power Converters Under Nonlinear and Transient Loads
by Jianli Ma, Kaixiang Peng, Xin Qin and Zheng Xu
Energies 2025, 18(24), 6621; https://doi.org/10.3390/en18246621 - 18 Dec 2025
Viewed by 557
Abstract
Grid-forming power converters play a foundational role in modern microgrids and inverter-dominated distribution systems by establishing voltage and frequency references during islanded or low-inertia operation. However, when subjected to nonlinear or impulsive impact-type loads, these converters often suffer from severe harmonic distortion and transient current overshoot, leading to waveform degradation and protection-triggered failures. While virtual impedance control has been widely adopted to mitigate these issues, conventional implementations rely on fixed or rule-based tuning heuristics that lack adaptivity and robustness under dynamic, uncertain conditions. This paper proposes a novel reinforcement learning-based framework for real-time virtual impedance scheduling in grid-forming converters, enabling simultaneous optimization of harmonic suppression and impact load resilience. The core of the methodology is a Soft Actor-Critic (SAC) agent that continuously adjusts the converter's virtual impedance tensor—comprising dynamically tunable resistive, inductive, and capacitive elements—based on real-time observations of voltage harmonics, current derivatives, and historical impedance states. A physics-informed simulation environment is constructed, including nonlinear load models with dominant low-order harmonics and stochastic impact events emulating asynchronous motor startups. The system dynamics are modeled through a high-order nonlinear framework with embedded constraints on impedance smoothness, stability margins, and THD compliance. Extensive training and evaluation demonstrate that the learned impedance policy effectively reduces output voltage total harmonic distortion from over 8% to below 3.5%, while simultaneously limiting current overshoot during impact events by more than 60% compared to baseline methods. The learned controller adapts continuously without requiring explicit load classification or mode switching, and achieves strong generalization across unseen operating conditions. Pareto analysis further reveals the multi-objective trade-offs learned by the agent between waveform quality and transient mitigation.
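
The THD figures quoted above (from over 8% to below 3.5%) refer to a standard FFT-based metric. A minimal sketch of that computation, with an assumed 50 Hz fundamental and a synthetic waveform standing in for the converter output, is:

```python
import numpy as np

fs, f0 = 10_000, 50                    # sample rate and fundamental (assumed 50 Hz)
t = np.arange(0, 0.2, 1 / fs)
# synthetic output voltage: fundamental plus 5th and 7th harmonics
v = (np.sin(2*np.pi*f0*t)
     + 0.05*np.sin(2*np.pi*5*f0*t)
     + 0.03*np.sin(2*np.pi*7*f0*t))

spec = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(len(v), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f0))]
harm = [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(2, 40)]
thd = np.sqrt(np.sum(np.square(harm))) / fund
print(f"THD = {thd:.2%}")              # ~5.8% for this synthetic waveform
```
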

22 pages, 13704 KB  
Article
Application of Metaheuristic Optimisation Techniques for the Optimisation of a Solid-State Circuit Breaker
by Adam P. Lewis, Gerardo Calderon-Lopez, Ingo Lüdtke, Jason Vincent-Newson, Sahil Upadhaya, Jas Singh and Matt Grubb
Appl. Sci. 2025, 15(24), 12983; https://doi.org/10.3390/app152412983 - 9 Dec 2025
Viewed by 614
Abstract
Designing solid-state circuit breakers (SSCBs) involves a large discrete design space spanning MOSFET type, bypass configuration, and heatsink selection. This work formulates SSCB design as a multi-objective combinatorial optimisation problem that minimises conduction loss and material cost subject to electrothermal feasibility constraints. A validated electrothermal model was developed using experimentally measured R_DS(on)(T) data and thermal-impedance characterisation, allowing rapid and accurate evaluation of candidate configurations. Because the full design space exceeds one million combinations, five representative metaheuristic algorithms were benchmarked under an identical computational budget of 2000 evaluations: Genetic Algorithm (GA), Particle Swarm Optimisation (PSO), Grey Wolf Optimisation (GWO), Ant Colony Optimisation (ACO), and Gorilla Troops Optimisation (GTO). Sobol sequence initialisation was used to enhance search diversity. Each algorithm was executed 100 times, and its performance was quantitatively assessed using hypervolume, generational distance (GD), inverted generational distance (IGD), Hausdorff distance, overlapping-point score (OP), overall spread (OS), and distribution metrics (DM). GA consistently produced the closest approximation to the true Pareto front obtained from brute-force enumeration, achieving superior accuracy, coverage, and robustness. GTO offered strong secondary performance, while PSO, GWO, and ACO delivered partial front reconstruction. The results demonstrate that metaheuristic optimisation, particularly GA, can reduce SSCB design time significantly while retaining high fidelity, offering a scalable and efficient framework for future power-electronics design tasks.
(This article belongs to the Special Issue New Challenges in Low-Power Electronics Design)
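
Hypervolume, the headline metric in this benchmark, has a simple closed form for a 2-D minimisation front. A sketch with a toy front and an assumed reference point (both hypothetical, not the paper's data):

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D Pareto front under minimisation, w.r.t. a reference point."""
    pts = np.asarray(sorted(front))         # sort by first objective ascending
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        hv += (ref[0] - x) * (prev_y - y)   # slab between successive front points
        prev_y = y
    return hv

# toy front: (conduction loss, material cost), both minimised
front = [(1.0, 4.0), (2.0, 2.5), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(5.0, 5.0)))   # 11.5 for this toy front
```

A larger hypervolume means the front dominates more of the objective space below the reference point, which is why it serves as a single-number quality score for comparing the five algorithms.
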

22 pages, 603 KB  
Article
Generation of Natural-Language Explanations for Static-Analysis Warnings Using Single- and Multi-Objective Optimization
by Ivan Malashin
Computers 2025, 14(12), 534; https://doi.org/10.3390/computers14120534 - 5 Dec 2025
Viewed by 1240
Abstract
Explanations for static-analysis warnings assist developers in understanding potential code issues. An end-to-end pipeline was implemented to generate natural-language explanations, evaluated on 5183 warning–explanation pairs from Java repositories, including a manually validated gold subset of 1176 examples for faithfulness assessment. Explanations were produced by a transformer-based encoder–decoder model (CodeT5) conditioned on warning types, contextual code snippets, and static-analysis evidence. Initial experiments employed single-objective optimization for hyperparameters (using a genetic algorithm with dynamic search-space correction, which adaptively adjusted search bounds based on the evolving distribution of candidate solutions, clustering promising regions, and pruning unproductive ones), but this approach enforced a fixed faithfulness–fluency trade-off; therefore, a multi-objective evolutionary algorithm (NSGA-II) was adopted to jointly optimize both criteria. Pareto-optimal configurations improved normalized faithfulness by up to 12% and textual quality by 5–8% compared to baseline CodeT5 settings, with batch sizes of 10–21, learning rates of 2.3×10^-5 to 5×10^-4, maximum token lengths of 36–65, beam width 5, length penalty 1.15, and nucleus sampling p = 0.88. Candidate explanations were reranked using a composite score of likelihood, faithfulness, and code-usefulness, producing final outputs in under 0.001 s per example. The results indicate that structured conditioning, evolutionary hyperparameter search, and reranking yield explanations that are both aligned with static-analysis evidence and linguistically coherent.
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
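
The final reranking step combines likelihood, faithfulness, and code-usefulness into one composite score. A minimal sketch with assumed weights, field names, and toy candidates (the abstract does not publish the exact scheme):

```python
import numpy as np

def rerank(candidates, weights=(0.4, 0.4, 0.2)):
    """Order candidate explanations by a weighted composite of normalised scores.
    Weights and score fields are illustrative, not the paper's exact scheme."""
    scores = np.array([[c["loglik"], c["faith"], c["useful"]] for c in candidates])
    # min-max normalise each criterion to [0, 1] so the weights are comparable
    mn, mx = scores.min(axis=0), scores.max(axis=0)
    norm = (scores - mn) / np.where(mx > mn, mx - mn, 1.0)
    composite = norm @ np.array(weights)
    return [candidates[i] for i in np.argsort(-composite)]

beams = [
    {"text": "Possible null dereference of 'conn'", "loglik": -3.2, "faith": 0.9, "useful": 0.7},
    {"text": "Unused variable 'conn'",              "loglik": -2.8, "faith": 0.4, "useful": 0.5},
]
print(rerank(beams)[0]["text"])
```
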

28 pages, 1641 KB  
Article
Bayesian Estimation of R-Vine Copula with Gaussian-Mixture GARCH Margins: An MCMC and Machine Learning Comparison
by Rewat Khanthaporn and Nuttanan Wichitaksorn
Mathematics 2025, 13(23), 3886; https://doi.org/10.3390/math13233886 - 4 Dec 2025
Viewed by 1022
Abstract
This study proposes Bayesian estimation of multivariate regular vine (R-vine) copula models with generalized autoregressive conditional heteroskedasticity (GARCH) margins modeled by Gaussian-mixture distributions. The Bayesian estimation approach includes Markov chain Monte Carlo and variational Bayes with data augmentation. Although R-vines typically involve computationally intensive procedures limiting their practical use, we address this challenge through parallel computing techniques. To demonstrate our approach, we employ thirteen bivariate copula families within an R-vine pair-copula construction, applied to a large number of marginal distributions. The margins are modeled as exponential-type GARCH processes with intertemporal capital asset pricing specifications, using a mixture of Gaussian and generalized Pareto distributions. Results from an empirical study involving 100 financial returns confirm the effectiveness of our approach.
(This article belongs to the Special Issue Contemporary Bayesian Analysis: Methods and Applications)
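
As background, the kind of margin described above couples a GARCH variance recursion with Gaussian-mixture innovations. A plain GARCH(1,1) simulation with a two-component mixture (illustrative parameters, not the authors' exponential-type specification):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
omega, alpha, beta = 0.05, 0.08, 0.90     # GARCH(1,1) parameters (illustrative)

# two-component Gaussian mixture for the standardised innovations
w, mu, sd = [0.9, 0.1], [0.0, 0.0], [0.8, 2.1]
comp = rng.choice(2, size=T, p=w)
z = rng.normal(np.take(mu, comp), np.take(sd, comp))
z = (z - z.mean()) / z.std()              # re-standardise to mean 0, variance 1

r, h = np.empty(T), np.empty(T)
h[0] = omega / (1 - alpha - beta)         # unconditional variance as start value
r[0] = np.sqrt(h[0]) * z[0]
for t in range(1, T):
    h[t] = omega + alpha * r[t-1]**2 + beta * h[t-1]
    r[t] = np.sqrt(h[t]) * z[t]
print(f"sample kurtosis: {((r - r.mean())**4).mean() / r.var()**2:.2f}")
```

The mixture innovations fatten the tails beyond what the GARCH recursion alone produces, which is the motivation for using them as copula margins for financial returns.
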

21 pages, 2339 KB  
Article
Flood Frequency Analysis and Trend Detection in the Brisbane River Basin, Australia
by S M Anwar Hossain, Sadia T. Mim, Mohammad A. Alim and Ataur Rahman
Water 2025, 17(18), 2690; https://doi.org/10.3390/w17182690 - 11 Sep 2025
Cited by 1 | Viewed by 1180
Abstract
This study presents a comprehensive flood frequency analysis for Australia's Brisbane River basin using annual maximum flood (AMF) data from 26 stream gauging stations. It evaluates five probability distributions for fitting the AMF data of the selected stations: the Lognormal, Log Pearson Type III (LP3), Gumbel, Generalized Extreme Value (GEV), and Generalized Pareto (GP) distributions (those recommended in the FLIKE software, School of Civil Engineering, University of Newcastle Australia, Release_x86_5.0.306.0). Three goodness-of-fit tests (Chi-Squared, Anderson–Darling, and Kolmogorov–Smirnov) are adopted. This study also examines trends in the observed AMF data using several trend tests. It is found that the LP3 is the best-fit probability distribution at the majority of the selected stations, followed by the GP distribution. Although the AMF data at most of the stations show an increasing linear trend, these trends are generally statistically non-significant.
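
The LP3 fit at the heart of this comparison is a Pearson Type III distribution applied to log-transformed flows. A sketch with synthetic annual maxima, using scipy.stats.pearson3 and a KS check (the data and the 100-year quantile below are illustrative only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
amf = rng.lognormal(mean=5.0, sigma=0.8, size=60)   # stand-in annual maximum floods

log_q = np.log10(amf)
skew, loc, scale = stats.pearson3.fit(log_q)        # LP3 = Pearson III on log flows
ks = stats.kstest(log_q, "pearson3", args=(skew, loc, scale))
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3f}")

# 1-in-100-year (99th percentile) flood estimate from the fitted LP3
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc, scale)
print(f"estimated 100-year flood: {q100:.0f}")
```
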

14 pages, 2428 KB  
Article
Machine Learning Models for Pancreatic Cancer Survival Prediction: A Multi-Model Analysis Across Stages and Treatments Using the Surveillance, Epidemiology, and End Results (SEER) Database
by Aditya Chakraborty and Mohan D. Pant
J. Clin. Med. 2025, 14(13), 4686; https://doi.org/10.3390/jcm14134686 - 2 Jul 2025
Cited by 2 | Viewed by 2064
Abstract
Background: Pancreatic cancer is among the most lethal malignancies, with poor prognosis and limited survival despite treatment advances. Accurate survival modeling is critical for prognostication and clinical decision-making. This study had three primary aims: (1) to determine the best-fitting survival distribution among patients diagnosed and deceased from pancreatic cancer across stages and treatment types; (2) to construct and compare predictive risk classification models; and (3) to evaluate survival probabilities using parametric, semi-parametric, non-parametric, machine learning, and deep learning methods for Stage IV patients receiving both chemotherapy and radiation. Methods: Using data from the SEER database, parametric models (Generalized Extreme Value, Generalized Pareto, Log-Pearson 3), semi-parametric (Cox), and non-parametric (Kaplan–Meier) methods were compared with four machine learning models (gradient boosting, neural network, elastic net, and random forest). Survival probability heatmaps were constructed, and six classification models were developed for risk stratification. ROC curves, accuracy, and goodness-of-fit tests were used for model validation. Statistical tests included Kruskal–Wallis, pairwise Wilcoxon, and chi-square. Results: Generalized Extreme Value (GEV) was found to be the best-fitting distribution in most of the scenarios. Stage-specific survival differences were statistically significant. The highest predictive accuracy (AUC: 0.947; accuracy: 56.8%) was observed in patients receiving both chemotherapy and radiation. The gradient boosting model predicted the most optimistic survival, while random forest showed a sharp decline after 15 months. Conclusions: This study emphasizes the importance of selecting appropriate analytical models for survival prediction and risk classification. Adopting these innovations, with the help of advanced machine learning and deep learning models, can enhance patient outcomes and advance precision medicine initiatives.
(This article belongs to the Section Epidemiology & Public Health)
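
The non-parametric baseline used here, the Kaplan–Meier estimator, takes only a few lines; the survival times below are made up for illustration:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; event=1 for death, 0 for censoring."""
    time, event = np.asarray(time), np.asarray(event)
    surv, s = {}, 1.0
    for t in np.unique(time[event == 1]):
        d = np.sum((time == t) & (event == 1))   # deaths at time t
        at_risk = np.sum(time >= t)              # subjects still at risk at t
        s *= 1 - d / at_risk                     # product-limit update
        surv[t] = s
    return surv

months = [2, 3, 3, 5, 8, 12, 15, 20]
died   = [1, 1, 0, 1, 1, 0, 1, 0]
for t, s in kaplan_meier(months, died).items():
    print(f"S({t:>2}) = {s:.3f}")
```
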

26 pages, 2742 KB  
Article
Power Dispatch Stability Technology Based on Multi-Energy Complementary Alliances
by Yiming Zhao, Chengjun Zhang, Changsheng Wan, Dong Du, Jing Huang and Weite Li
Mathematics 2025, 13(13), 2091; https://doi.org/10.3390/math13132091 - 25 Jun 2025
Cited by 1 | Viewed by 756
Abstract
In the context of growing global energy demand and increasingly severe environmental pollution, ensuring the stable dispatch of new energy sources and the effective management of power resources has become particularly important. This study focuses on the reliability and stability issues of new energy dispatch considering the complementary advantages of multiple energy types. It aims to enhance dispatch stability and energy utilization through an innovative Distributed Overlapping Coalition Formation (DOCF) model. A distributed algorithm utilizing tabu search is proposed to solve the complex optimization problem in power resource allocation. The overlapping coalitions consider synergies between different types of resources and intelligently allocate based on the heterogeneous demands of power loads and the supply capabilities of power stations. Simulation results demonstrate that DOCF can significantly improve power grid resource utilization efficiency and dispatch stability. Particularly in handling intermittent power resources such as solar and wind energy, the proposed model effectively reduces peak-shaving time and improves overall network energy efficiency. Compared with preference relations based on the selfish order and the Pareto order, the BMBT-based PGG-TS algorithm achieves average utility gains of 10.2% and 25.3%, respectively, in terms of load. The methodology and findings of this study have important theoretical and practical value for guiding actual energy management practices and promoting the wider utilization of renewable energy.
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
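
Tabu search, the core of the proposed distributed algorithm, can be illustrated on a toy load-to-station assignment with a hypothetical cost matrix (the paper's DOCF utility model is considerably richer):

```python
import random

def tabu_search(cost, n_loads, n_stations, iters=200, tenure=7):
    """Tabu search: assign each load to a station, minimising total cost.
    cost[i][j] is the (hypothetical) cost of serving load i from station j."""
    assign = [random.randrange(n_stations) for _ in range(n_loads)]
    best, best_val = assign[:], sum(cost[i][assign[i]] for i in range(n_loads))
    tabu = {}
    for it in range(iters):
        # candidate moves: reassign one load, excluding recently reversed moves
        moves = [(i, j) for i in range(n_loads) for j in range(n_stations)
                 if j != assign[i] and tabu.get((i, j), -1) < it]
        i, j = min(moves, key=lambda m: cost[m[0]][m[1]] - cost[m[0]][assign[m[0]]])
        tabu[(i, assign[i])] = it + tenure   # forbid moving back for `tenure` iters
        assign[i] = j
        val = sum(cost[k][assign[k]] for k in range(n_loads))
        if val < best_val:
            best, best_val = assign[:], val
    return best, best_val

random.seed(6)
cost = [[random.uniform(1, 10) for _ in range(4)] for _ in range(8)]
print(tabu_search(cost, n_loads=8, n_stations=4))
```

Accepting the best non-tabu move even when it worsens the objective is what lets the search escape local optima; the tenure list prevents it from immediately undoing that move.
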

23 pages, 1136 KB  
Article
Objective Framework for Bayesian Inference in Multicomponent Pareto Stress–Strength Model Under an Adaptive Progressive Type-II Censoring Scheme
by Young Eun Jeon, Yongku Kim and Jung-In Seo
Mathematics 2025, 13(9), 1379; https://doi.org/10.3390/math13091379 - 23 Apr 2025
Cited by 1 | Viewed by 706
Abstract
This study introduces an objective Bayesian approach for estimating the reliability of a multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme. The proposed method is developed within a Bayesian framework, utilizing a reference prior with partial information to improve the accuracy of point estimation and to ensure the construction of a credible interval for uncertainty assessment. This approach is particularly useful for addressing several limitations of a widely used likelihood-based approach in estimating the multicomponent stress–strength reliability under the Pareto distribution. For instance, in the likelihood-based method, the asymptotic variance–covariance matrix may not exist due to certain constraints. This limitation hinders the construction of an approximate confidence interval for assessing the uncertainty. Moreover, even when an approximate confidence interval is obtained, it may fail to achieve nominal coverage levels in small sample scenarios. Unlike the likelihood-based method, the proposed method provides an efficient estimator across various criteria and constructs a valid credible interval, even with small sample sizes. Extensive simulation studies confirm that the proposed method yields reliable and accurate inference across various censoring scenarios, and a real data application validates its practical utility. These results demonstrate that the proposed method is an effective alternative to the likelihood-based method for reliability inference in the multicomponent stress–strength model based on the Pareto distribution under an adaptive progressive Type-II censoring scheme.
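
The quantity being estimated is the s-out-of-k stress–strength reliability R_{s,k} = P(at least s of k strengths exceed the stress). With Pareto-distributed components, a Monte Carlo sketch (hypothetical shape parameters, full data rather than censored samples) is:

```python
import numpy as np
from scipy import stats

def reliability_sk(s, k, shape_x, shape_y, n_sim=200_000, seed=7):
    """Monte Carlo estimate of R_{s,k} = P(at least s of k strengths exceed
    the stress), with Pareto strengths (shape_x) and Pareto stress (shape_y)."""
    rng = np.random.default_rng(seed)
    strength = stats.pareto.rvs(shape_x, size=(n_sim, k), random_state=rng)
    stress = stats.pareto.rvs(shape_y, size=(n_sim, 1), random_state=rng)
    return np.mean(np.sum(strength > stress, axis=1) >= s)

print(f"R_(2,4) ~ {reliability_sk(s=2, k=4, shape_x=1.5, shape_y=3.0):.4f}")
```
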

26 pages, 4000 KB  
Article
Collaborative Optimization of Shore Power and Berth Allocation Based on Economic, Environmental, and Operational Efficiency
by Zhiqiang Zhang, Yuhua Zhu, Jian Zhu, Daozheng Huang, Chuanzhong Yin and Jinyang Li
J. Mar. Sci. Eng. 2025, 13(4), 776; https://doi.org/10.3390/jmse13040776 - 14 Apr 2025
Cited by 12 | Viewed by 3961
Abstract
When vessels are docked at ports, traditional auxiliary engines produce substantial pollutants and noise, exerting pressure on the port environment. Shore power technology, as a green, energy-efficient, and emission-reducing solution, can effectively mitigate ship emissions. However, its widespread adoption is hindered by challenges such as high costs, compatibility issues, and connection complexity. This study develops a multi-objective optimization model for the coordinated allocation of shore power and berth scheduling, integrating economic benefits, environmental benefits, and operational efficiency. The NSGA-III algorithm is employed to solve the model and generate a Pareto-optimal solution set, with the final optimal solution identified using the TOPSIS method. The results demonstrate that the optimized shore power distribution and berth scheduling strategy can significantly reduce ship emissions and port operating costs while enhancing overall port resource utilization efficiency. Additionally, an economically feasible shore power allocation scheme, based on 80% of berth capacity, is proposed. By accounting for variations in ship types, this study provides more targeted and practical optimization strategies. These findings offer valuable decision support for port management and contribute to the intelligent and sustainable development of green ports.
(This article belongs to the Section Marine Environmental Science)
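
The TOPSIS step that picks one solution from the Pareto set is straightforward to sketch; the criteria, weights, and candidate values below are assumptions for illustration, not the paper's data:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution (TOPSIS).
    benefit[j] is True if criterion j is to be maximised."""
    m = np.asarray(matrix, dtype=float)
    m = m / np.linalg.norm(m, axis=0)              # vector-normalise each criterion
    m = m * weights
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.linalg.norm(m - ideal, axis=1)
    d_neg = np.linalg.norm(m - worst, axis=1)
    return d_neg / (d_pos + d_neg)                 # higher = closer to ideal

# Pareto candidates scored on (cost, emissions, berth waiting time): all minimised
pareto = [[120, 40, 6.0], [135, 30, 5.5], [150, 25, 7.0]]
scores = topsis(pareto, weights=[0.4, 0.4, 0.2], benefit=[False, False, False])
print(f"selected solution index: {int(np.argmax(scores))}")
```
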

27 pages, 11614 KB  
Article
Multi-Objective Optimization for Resource Allocation in Space–Air–Ground Network with Diverse IoT Devices
by Yongnan Xu, Xiangrong Tang, Linyu Huang, Hamid Ullah and Qian Ning
Sensors 2025, 25(1), 274; https://doi.org/10.3390/s25010274 - 6 Jan 2025
Cited by 7 | Viewed by 2594
Abstract
As the Internet of Things (IoT) expands globally, the challenge of signal transmission in remote regions without traditional communication infrastructure becomes prominent. An effective solution involves integrating aerial, terrestrial, and space components to form a Space–Air–Ground Integrated Network (SAGIN). This paper discusses an uplink signal scenario in which various types of data collection sensors as IoT devices use Unmanned Aerial Vehicles (UAVs) as relays to forward signals to low-Earth-orbit satellites. Considering the fairness of resource allocation among IoT devices of the same category, our goal is to maximize the minimum uplink channel capacity for each category of IoT devices, which is a multi-objective optimization problem. Specifically, the variables include the deployment locations of UAVs, bandwidth allocation ratios, and the association between UAVs and IoT devices. To address this problem, we propose a multi-objective evolutionary algorithm that ensures fair resource distribution among multiple parties. The algorithm is validated in eight different scenario settings and compared with various traditional multi-objective optimization algorithms. The experimental results demonstrate that the proposed algorithm can achieve higher-quality Pareto fronts (PFs) and better convergence, indicating more equitable resource allocation and improved algorithmic effectiveness in addressing this issue. Moreover, these pre-prepared, high-quality solutions from PFs provide adaptability to varying requirements in signal collection scenarios.
(This article belongs to the Section Internet of Things)
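
For intuition about the max-min objective: in the simplest single-relay case with fixed per-device SNRs, splitting bandwidth in inverse proportion to spectral efficiency equalises, and therefore maximises the minimum of, the Shannon capacities. The SNRs and total bandwidth below are assumed; the paper's full problem adds UAV placement and device association, which this closed form does not cover:

```python
import numpy as np

# per-device spectral efficiency log2(1 + SNR) for stand-in IoT uplinks
snr_db = np.array([4.0, 9.0, 13.0, 2.0, 7.0])
eff = np.log2(1 + 10 ** (snr_db / 10))        # bits/s/Hz

B = 10e6                                       # total relay bandwidth: 10 MHz (assumed)
# max-min fair split: capacities b_i * eff_i are equal when b_i is prop. to 1/eff_i
b = (1 / eff) / np.sum(1 / eff) * B
cap = b * eff
print(np.round(b / 1e6, 3), "MHz each")
print(f"common capacity: {cap[0] / 1e6:.3f} Mbit/s")
```
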

26 pages, 621 KB  
Article
A Bivariate Extension of Type-II Generalized Crack Distribution for Modeling Heavy-Tailed Losses
by Taehan Bae and Hanson Quarshie
Mathematics 2024, 12(23), 3718; https://doi.org/10.3390/math12233718 - 27 Nov 2024
Viewed by 1019
Abstract
As an extension of the (univariate) Birnbaum–Saunders distribution, the Type-II generalized crack (GCR2) distribution, built on an appropriate base density, provides a sufficient level of flexibility to fit various distributional shapes, including heavy-tailed ones. In this paper, we develop a bivariate extension of the Type-II generalized crack distribution and study its dependency structure. For practical applications, three specific distributions, GCR2-Generalized Gaussian, GCR2-Student's t, and GCR2-Logistic, are considered for marginals. The expectation-maximization algorithm is implemented to estimate the parameters in the bivariate GCR2 models. The model fitting results on a catastrophic loss dataset show that the bivariate GCR2 distribution based on the generalized Gaussian density fits the data significantly better than other alternative models, such as the bivariate lognormal distribution and some Archimedean copula models with lognormal or Pareto marginals.
(This article belongs to the Special Issue Actuarial Statistical Modeling and Applications)
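
The GCR2 family extends the Birnbaum–Saunders distribution, which SciPy exposes as fatiguelife. A sketch fitting that classical special case to synthetic losses (the paper's bivariate EM machinery is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
losses = stats.fatiguelife.rvs(1.2, scale=2.0, size=400, random_state=rng)

# ML fit of the Birnbaum-Saunders (fatigue-life) margin, location pinned at 0
c, loc, scale = stats.fatiguelife.fit(losses, floc=0)
aic = 2 * 2 - 2 * np.sum(stats.fatiguelife.logpdf(losses, c, loc, scale))
print(f"shape={c:.3f}, scale={scale:.3f}, AIC={aic:.1f}")
```
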
