Mathematics, Volume 12, Issue 10 (May-2 2024) – 192 articles

Cover Story (view full-size image): The present paper introduces a powerful method for developing lightweight structures and enhancing the load-bearing capacity of common structures. The method, referred to as the ‘method of aggregation’, has been derived from the reverse engineering of sub-additive and super-additive algebraic inequalities. The essence of the proposed method is the consolidation of multiple elements loaded in bending into a reduced number of elements with larger cross-sections but a smaller total volume of material. This procedure results in a large reduction in material usage. This paper also demonstrates that consolidating multiple elements loaded in bending into a reduced number of elements with larger cross-sections but the same total volume of material leads to a large increase in the load-bearing capacity of the structure. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
16 pages, 2736 KiB  
Article
An Adaptive Cubature Kalman Filter Based on Resampling-Free Sigma-Point Update Framework and Improved Empirical Mode Decomposition for INS/CNS Navigation
by Yu Ma and Di Liu
Mathematics 2024, 12(10), 1607; https://doi.org/10.3390/math12101607 - 20 May 2024
Abstract
To address the degradation of the filtering performance of the INS/CNS navigation system under measurement noise uncertainty, an adaptive cubature Kalman filter (CKF) is proposed based on improved empirical mode decomposition (EMD) and a resampling-free sigma-point update framework (RSUF). The proposed algorithm innovatively integrates improved EMD and RSUF into the CKF to estimate the measurement noise covariance in real time. Specifically, the improved EMD is used to reconstruct the measurement noise, and the exponential decay weighting method is introduced to emphasize new measurement noise while gradually discarding older data when estimating the measurement noise covariance. The estimated measurement noise covariance is then imported into the RSUF to directly construct the posterior cubature points without a resampling step. Since the measurement noise covariance is updated in real time and the prediction cubature-point error is directly transformed into the posterior cubature-point error, the proposed algorithm is less sensitive to measurement noise uncertainty. The proposed algorithm is verified by simulations conducted on the INS/CNS-integrated navigation system. The experimental results indicate that the proposed algorithm achieves better attitude-angle estimation performance. Full article
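The exponential decay weighting step described above can be illustrated with a minimal sketch (this is not the authors' implementation: the EMD-based noise reconstruction and the resampling-free sigma-point update are omitted, and the forgetting factor beta is an illustrative parameter). Each new noise sample is blended into the running covariance estimate so that recent data dominate while older data decay geometrically.

```python
import numpy as np

def update_noise_covariance(R_prev, residual, beta=0.05):
    """Fading-memory estimate of the measurement noise covariance.

    R_prev   : current covariance estimate, shape (m, m)
    residual : reconstructed measurement noise sample, shape (m,)
    beta     : forgetting factor in (0, 1]; larger values discard old data faster
    """
    sample = np.outer(residual, residual)         # instantaneous noise sample covariance
    return (1.0 - beta) * R_prev + beta * sample  # weight new data, decay older data

# toy usage: the estimate converges toward the true covariance of the residual stream
rng = np.random.default_rng(0)
true_R = np.diag([0.5, 2.0])
R_hat = np.eye(2)
for _ in range(2000):
    r = rng.multivariate_normal(np.zeros(2), true_R)
    R_hat = update_noise_covariance(R_hat, r)
print(np.round(R_hat, 2))
```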

20 pages, 2121 KiB  
Article
Data-Proximal Complementary ℓ1-TV Reconstruction for Limited Data Computed Tomography
by Simon Göppel, Jürgen Frikel and Markus Haltmeier
Mathematics 2024, 12(10), 1606; https://doi.org/10.3390/math12101606 - 20 May 2024
Abstract
In a number of tomographic applications, data cannot be fully acquired, resulting in severely underdetermined image reconstruction. Conventional methods in such cases lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient in compensating for missing data and reducing reconstruction artifacts. On the other hand, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single regularizer must therefore account for both the missing data and the noise. A particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over multiple scales, in which case ℓ1 curvelet regularization methods are well suited. To address this issue, in this paper, we present a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages. The first stage is mainly aimed at accurate reconstruction in the presence of noise, and the second stage is aimed at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments. Full article
(This article belongs to the Section Computational and Applied Mathematics)
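As a rough illustration of the two-stage idea, one schematic formulation consistent with the description above is sketched below in LaTeX; the exact functionals, norms, and form of the data-proximity condition used in the paper may differ, and the symbols A, y, Psi, alpha, and epsilon are illustrative placeholders.

```latex
% Stage 1: reconstruction that is robust to noise (e.g. an l1 curvelet penalty).
% Stage 2: artifact reduction (e.g. TV), tied to stage 1 by a data-proximity constraint.
\begin{align*}
u_1 &\in \arg\min_{u}\ \tfrac{1}{2}\,\|A u - y\|_2^2 + \alpha \,\|\Psi u\|_1, \\
u_2 &\in \arg\min_{u}\ \mathrm{TV}(u)
      \quad\text{subject to}\quad \|A u - A u_1\|_2 \le \varepsilon .
\end{align*}
```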

16 pages, 2830 KiB  
Article
A Study on Optimizing the Maximal Product in Cubic Fuzzy Graphs for Multifaceted Applications
by Annamalai Meenakshi, Obel Mythreyi, Robert Čep and Krishnasamy Karthik
Mathematics 2024, 12(10), 1605; https://doi.org/10.3390/math12101605 - 20 May 2024
Abstract
Graphs in science and technology make considerable use of theoretical concepts. When dealing with numerous links and circumstances in which there are varying degrees of ambiguity or robustness in the connections between aspects, rather than purely binary interactions, cubic fuzzy graphs (CFGs) are more adaptable and compatible than fuzzy graphs. To better represent the complexity of real-world interactions or linkages, a CFG can be very helpful for problem-solving in domains like network analysis, the social sciences, information retrieval, and decision support systems. This idea can be applied to a variety of uncertainty-related issues and can assist decision-makers in selecting the best course of action through the use of a CFG. The ultimate objective of this study was to enhance the decision-making efficiency of the maximal product of three cubic fuzzy graphs. We introduced the maximal product of three cubic fuzzy graphs to investigate how interval-valued fuzzy membership, fuzzy membership, and the miscellany of relations are simultaneously supported through the degree and total degree of a vertex. Furthermore, domination on the maximal product of three CFGs was illustrated in order to analyze the minimum domination number of the weighted CFG, and the proposed approach is illustrated with applications. Full article

26 pages, 1149 KiB  
Article
A Developed Model and Fuzzy Multi-Criteria Decision-Making Method to Evaluate Supply Chain Nervousness Strategies
by Ghazi M. Magableh, Mahmoud Z. Mistarihi, Taha Rababah, Ali Almajwal and Numan Al-Rayyan
Mathematics 2024, 12(10), 1604; https://doi.org/10.3390/math12101604 - 20 May 2024
Abstract
Nervousness is thought to be a source of confusion, instability, or uncertainty in supply chain (SC) systems due to disruptions and frequent changes in decisions. Nervousness persists even in consistent SCs; it arises from the planning flexibility required to respond to changes, in which responsiveness and customer satisfaction must be balanced. Although the term “nervousness” is well known, to our knowledge no prior research has examined and explored supply chain nervousness strategies (SCNSs). This research explores supply chain nervousness strategies, factors, reduction methods, and recent trends in the supply chain’s relationship with nervousness. The main purpose of this research is to determine comprehensive and relevant nervousness strategies in supply chains, especially in light of the unprecedented development and change in business, economics, and technology and the fierce competition. SCN strategies are introduced in a developed model to designate SCN measurements and indicators, mitigation strategies and stages, and management strategies. The fuzzy PROMETHEE method is employed to rank the strategies based on their importance and order of implementation. The suggested method for managing nervousness is then presented with a numerical case, along with the results. The research outcomes indicate that the top five strategies for managing nervousness include planning continuity, utilizing technology, managing nervousness, improving the SC cyber system, and managing supplies. The findings assist decision makers, practitioners, and managers in focusing on SC improvement, resilience, and sustainability. Full article
(This article belongs to the Section Fuzzy Sets, Systems and Decision Making)
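For readers unfamiliar with the ranking step, the sketch below shows the standard crisp PROMETHEE II net-flow computation that underlies the fuzzy variant used in the paper; the decision matrix, the weights, and the usual-criterion preference function are illustrative placeholders, and the fuzzification of judgements is omitted.

```python
import numpy as np

def promethee_ii(scores, weights):
    """Rank alternatives by PROMETHEE II net outranking flows.

    scores  : (n_alternatives, n_criteria) matrix, higher is better
    weights : (n_criteria,) criterion weights summing to 1
    Uses the 'usual' preference function P(d) = 1 if d > 0 else 0.
    """
    n, _ = scores.shape
    pi = np.zeros((n, n))                      # aggregated preference indices
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = scores[i] - scores[j]
            pi[i, j] = np.dot(weights, (d > 0).astype(float))
    phi_plus = pi.sum(axis=1) / (n - 1)        # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)       # negative (entering) flow
    return phi_plus - phi_minus                # net flow: higher = better

# toy example: four strategies evaluated on three criteria
scores = np.array([[0.7, 0.4, 0.9],
                   [0.6, 0.8, 0.5],
                   [0.9, 0.3, 0.4],
                   [0.5, 0.6, 0.7]])
weights = np.array([0.5, 0.3, 0.2])
net_flow = promethee_ii(scores, weights)
print(np.argsort(-net_flow))                   # strategy indices from best to worst
```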

33 pages, 6557 KiB  
Article
A Sustainable Supply Chain Model with a Setup Cost Reduction Policy for Imperfect Items under Learning in a Cloudy Fuzzy Environment
by Basim S. O. Alsaedi
Mathematics 2024, 12(10), 1603; https://doi.org/10.3390/math12101603 - 20 May 2024
Abstract
The present paper deals with an integrated sustainable supply chain model with the effect of learning for an imperfect production system under a cloudy fuzzy environment, where the demand rate is treated as a cloudy triangular fuzzy (imprecise) number, meaning that the demand rate of the items is not constant, and shortages and a warranty policy are allowed. The vendor governs the manufacturing process to serve the demand of the buyer. When the vendor supplies the demanded lot after the production of items, it is also considered that the delivery lots contain some defective items whose fraction follows an S-shaped learning curve. After receiving the lot, the buyer inspects the whole lot and classifies it into two categories: defective-quality items and imperfect-quality items. The buyer returns the defective-quality items to the seller after a screening process, for which a warranty cost is included. During the transportation of the items, carbon is emitted, damaging the quality of the environment. The seller includes carbon emission costs to achieve sustainability. A one-time discrete investment is also included to minimize the seller’s setup cost for subsequent cycles. We developed models for the scenario of separate decisions and for the integrated decision of the players (seller/buyer) under the model’s considerations. Our aim is to jointly optimize the integrated total fuzzy cost, sustained by the seller and buyer, under a cloudy fuzzy environment. Numerical examples, sensitivity analysis, limitations, future scope, and conclusions have been provided for the justification of the proposed model, and the impact of the input parameters on the decision variables and the integrated total fuzzy cost of the supply chain is provided to establish the validity and robustness of this proposed model. The effect of learning in a cloudy fuzzy environment was positive for this proposed model. Full article

22 pages, 4917 KiB  
Article
Height Prediction of Water-Conducting Fracture Zone in Jurassic Coalfield of Ordos Basin Based on Improved Radial Movement Optimization Algorithm Back-Propagation Neural Network
by Zhiyong Gao, Liangxing Jin, Pingting Liu and Junjie Wei
Mathematics 2024, 12(10), 1602; https://doi.org/10.3390/math12101602 - 20 May 2024
Abstract
The development height of the water-conducting fracture zone (WCFZ) is crucial for the safe production of coal mines. The back-propagation neural network (BP-NN) can be utilized to forecast the WCFZ height, aiding coal mines in water hazard prevention and control efforts. However, the stochastic generation of initial weights and thresholds in BP-NN usually leads to local optima, which might reduce the prediction accuracy. This study thus invokes the excellent global optimization capability of the Improved Radial Movement Optimization (IRMO) algorithm to optimize BP-NN. The influences of mining thickness, coal seam depth, working width, and hard rock lithology proportion coefficient on the height of WCFZ are investigated through 75 groups of in situ data of WCFZ heights measured in the Jurassic coalfield of the Ordos Basin. Consequently, an IRMO-BP-NN model for predicting WCFZ height in the Jurassic coalfield of the Ordos Basin was constructed. The proposed IRMO-BP-NN model was validated through monitoring data from the 4−2216 working face of Jianbei Coal Mine, followed by a comparative analysis with empirical formulas and conventional BP-NN models. The relative error of the IRMO-BP-NN prediction model is 4.93%, outperforming the BP-NN prediction model, the SVR prediction model, and empirical formulas. The results demonstrate that the IRMO-BP-NN model enhances the accuracy of predicting WCFZ height, providing an application foundation for predicting such heights in the Jurassic coalfield of the Ordos Basin and protecting the ecological environment of Ordos Basin mining areas. Full article

26 pages, 14457 KiB  
Article
FedUB: Federated Learning Algorithm Based on Update Bias
by Hesheng Zhang, Ping Zhang, Mingkai Hu, Muhua Liu and Jiechang Wang
Mathematics 2024, 12(10), 1601; https://doi.org/10.3390/math12101601 - 20 May 2024
Abstract
Federated learning, as a distributed machine learning framework, aims to protect data privacy while addressing the issue of data silos by collaboratively training models across multiple clients. However, a significant challenge to federated learning arises from the non-independent and identically distributed (non-iid) nature of data across different clients. Non-iid data can lead to inconsistencies between the minimal loss experienced by individual clients and the global loss observed after the central server aggregates the local models, affecting the model’s convergence speed and generalization capability. To address this challenge, we propose a novel federated learning algorithm based on update bias (FedUB). Unlike traditional federated learning approaches such as FedAvg and FedProx, which independently update model parameters on each client before direct aggregation to form a global model, the FedUB algorithm incorporates an update bias in the loss function of local models, specifically the difference between each round’s local model updates and the global model updates. This design aims to reduce discrepancies between local and global updates, thus aligning the parameters of locally updated models more closely with those of the globally aggregated model, thereby mitigating the fundamental conflict between local and global optima. Additionally, during the aggregation phase at the server side, we introduce a metric called the bias metric, which assesses the similarity between each client’s local model and the global model. This metric adaptively sets the weight of each client during aggregation after each training round to achieve a better global model. Extensive experiments conducted on multiple datasets have confirmed the effectiveness of the FedUB algorithm. The results indicate that FedUB generally outperforms methods such as FedDC, FedDyn, and Scaffold, especially in scenarios involving partial client participation and non-iid data distributions. It demonstrates superior performance and faster convergence in tasks such as image classification. Full article
(This article belongs to the Special Issue Federated Learning Strategies for Machine Learning)
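A minimal numerical sketch of the two mechanisms described above, treating each model as a flat parameter vector; the penalty weight mu, the cosine-similarity bias metric, and all function names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def local_loss(w, task_loss, w_local_prev, w_global, w_global_prev, mu=0.1):
    """Task loss plus an update-bias penalty: the local update is pulled
    toward the most recent global update."""
    local_update = w - w_local_prev
    global_update = w_global - w_global_prev
    bias = local_update - global_update
    return task_loss(w) + 0.5 * mu * np.dot(bias, bias)

def aggregate_by_bias_metric(local_models, w_global):
    """Server-side aggregation weighted by a simple bias metric:
    clients whose models are more similar to the global model get more weight."""
    sims = []
    for w in local_models:
        cos = np.dot(w, w_global) / (np.linalg.norm(w) * np.linalg.norm(w_global) + 1e-12)
        sims.append(max(cos, 0.0))
    sims = np.asarray(sims)
    if sims.sum() == 0:
        weights = np.full(len(sims), 1.0 / len(sims))   # fall back to plain averaging
    else:
        weights = sims / sims.sum()
    return np.average(np.stack(local_models), axis=0, weights=weights)
```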

31 pages, 1113 KiB  
Article
Assembly Theory of Binary Messages
by Szymon Łukaszyk and Wawrzyniec Bieniawski
Mathematics 2024, 12(10), 1600; https://doi.org/10.3390/math12101600 - 20 May 2024
Abstract
Using assembly theory, we investigate the assembly pathways of binary strings (bitstrings) of length N formed by joining bits present in the assembly pool and the bitstrings that entered the pool as a result of previous joining operations. We show that the bitstring assembly index is bounded from below by the shortest addition chain for N, and we conjecture about the form of the upper bound. We define the degree of causation for the minimum assembly index and show that, for certain N values, it has regularities that can be used to determine the length of the shortest addition chain for N. We show that a bitstring with the smallest assembly index for N can be assembled via a binary program of a length equal to this index if the length of this bitstring is expressible as a product of Fibonacci numbers. Knowing that the problem of determining the assembly index is at least NP-complete, we conjecture that this problem is NP-complete, while the problem of creating the bitstring so that it would have a predetermined largest assembly index is NP-hard. The proof of this conjecture would imply P ≠ NP since every computable problem and every computable solution can be encoded as a finite bitstring. The lower bound on the bitstring assembly index implies a creative path and an optimization path of the evolution of information, where only the latter is available to Turing machines (artificial intelligence). Furthermore, the upper bound hints at the role of dissipative structures and collective, in particular human, intelligence in this evolution. Full article
(This article belongs to the Section Mathematical Physics)

18 pages, 385 KiB  
Article
New and Efficient Estimators of Reliability Characteristics for a Family of Lifetime Distributions under Progressive Censoring
by Syed Ejaz Ahmed, Reza Arabi Belaghi, Abdulkadir Hussein and Alireza Safariyan
Mathematics 2024, 12(10), 1599; https://doi.org/10.3390/math12101599 - 20 May 2024
Abstract
Estimation of reliability and stress–strength parameters is important in the manufacturing industry. In this paper, we develop shrinkage-type estimators for the reliability and stress–strength parameters based on progressively censored data from a rich class of distributions. These new estimators improve the performance of the commonly used Maximum Likelihood Estimators (MLEs) by reducing their mean squared errors. We provide analytical asymptotic and bootstrap confidence intervals for the targeted parameters. Through a detailed simulation study, we demonstrate that the new estimators have better performance than the MLEs. Finally, we illustrate the application of the new methods to two industrial data sets, showcasing their practical relevance and effectiveness. Full article
(This article belongs to the Special Issue Reliability Estimation and Mathematical Statistics)
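The shrinkage idea can be illustrated with a generic linear-shrinkage sketch; the lifetime distribution family, the progressive-censoring likelihood, and the data-driven shrinkage weight of the paper are not reproduced here, and the guess value theta0 and the weight rule below are illustrative.

```python
import numpy as np

def shrinkage_estimate(theta_mle, theta0, n, c=1.0):
    """Shrink the MLE toward a guess value theta0.

    The weight decays with the sample size n, so the estimator relies on the
    prior guess less as more data arrive.
    """
    w = c / (c + n)                      # illustrative data-driven weight in (0, 1)
    return w * theta0 + (1.0 - w) * theta_mle

# toy Monte Carlo: MSE of the shrinkage estimator vs. the MLE for an exponential
# rate, when the guess theta0 is close to the truth
rng = np.random.default_rng(1)
true_rate, theta0, n = 2.0, 1.8, 20
mle_err, shr_err = [], []
for _ in range(5000):
    x = rng.exponential(1.0 / true_rate, size=n)
    mle = 1.0 / x.mean()                 # MLE of the exponential rate
    shr = shrinkage_estimate(mle, theta0, n, c=10.0)
    mle_err.append((mle - true_rate) ** 2)
    shr_err.append((shr - true_rate) ** 2)
print(f"MSE  MLE: {np.mean(mle_err):.4f}   shrinkage: {np.mean(shr_err):.4f}")
```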

41 pages, 2685 KiB  
Article
Navigating Supply Chain Resilience: A Hybrid Approach to Agri-Food Supplier Selection
by Pasura Aungkulanon, Walailak Atthirawong, Pongchanun Luangpaiboon and Wirachchaya Chanpuypetch
Mathematics 2024, 12(10), 1598; https://doi.org/10.3390/math12101598 - 20 May 2024
Abstract
Globalization and multinational commerce have increased the dynamism and complexity of supply networks, thereby increasing their susceptibility to disruptions along interconnected supply chains. This study aims to tackle the significant concern of supplier selection disruptions in the Thai agri-food industry as a response to the aforementioned challenges. A novel supplier evaluation system is suggested that combines PROMETHEE II with the Fuzzy Analytical Hierarchy Process (FAHP) and inferential statistical techniques. This investigation commences with the identification of critical indicators of risk in the sustainable supply chain via three phases of analysis and 315 surveys of management teams. Exploratory factor analysis (EFA) is utilized to ascertain six supply risk criteria and twenty-three sub-criteria. Following this, the parameters are prioritized by FAHP, while four prospective suppliers for an agricultural firm are assessed by PROMETHEE II. By integrating optimization techniques into sensitivity analysis, this hybrid approach improves supplier selection criteria by identifying dependable solutions that are customized to risk scenarios and business objectives. The iterative strategy enhances the resilience of the agri-food supply chain by enabling well-informed decision-making amidst evolving market dynamics and chain risks. In addition, this research helps agricultural and other sectors by providing a systematic approach to selecting low-risk suppliers and delineating critical supply chain risk factors. By bridging complexity and facilitating informed decision-making in supplier selection processes, the results of this study fill a significant void in the academic literature concerning sustainable supply chain risk management. Full article
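As an illustration of the FAHP weighting step mentioned above, the sketch below uses Buckley's fuzzy geometric-mean method with triangular fuzzy numbers and centroid defuzzification; the paper's exact FAHP variant, comparison matrices, and linguistic scale may differ.

```python
import numpy as np

def fahp_weights(F):
    """Crisp criterion weights from a triangular-fuzzy pairwise comparison matrix.

    F : (n, n, 3) array; F[i, j] = (l, m, u) is the fuzzy judgement of
        criterion i versus criterion j.
    Steps: fuzzy geometric mean per row -> fuzzy weights -> centroid
    defuzzification -> normalisation.
    """
    n = F.shape[0]
    geo = np.prod(F, axis=1) ** (1.0 / n)    # fuzzy geometric mean (l, m, u) per criterion
    total = geo.sum(axis=0)                  # component-wise sum over criteria
    fuzzy_w = geo / total[::-1]              # fuzzy division: (l, m, u) / (sum_u, sum_m, sum_l)
    crisp = fuzzy_w.mean(axis=1)             # centroid of each triangular fuzzy weight
    return crisp / crisp.sum()

# toy example with three criteria (judgements are illustrative)
F = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])
print(np.round(fahp_weights(F), 3))
```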

19 pages, 723 KiB  
Article
Secure Active Intelligent Reflecting Surface Communication against Colluding Eavesdroppers
by Jiaxin Xu, Yuyang Peng, Runlong Ye, Wei Gan, Fawaz AL-Hazemi and Mohammad Meraj Mirza
Mathematics 2024, 12(10), 1597; https://doi.org/10.3390/math12101597 - 20 May 2024
Abstract
An active intelligent reflecting surface (IRS)-assisted, secure, multiple-input–single-output communication method is proposed in this paper. In this proposed scheme, a practical and unfavorable propagation environment is considered by assuming that multiple colluding eavesdroppers (Eves) coexist. In this case, we jointly optimize the beamformers of the base station (BS) and the active IRS for the formulated sum secrecy rate (SSR) maximization problem. Because the formulated problem is not convex, we apply the alternating optimization method to optimize the beamformers for maximizing the SSR. Specifically, we use the semi-definite relaxation method to solve the sub-problem of the beamforming vector of the BS, and we use the successive convex approximation method to solve the sub-problem of the power amplification matrix of the active IRS. Based on the solutions obtained using these stated methods, numerical results show that deploying an active IRS is superior to deploying a passive IRS or no IRS for improving the physical layer security of wireless communication with multiple colluding Eves under different settings, such as the numbers of users, Eves, reflecting elements, and BS antennas, as well as the maximum transmit power budget at the BS. Full article

19 pages, 2968 KiB  
Article
Teaching–Learning-Based Optimization Algorithm with Stochastic Crossover Self-Learning and Blended Learning Model and Its Application
by Yindi Ma, Yanhai Li and Longquan Yong
Mathematics 2024, 12(10), 1596; https://doi.org/10.3390/math12101596 - 20 May 2024
Abstract
This paper presents a novel variant of the teaching–learning-based optimization algorithm, termed BLTLBO, which draws inspiration from the blended learning model and is specifically designed to tackle high-dimensional multimodal complex optimization problems. First, the perturbation conditions in the “teaching” and “learning” stages of the original TLBO algorithm are interpreted geometrically, based on which the search capability of the TLBO is enhanced by adjusting the range of values of random numbers. Second, a strategic restructuring has been ingeniously implemented, dividing the algorithm into three distinct phases: pre-course self-study, classroom blended learning, and post-course consolidation; this structural reorganization and the random crossover strategy in the self-learning phase effectively enhance the global optimization capability of TLBO. To evaluate its performance, the BLTLBO algorithm was tested alongside seven distinguished variants of the TLBO algorithm on thirteen multimodal functions from the CEC2014 suite. Furthermore, two excellent high-dimensional optimization algorithms were added to the comparison and tested in high-dimensional mode on five scalable multimodal functions from the CEC2008 suite. The empirical results illustrate the BLTLBO algorithm’s superior efficacy in handling high-dimensional multimodal challenges. Finally, a high-dimensional portfolio optimization problem was successfully addressed using the BLTLBO algorithm, thereby validating the practicality and effectiveness of the proposed method. Full article
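For reference, a minimal sketch of the canonical teacher and learner phases that BLTLBO builds on is given below; the blended-learning restructuring, the self-study crossover, and the adjusted random-number ranges proposed in the paper are not reproduced.

```python
import numpy as np

def tlbo(obj, bounds, pop_size=30, iters=200, seed=0):
    """Basic teaching-learning-based optimization (minimisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    X = rng.uniform(lo, hi, (pop_size, dim))
    f = np.apply_along_axis(obj, 1, X)
    for _ in range(iters):
        # teacher phase: move learners toward the best solution
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                       # teaching factor in {1, 2}
        X_new = np.clip(X + rng.random((pop_size, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
        f_new = np.apply_along_axis(obj, 1, X_new)
        improved = f_new < f
        X[improved], f[improved] = X_new[improved], f_new[improved]
        # learner phase: learn pairwise from a random classmate
        partners = rng.permutation(pop_size)
        step = np.where((f < f[partners])[:, None], X - X[partners], X[partners] - X)
        X_new = np.clip(X + rng.random((pop_size, dim)) * step, lo, hi)
        f_new = np.apply_along_axis(obj, 1, X_new)
        improved = f_new < f
        X[improved], f[improved] = X_new[improved], f_new[improved]
    return X[f.argmin()], f.min()

best_x, best_f = tlbo(lambda x: np.sum(x**2), ([-5] * 10, [5] * 10))
print(best_f)
```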

16 pages, 590 KiB  
Article
A Mathematical Analysis of Competitive Dynamics and Aggressive Treatment in the Evolution of Drug Resistance in Malaria Parasites
by Tianqi Song, Yishi Wang, Yang Li and Guoliang Fan
Mathematics 2024, 12(10), 1595; https://doi.org/10.3390/math12101595 - 20 May 2024
Abstract
Experimental evidence supports the counterintuitive notion that rapid eradication of pathogens within a host, infected with both drug-sensitive and -resistant malaria parasites, can actually accelerate the evolution of drug-resistant pathogens. This study aims to analyze the competitive dynamics between these two strains through a mathematical model and evaluate the impact of aggressive treatment on the spread of drug resistance. We conducted equilibrium, uncertainty, and sensitivity analyses to assess the model, identifying and measuring the influence of key factors on the outcome variable (the population of drug-resistant parasites). Both equilibrium and local sensitivity analyses concurred that the density of drug-resistant parasites is notably affected by genetic instability, the production rate of red blood cells, the number of merozoites, and competition factors. Conversely, there is a negative relationship between genetic instability and one of the competition coefficients. Global sensitivity analysis offers a comprehensive examination of the impact of each input parameter on the temporal propagation of drug resistance, effectively accounting for the interplay among parameters. Both local and global sensitivity analyses underscore the continuous impact of drug treatment on the progression of drug resistance over time. This paper anticipates exploring the underlying mechanisms of drug resistance and providing theoretical support for developing more effective drug treatment strategies. Full article

15 pages, 659 KiB  
Article
A Class of Bi-Univalent Functions in a Leaf-Like Domain Defined through Subordination via q-Calculus
by Abdullah Alsoboh and Georgia Irina Oros
Mathematics 2024, 12(10), 1594; https://doi.org/10.3390/math12101594 - 20 May 2024
Abstract
Bi-univalent functions associated with the leaf-like domain within open unit disks are investigated, and a new subclass is introduced and studied in the research presented here. This is achieved by applying the subordination principle for analytic functions in conjunction with q-calculus. The class is proved to not be empty. By proving its existence, generalizations can be given to other sets of functions. In addition, coefficient bounds are examined with a particular focus on |α2| and |α3| coefficients, and Fekete–Szegö inequalities are estimated for the functions in this new class. To support the conclusions, previous works are cited for confirmation. Full article

19 pages, 637 KiB  
Article
Global Dynamics of a Social Hierarchy-Stratified Malaria Model: Insight from Fractional Calculus
by Sulaimon F. Abimbade, Furaha M. Chuma, Sunday O. Sangoniyi, Ramoshweu S. Lebelo, Kazeem O. Okosun and Samson Olaniyi
Mathematics 2024, 12(10), 1593; https://doi.org/10.3390/math12101593 - 20 May 2024
Abstract
In this study, a mathematical model for the transmission dynamics of malaria among different socioeconomic groups in the human population interacting with a susceptible-infectious vector population is presented and analysed using a fractional-order derivative of the Caputo type. The total human population is stratified into two distinct classes of lower and higher income individuals, with each class further subdivided into susceptible, infectious, and recovered populations. The social hierarchy-structured fractional-order malaria model is analysed through the application of different dynamical system tools. The theory of positivity and boundedness based on the generalized mean value theorem is employed to investigate the basic properties of solutions of the model, while the Banach fixed point theory approach is used to prove the existence and uniqueness of the solution. Furthermore, unlike the existing related studies, comprehensive global asymptotic dynamics of the fractional-order malaria model around both disease-free and endemic equilibria are explored by generalizing the usual classical methods for establishing global asymptotic stability of the steady states. The asymptotic behavior of the trajectories of the system is graphically illustrated at different values of the fractional (noninteger) order. Full article

18 pages, 11943 KiB  
Article
Efficient Image Details Preservation of Image Processing Pipeline Based on Two-Stage Tone Mapping
by Weijian Xu, Yuyang Cai, Feng Qian, Yuan Hu and Jingwen Yan
Mathematics 2024, 12(10), 1592; https://doi.org/10.3390/math12101592 - 20 May 2024
Abstract
Converting a camera’s RAW image to an RGB format for human perception involves utilizing an imaging pipeline and a series of processing modules. Existing modules often result in varying degrees of original information loss, which can render the reverse imaging pipeline unable to recover the original RAW image information. To this end, this paper proposes a new, almost reversible imaging pipeline. Thus, RGB images and RAW images can be effectively converted between each other. Considering the impact of original information loss, this paper introduces a two-stage tone mapping operation (TMO). In the first stage, the RAW image with a linear response is transformed into an RGB color image. In the second stage, color scale mapping corrects the dynamic range of the image to suit human perception through linear stretching and reduces the loss of information sensitive to the human eye during integer quantization, effectively preserving the original image’s dynamic information. The DCRAW imaging pipeline addresses the problem of highlight overflow by directly clipping highlights. The proposed imaging pipeline constructs an independent highlight-processing module and preserves the highlighted information of the image. The experimental results demonstrate that the two-stage tone mapping operation embedded in the imaging processing pipeline provided in this article ensures that the image output is suitable for human visual system (HVS) perception and retains more original image information. Full article
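A simplified numerical sketch of the two-stage tone mapping idea described above; demosaicing, white balance, the independent highlight-processing module, and the exact mapping curves of the proposed pipeline are omitted, and the 3x3 colour matrix is an illustrative placeholder.

```python
import numpy as np

# illustrative linear camera-RGB -> display-RGB matrix (placeholder values)
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.6,  1.7]])

def stage1_linear_to_rgb(raw_rgb):
    """Stage 1: map the linear-response RAW data to an RGB colour image."""
    return np.clip(raw_rgb @ CCM.T, 0.0, None)

def stage2_color_scale_mapping(rgb, bits=8):
    """Stage 2: linear stretch of the dynamic range before integer quantisation,
    which limits the information lost when rounding to 2**bits levels."""
    lo, hi = rgb.min(), rgb.max()
    stretched = (rgb - lo) / (hi - lo + 1e-12)          # linear stretch to [0, 1]
    return np.round(stretched * (2**bits - 1)).astype(np.uint16)

# toy usage on a random linear RAW patch (values already per-channel RGB)
raw = np.random.default_rng(0).random((4, 4, 3)) * 0.8
out = stage2_color_scale_mapping(stage1_linear_to_rgb(raw))
print(out.dtype, out.min(), out.max())
```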

18 pages, 5811 KiB  
Article
Uncertainty Analysis of Aircraft Center of Gravity Deviation and Passenger Seat Allocation Optimization
by Xiangling Zhao and Wenheng Xiao
Mathematics 2024, 12(10), 1591; https://doi.org/10.3390/math12101591 - 20 May 2024
Abstract
The traditional method of allocating passenger seats based on compartments does not effectively manage an aircraft’s center of gravity (CG), resulting in a notable divergence from the desired target CG (TCG). In this work, the Boeing B737-800 aircraft was employed as a case study, and row-based and compartment-based integer programming models for passenger allocation were examined and constructed with the aim of addressing the current situation. The accuracy of CG control was evaluated by comparing the row-based and compartment-based allocation techniques, taking into account different bodyweights and numbers of passengers. The key contribution of this research is to broaden the range of the mobilizable set for the aviation weight and balance (AWB) model, resulting in a significant reduction in the range of deviations in the center of gravity outcomes by a factor of around 6 to 16. The effectiveness of the row-based allocation approach and the impact of passenger weight randomness on the deviation of an airplane’s CG were also investigated in this study. The Monte Carlo method was utilized to quantify the uncertainty associated with passenger weight, resulting in the generation of the posterior distribution of the aircraft’s center of gravity (CG) deviation. The outcome of the row-based model test is the determination of the range of passenger numbers that can be effectively allocated under different TCG conditions. Full article
(This article belongs to the Section Engineering Mathematics)
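The Monte Carlo treatment of passenger-weight uncertainty described above can be sketched as follows; the operating empty weight, seat-row moment arms, and weight distribution below are simplified placeholders rather than B737-800 data, and the seat-allocation optimization itself is omitted.

```python
import numpy as np

def cg_deviation_samples(seat_arms, target_cg, oew=41_000.0, oew_arm=15.7,
                         weight_mean=80.0, weight_sd=15.0, n_sims=10_000, seed=0):
    """Sampled distribution of the CG deviation when passenger weights are random.

    seat_arms : moment arms (m) of the occupied seats
    target_cg : desired longitudinal CG position (m)
    """
    rng = np.random.default_rng(seed)
    n_pax = len(seat_arms)
    w = rng.normal(weight_mean, weight_sd, size=(n_sims, n_pax)).clip(min=30.0)
    total_mass = oew + w.sum(axis=1)
    total_moment = oew * oew_arm + w @ np.asarray(seat_arms)
    cg = total_moment / total_mass           # longitudinal CG (m) per simulation
    return cg - target_cg                    # deviation from the target CG

# toy usage: 100 passengers spread over 20 rows with arms from 12 m to 25 m
arms = np.repeat(np.linspace(12.0, 25.0, 20), 5)
dev = cg_deviation_samples(arms, target_cg=16.0)
print(f"mean deviation {dev.mean():.3f} m, 95% interval "
      f"[{np.percentile(dev, 2.5):.3f}, {np.percentile(dev, 97.5):.3f}] m")
```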

31 pages, 2841 KiB  
Article
Performance Evaluation of Railway Infrastructure Managers: A Novel Hybrid Fuzzy MCDM Model
by Aida Kalem, Snežana Tadić, Mladen Krstić, Nermin Čabrić and Nedžad Branković
Mathematics 2024, 12(10), 1590; https://doi.org/10.3390/math12101590 - 19 May 2024
Abstract
Modern challenges such as the liberalization of the railway sector and growing demands for sustainability, high-quality services, and user satisfaction set new standards in railway operations. In this context, railway infrastructure managers (RIMs) play a crucial role in ensuring innovative approaches that will strengthen the position of railways in the market by enhancing efficiency and competitiveness. Evaluating their performance is essential for assessing the achieved objectives, and it is conducted through a wide range of key performance indicators (KPIs), which encompass various dimensions of operations. Monitoring and analyzing KPIs are crucial for improving service quality, achieving sustainability, and establishing a foundation for research and development of new strategies in the railway sector. This paper provides a detailed overview and evaluation of KPIs for RIMs. This paper creates a framework for RIM evaluation using various scientific methods, from identifying KPIs to applying complex analysis methods. A novel hybrid model, which integrates the fuzzy Delphi method for aggregating expert opinions on the KPIs’ importance, the extended fuzzy analytic hierarchy process (AHP) method for determining the relative weights of these KPIs, and the ADAM method for ranking RIMs, has been developed in this paper. This approach enables a detailed analysis and comparison of RIMs and their performances, providing the basis for informed decision-making and the development of new strategies within the railway sector. The analysis results provide insight into the current state of railway infrastructure and encourage further efforts to improve the railway sector by identifying key areas for enhancement. The main contributions of the research include a detailed overview of KPIs for RIMs and the development of a hybrid multi-criteria decision making (MCDM) model. The hybrid model represents a significant step in RIM performance analysis, providing a basis for future research in this area. The model is universal and, as such, represents a valuable contribution to MCDM theory. Full article
(This article belongs to the Special Issue Multi-criteria Optimization Models and Methods for Smart Cities)

22 pages, 8339 KiB  
Article
Micro-Grinding Parameter Control of Hard and Brittle Materials Based on Kinematic Analysis of Material Removal
by Hisham Manea, Hong Lu, Qi Liu, Junbiao Xiao and Kefan Yang
Mathematics 2024, 12(10), 1589; https://doi.org/10.3390/math12101589 - 19 May 2024
Abstract
This article explores the intricacies of micro-grinding parameter control for hard and brittle materials, with a specific focus on Zirconia ceramics (ZrO2) and Optical Glass (BK7). Given the increasing demand and application of these materials in various high-precision industries, this study aims to provide a comprehensive kinematic analysis of material removal during the micro-grinding process. For the grinding parameters selected for analysis in this study, the ac-max values lie between 9.55 nm and 67.58 nm. Theoretical modeling of the grinding force considering the brittle and ductile removal phases, frictional effects, the ability of grits to cut the material, and grinding conditions is very important in order to control and optimize the surface grinding process. This research introduces novel models for predicting and optimizing micro-grinding forces effectively. The primary objective is to establish a micro-grinding force model that facilitates the easy manipulation of micro-grinding parameters, thereby optimizing the machining process for these challenging materials. Through experimental investigations conducted on Zirconia ceramics, the paper evaluates a mathematical model of the grinding force, highlighting its significance in predicting and controlling the forces involved in micro-grinding. The suggested model underwent thorough testing to assess its validity, revealing an accuracy with average variances of 6.616% for the normal force and 5.752% for the tangential force. Additionally, the study delves into the coefficient of friction within the grinding process, suggesting a novel frictional force model. This model is assessed through a series of experiments on Optical Glass BK7, aiming to accurately characterize the frictional forces at play during grinding. The empirical results obtained from both sets of experiments (on Zirconia ceramics and Optical Glass BK7) substantiate the efficacy of the proposed models. These findings confirm the models’ capability to accurately describe the force dynamics in the micro-grinding of hard and brittle materials. The research not only contributes to the theoretical understanding of micro-grinding processes but also offers practical insights for enhancing the efficiency and effectiveness of machining operations involving hard and brittle materials. Full article

8 pages, 259 KiB  
Article
Persistence and Stochastic Extinction in a Lotka–Volterra Predator–Prey Stochastically Perturbed Model
by Leonid Shaikhet and Andrei Korobeinikov
Mathematics 2024, 12(10), 1588; https://doi.org/10.3390/math12101588 - 19 May 2024
Abstract
The classical Lotka–Volterra predator–prey model is globally stable and uniformly persistent. However, in real-life biosystems, the extinction of species due to stochastic effects is possible and may occur if the magnitudes of the stochastic effects are large enough. In this paper, we consider the classical Lotka–Volterra predator–prey model under stochastic perturbations. For this model, using an analytical technique based on the direct Lyapunov method and a development of the ideas of R.Z. Khasminskii, we find the precise sufficient conditions for the stochastic extinction of one and both species and, thus, the precise necessary conditions for the stochastic system’s persistence. The stochastic extinction occurs via a process known as the stabilization by noise of the Khasminskii type. Therefore, in order to establish the sufficient conditions for extinction, we found the conditions for this stabilization. The analytical results are illustrated by numerical simulations. Full article
(This article belongs to the Special Issue Stochastic Models in Mathematical Biology, 2nd Edition)
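A minimal Euler–Maruyama sketch of a stochastically perturbed Lotka–Volterra predator–prey system of the kind analysed above; the parameter values and the exact way the noise enters the paper's model are illustrative assumptions.

```python
import numpy as np

def stochastic_lotka_volterra(x0=1.5, y0=0.8, a=1.0, b=1.0, c=1.0, d=1.0,
                              sigma1=0.3, sigma2=0.3, T=50.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of
        dx = x (a - b y) dt + sigma1 x dW1,
        dy = y (-c + d x) dt + sigma2 y dW2.
    Sufficiently large sigma can drive a species to (stochastic) extinction."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), 2)
        x[k + 1] = max(x[k] + x[k] * (a - b * y[k]) * dt + sigma1 * x[k] * dW1, 0.0)
        y[k + 1] = max(y[k] + y[k] * (-c + d * x[k]) * dt + sigma2 * y[k] * dW2, 0.0)
    return x, y

x, y = stochastic_lotka_volterra(sigma1=0.8, sigma2=0.8)
print(f"final prey {x[-1]:.4f}, final predator {y[-1]:.4f}")
```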

18 pages, 611 KiB  
Article
Reliability Assessment of Bridge Structure Using Bilal Distribution
by Ahmed T. Ramadan, Osama Abdulaziz Alamri and Ahlam H. Tolba
Mathematics 2024, 12(10), 1587; https://doi.org/10.3390/math12101587 - 19 May 2024
Abstract
Reliability assessments are pivotal in evaluating system quality and have found extensive application in manufacturing. This research delves into a system comprising five components, one of which is a bridge network. The components are presumed to follow a Bilal lifetime distribution with a failure rate that changes over time. Four distinct methods are employed to enhance the components within the system. This study involves the computation of δ-fractiles and reliability equivalence factors (REFs). Additionally, a numerical case study is provided to elucidate the theoretical findings. Full article
(This article belongs to the Section Engineering Mathematics)
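For context, the reliability of the classical five-component bridge configuration can be computed by conditioning on the bridging component, as sketched below; the Bilal lifetime distribution, its parameters, and the four improvement methods studied in the paper are not reproduced, and the exponential survival function is only a stand-in.

```python
import numpy as np

def bridge_reliability(r1, r2, r3, r4, r5):
    """Reliability of the classic bridge network: two parallel paths 1-4 and 2-5,
    with component 3 bridging their midpoints.
    Decompose on component 3 (working / failed)."""
    bridge_up = (1 - (1 - r1) * (1 - r2)) * (1 - (1 - r4) * (1 - r5))
    bridge_down = 1 - (1 - r1 * r4) * (1 - r2 * r5)
    return r3 * bridge_up + (1 - r3) * bridge_down

# illustrative time-dependent component reliability (exponential stand-in)
t = np.linspace(0.0, 5.0, 6)
r = np.exp(-0.2 * t)                       # same survival function for all five parts
print(np.round(bridge_reliability(r, r, r, r, r), 4))
```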

13 pages, 6907 KiB  
Article
Inverse Scheme to Locally Determine Nonlinear Magnetic Material Properties: Numerical Case Study
by Manfred Kaltenbacher, Andreas Gschwentner, Barbara Kaltenbacher, Stefan Ulbrich and Alice Reinbacher-Köstinger
Mathematics 2024, 12(10), 1586; https://doi.org/10.3390/math12101586 - 19 May 2024
Abstract
We are interested in the determination of the local nonlinear magnetic material behaviour in electrical steel sheets due to cutting and punching effects. For this purpose, the inverse problem has to be solved, where the objective function, which penalises the difference between the measured and the simulated magnetic flux density, has to be minimised under a constraint defined according to the corresponding partial differential equation model. We use the adjoint method to efficiently obtain the gradients of the objective function with respect to the material parameters. The optimisation algorithm is low-memory Broyden–Fletcher–Goldfarb–Shanno (BFGS), the forward and adjoint formulations are solved using the finite element (FE) method and the ill-posedness is handled via Tikhonov regularisation, in combination with the discrepancy principle. Realistic numerical case studies show promising results. Full article
(This article belongs to the Special Issue Numerical Optimization for Electromagnetic Problems)
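A compact sketch of the optimisation structure described above, with a generic linear forward operator standing in for the finite element magnetic-field solve; in this linear setting the adjoint-based gradient reduces to an explicit expression, and the operator, data, and regularisation parameter are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))                 # stand-in forward operator (FE solve in the paper)
p_true = np.sin(np.linspace(0, 3, 20))        # "material parameter" profile to recover
b = A @ p_true + 0.01 * rng.normal(size=40)   # noisy measured flux-density data
alpha = 1e-2                                  # Tikhonov regularisation parameter

def objective_and_grad(p):
    r = A @ p - b
    J = 0.5 * r @ r + 0.5 * alpha * p @ p     # data misfit + Tikhonov penalty
    grad = A.T @ r + alpha * p                # adjoint-style gradient
    return J, grad

res = minimize(objective_and_grad, x0=np.zeros(20), jac=True, method="L-BFGS-B")
print("relative error:", np.linalg.norm(res.x - p_true) / np.linalg.norm(p_true))
```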

19 pages, 7906 KiB  
Article
Abundant New Optical Soliton Solutions to the Biswas–Milovic Equation with Sensitivity Analysis for Optimization
by Md Nur Hossain, Faisal Alsharif, M. Mamun Miah and Mohammad Kanan
Mathematics 2024, 12(10), 1585; https://doi.org/10.3390/math12101585 - 19 May 2024
Abstract
This study extensively explores the Biswas–Milovic equation (BME) with Kerr and power law nonlinearity to extract the unique characteristics of optical soliton solutions. These optical soliton solutions have different applications in the field of precision in optical switching, applications in waveguide design, exploration of nonlinear optical effects, imaging precision, reduced intensity fluctuations, suitability for optical signal processing in optical physics, etc. Through the powerful (G′/G, 1/G)-expansion analytical method, a variety of soliton solutions are expressed in three distinct forms: trigonometric, hyperbolic, and rational expressions. Rigorous validation using Mathematica software ensures precision, while dynamic visual representations vividly portray various soliton patterns such as kink, anti-kink, singular soliton, hyperbolic, dark soliton, and periodic bright soliton solutions. Indeed, a sensitivity analysis was conducted to assess how changes in parameters affect the exact solutions, aiding in the understanding of system behavior and informing decision-making, especially in accurately designing or analyzing real-world optical phenomena. This investigation reveals the significant influence of parameters λ, τ, c, B, and Κ on the precise solutions in Kerr and power law nonlinearities within the BME. Notably, parameter λ exhibits consistently high sensitivity across all scenarios, while parameters τ and c demonstrate pronounced sensitivity in scenario III. The outcomes derived from this method are distinctive and carry significant implications for the dynamics of optical fibers and wave phenomena across various optical systems. Full article
(This article belongs to the Special Issue Exact Solutions and Numerical Solutions of Differential Equations)

28 pages, 17158 KiB  
Article
An Inverse Problem for Estimating Spatially and Temporally Dependent Surface Heat Flux with Thermography Techniques
by Cheng-Hung Huang and Kuan-Chieh Fang
Mathematics 2024, 12(10), 1584; https://doi.org/10.3390/math12101584 - 19 May 2024
Abstract
In this study, an inverse conjugate heat transfer problem is examined to estimate the temporally and spatially dependent unknown surface heat flux using thermography techniques with a thermal camera in a three-dimensional domain. Thermography techniques encompass a broad set of methods and procedures used for capturing and analyzing thermal data, while thermal cameras are specific tools used within those techniques to capture thermal images. In the present study, the interface conditions of the plate and air domains are obtained using perfect thermal contact conditions, and therefore we define the problem studied as an inverse conjugate heat transfer problem. Achieving the simultaneous solution of the continuity, Navier–Stokes, and energy equations within the air domain, alongside the heat conduction equation in the plate domain, presents a more intricate challenge compared to conventional inverse heat conduction problems. The validity of our inverse solutions was verified through numerical simulations, considering various inlet air velocities and plate thicknesses. Notably, it was found that due to the singularity of the gradient of the cost function at the final time point, the estimated results near the final time must be discarded, and exact measurements consistently produce accurate boundary heat fluxes under thin-plate conditions, with air velocity exhibiting no significant impact on the estimates. Additionally, an analysis of measurement errors and their influence on the inverse solutions was conducted. The numerical results conclusively demonstrated that the maximum error when estimating heat flux consistently remained below 3%, and higher measurement noise reduced the accuracy of the heat flux estimation. This underscores the inherent challenges associated with inverse problems and highlights the importance of obtaining accurate measurement data in the problem domain. Full article
(This article belongs to the Special Issue Computational and Analytical Methods for Inverse Problems)

18 pages, 294 KiB  
Article
A New Approach of Complex Fuzzy Ideals in BCK/BCI-Algebras
by Manivannan Balamurugan, Thukkaraman Ramesh, Anas Al-Masarwah and Kholood Alsager
Mathematics 2024, 12(10), 1583; https://doi.org/10.3390/math12101583 - 18 May 2024
Abstract
Complex fuzzy sets, in which the unit disk of the complex plane acts as the codomain of the membership function, extend classical fuzzy sets. The objective of this article is to use complex fuzzy sets in BCK/BCI-algebras. We present the concept of a complex fuzzy subalgebra in a BCK/BCI-algebra and explore its properties. Furthermore, we discuss the modal and level operators of these complex fuzzy subalgebras, highlighting their importance in BCK/BCI-algebras. We study various operations and laws of complex fuzzy systems, including the union, intersection, complement, simple differences, and bounded differences of complex fuzzy ideals within BCK/BCI-algebras. Finally, we provide a computer programming algorithm that implements our complex fuzzy subalgebra/ideal procedures in BCK/BCI-algebras to ease lengthy calculations. Full article
(This article belongs to the Special Issue Advanced Methods in Fuzzy Control and Their Applications)
19 pages, 2235 KiB  
Article
Consumer Default Risk Portrait: An Intelligent Management Framework of Online Consumer Credit Default Risk
by Miao Zhu, Ben-Chang Shia, Meng Su and Jialin Liu
Mathematics 2024, 12(10), 1582; https://doi.org/10.3390/math12101582 - 18 May 2024
Abstract
Online consumer credit services play a vital role in the contemporary consumer market. To foster their sustainable development, it is essential to establish and strengthen the relevant risk management mechanism. This study proposes an intelligent management framework called the consumer default risk portrait (CDRP) to mitigate the default risks associated with online consumer loans. The CDRP framework combines traditional credit information and Internet platform data to depict the portrait of consumer default risks. It consists of four modules: addressing data imbalances, establishing relationships between user characteristics and the default risk, analyzing the influence of different variables on default, and ultimately presenting personalized consumer profiles. Empirical findings reveal that “Repayment Periods”, “Loan Amount”, and “Debt to Income Type” emerge as the three variables with the most significant impact on default. “Repayment Periods” and “Debt to Income Type” demonstrate a positive correlation with default probability, while a lower “Loan Amount” corresponds to a higher likelihood of default. Additionally, our verification highlights that the significance of variables varies across different samples, thereby presenting a personalized portrait from a single sample. In conclusion, the proposed framework provides valuable suggestions and insights for financial institutions and Internet platform managers to improve the market environment of online consumer credit services. Full article
16 pages, 530 KiB  
Article
A Negative Sample-Free Graph Contrastive Learning Algorithm
by Dongming Chen, Mingshuo Nie, Zhen Wang, Huilin Chen and Dongqi Wang
Mathematics 2024, 12(10), 1581; https://doi.org/10.3390/math12101581 - 18 May 2024
Viewed by 305
Abstract
Self-supervised learning is a machine learning paradigm that does not rely on manually labeled data; it learns from abundant unlabeled data by designing pretext tasks that use the input data itself as supervision, yielding more generalizable representations for application in downstream [...] Read more.
Self-supervised learning is a machine learning paradigm that does not rely on manually labeled data; it learns from abundant unlabeled data by designing pretext tasks that use the input data itself as supervision, yielding more generalizable representations for application in downstream tasks. However, current self-supervised learning depends on the selection and number of negative samples and suffers from sample bias after graph data augmentation. In this paper, we investigate these problems and propose a graph contrastive learning algorithm that requires no negative samples. The model uses matrix sketching in the latent space for feature augmentation to reduce sample bias, and it is trained iteratively by driving the cross-correlation matrix of the two views toward a constant matrix, which serves as the objective function. The method does not require negative samples, stop-gradient operations, or momentum updates to prevent self-supervised model collapse. It is compared with 10 graph representation learning algorithms on four datasets for node classification tasks, and the experimental results show that the proposed algorithm achieves good results. Full article
(This article belongs to the Special Issue Complex Network Modeling in Artificial Intelligence Applications)
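The objective described above, driving the cross-correlation matrix of two views toward a constant matrix, can be illustrated with a short Barlow-Twins-style loss. The sketch below is a generic stand-in, not the authors' exact formulation, and it omits the matrix-sketching augmentation step.

```python
# A minimal sketch of the kind of objective described above (in the spirit of
# a Barlow-Twins-style loss), not the authors' exact formulation: the
# cross-correlation matrix of two embedding views is pushed toward the
# identity matrix, so no negative samples are needed.

import torch

def correlation_to_identity_loss(z1, z2, off_diag_weight=5e-3):
    """z1, z2: (N, D) embeddings of the same nodes under two augmentations."""
    n, d = z1.shape
    # Standardize each dimension so the matrix below is a correlation matrix.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / n                      # (D, D) cross-correlation matrix

    on_diag = ((torch.diagonal(c) - 1) ** 2).sum()               # pull diagonal to 1
    off_diag = (c ** 2).sum() - (torch.diagonal(c) ** 2).sum()   # push the rest to 0
    return on_diag + off_diag_weight * off_diag

# Toy usage with random embeddings standing in for two graph views.
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
print(correlation_to_identity_loss(z1, z2).item())
```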
14 pages, 10679 KiB  
Article
A Stochastic Semi-Parametric SEIR Model with Infectivity in an Incubation Period
by Jing Zhang and Tong Jin
Mathematics 2024, 12(10), 1580; https://doi.org/10.3390/math12101580 - 18 May 2024
Viewed by 287
Abstract
This paper introduces stochastic disturbances into a semi-parametric SEIR model with infectivity in the incubation period. The model combines the randomness of disease transmission and the nonlinearity of the transmission rate, providing a flexible framework for a more accurate description of the process of infectious [...] Read more.
This paper introduces stochastic disturbances into a semi-parametric SEIR model with infectivity in the incubation period. The model combines the randomness of disease transmission and the nonlinearity of the transmission rate, providing a flexible framework for a more accurate description of the process of infectious disease transmission. On the basis of the discussion of the deterministic model, the stochastic semi-parametric SEIR model is studied. Firstly, we use Lyapunov analysis to prove the existence and uniqueness of global positive solutions for the model. Secondly, the conditions for disease extinction are established, and appropriate stochastic Lyapunov functions are constructed to discuss the asymptotic behavior of the model’s solution at the disease-free equilibrium point of the deterministic model. Finally, specific transmission functions are enumerated, and the accuracy of the results is demonstrated through numerical simulations. Full article
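The kind of model described above can be simulated directly. The toy Euler-Maruyama sketch below uses a concrete bilinear incidence βS(I + θE) in place of the paper's general semi-parametric transmission function, and all parameter values are made up.

```python
# A toy Euler-Maruyama simulation of a stochastic SEIR model in which exposed
# individuals are also infectious (relative infectivity theta). The paper works
# with a semi-parametric (general) transmission function; a concrete bilinear
# incidence beta*S*(I + theta*E) is used here purely for illustration, and all
# parameter values are invented.

import numpy as np

rng = np.random.default_rng(0)

# Made-up parameters: recruitment, natural death, transmission, relative
# infectivity of the exposed class, incubation rate, recovery rate.
Lam, mu, beta, theta, alpha, gamma = 0.02, 0.02, 0.45, 0.4, 0.2, 0.1
noise = np.array([0.05, 0.05, 0.05, 0.05])      # diffusion intensities

T, dt = 400.0, 0.01
steps = int(T / dt)
S, E, I, R = 0.9, 0.05, 0.05, 0.0

path = np.empty((steps, 4))
for k in range(steps):
    force = beta * S * (I + theta * E)          # bilinear incidence (illustrative)
    dW = rng.normal(0.0, np.sqrt(dt), 4)        # independent Brownian increments
    S += (Lam - force - mu * S) * dt + noise[0] * S * dW[0]
    E += (force - (alpha + mu) * E) * dt + noise[1] * E * dW[1]
    I += (alpha * E - (gamma + mu) * I) * dt + noise[2] * I * dW[2]
    R += (gamma * I - mu * R) * dt + noise[3] * R * dW[3]
    path[k] = S, E, I, R

print("final (S, E, I, R):", np.round(path[-1], 4))
```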
27 pages, 2753 KiB  
Article
Strategic Queueing Behavior of Two Groups of Patients in a Healthcare System
by Youxin Liu, Liwei Liu, Tao Jiang and Xudong Chai
Mathematics 2024, 12(10), 1579; https://doi.org/10.3390/math12101579 - 18 May 2024
Viewed by 283
Abstract
Long waiting times and crowded services characterize the current medical situation in China. In hierarchical healthcare systems especially, because high-quality medical resources are concentrated mainly in comprehensive hospitals, patients crowd into these hospitals, which leads to overcrowding. This paper constructs a [...] Read more.
Long waiting times and crowded services characterize the current medical situation in China. In hierarchical healthcare systems especially, because high-quality medical resources are concentrated mainly in comprehensive hospitals, patients crowd into these hospitals, which leads to overcrowding. This paper constructs a game-theoretical queueing model to analyze the strategic queueing behavior of patients. In such hospitals, patients are divided into first-visit and referred patients, and service consists of two phases, “diagnosis” and “treatment”. We first obtain the expected sojourn time. By defining the patience level of patients, the queueing behavior of patients in equilibrium is studied. The results suggest that as long as patients with low patience levels join the queue, patients with high patience levels also join the queue. As more patients arrive at the hospitals, the queueing behavior of patients with high patience levels may have a negative effect on that of patients with low patience levels. The numerical results also show that the equilibrium behavior deviates from the socially optimal solution; therefore, to reach maximal social welfare, the social planner should adopt regulatory policies to control the arrival rates of patients. Full article
(This article belongs to the Special Issue Queueing Systems Models and Their Applications)
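The equilibrium logic sketched in the abstract, patients joining only when their patience level covers the expected sojourn time, can be illustrated with a stylized fixed-point computation. The sketch below uses a single-class unobservable M/M/1 queue, which is a simplification rather than the paper's two-class, two-phase model, and all rates are illustrative.

```python
# A stylized fixed-point computation, not the paper's model: in an
# unobservable M/M/1 clinic, a patient with patience level theta joins only if
# the expected sojourn time W does not exceed theta. The equilibrium joining
# probability q* solves q = P(theta >= W(q)). Parameter values are invented.

import numpy as np

lam, mu = 4.0, 5.0                      # arrival and service rates (made up)
rng = np.random.default_rng(1)
patience = rng.exponential(scale=1.0, size=100_000)   # sampled patience levels

def sojourn(q):
    """Expected M/M/1 sojourn time under joining probability q (stable regime)."""
    return 1.0 / (mu - lam * q)

q = 1.0
for _ in range(200):                    # damped fixed-point iteration
    w = sojourn(min(q, (mu - 1e-6) / lam))
    q_new = np.mean(patience >= w)
    q = 0.5 * q + 0.5 * q_new

print(f"equilibrium joining probability ~ {q:.3f}, "
      f"expected sojourn ~ {sojourn(q):.3f}")
```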
21 pages, 2247 KiB  
Article
The Lomax-Exponentiated Odds Ratio–G Distribution and Its Applications
by Sudakshina Singha Roy, Hannah Knehr, Declan McGurk, Xinyu Chen, Achraf Cohen and Shusen Pu
Mathematics 2024, 12(10), 1578; https://doi.org/10.3390/math12101578 - 18 May 2024
Viewed by 324
Abstract
This paper introduces the Lomax-exponentiated odds ratio–G (L-EOR–G) distribution, a novel framework designed to adeptly navigate the complexities of modern datasets. It blends theoretical rigor with practical application to surpass the limitations of traditional models in capturing complex data attributes such as heavy [...] Read more.
This paper introduces the Lomax-exponentiated odds ratio–G (L-EOR–G) distribution, a novel framework designed to adeptly navigate the complexities of modern datasets. It blends theoretical rigor with practical application to surpass the limitations of traditional models in capturing complex data attributes such as heavy tails, shaped curves, and multimodality. Through a comprehensive examination of its theoretical foundations and empirical data analysis, this study lays down a systematic theoretical framework by detailing its statistical properties and validates the distribution’s efficacy and robustness in parameter estimation via Monte Carlo simulations. Empirical evidence from real-world datasets further demonstrates the distribution’s superior modeling capabilities, supported by a variety of compelling goodness-of-fit tests. The convergence of theoretical precision and practical utility heralds the L-EOR–G distribution as a groundbreaking advancement in statistical modeling, significantly enhancing precision and adaptability. The new model not only addresses a critical need within statistical modeling but also opens avenues for future research, including the development of more sophisticated estimation methods and the adaptation of the model for various data types, thereby promising to refine statistical analysis and interpretation across a wide array of disciplines. Full article
(This article belongs to the Special Issue New Advances in Applied Probability and Stochastic Processes)
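The Monte Carlo validation mentioned above typically amounts to repeatedly simulating samples, fitting by maximum likelihood, and summarizing bias and RMSE. The sketch below illustrates that generic procedure with SciPy's plain Lomax distribution as a stand-in; the L-EOR–G family itself is defined in the paper and is not reproduced here.

```python
# An illustration of a Monte Carlo check on maximum-likelihood estimation
# (bias and RMSE of the estimated shape), using the plain Lomax distribution
# from SciPy as a stand-in for the L-EOR-G family. All settings are invented.

import numpy as np
from scipy import stats

true_c = 2.5                     # made-up Lomax shape parameter
n, reps = 500, 200
rng = np.random.default_rng(2)

estimates = []
for _ in range(reps):
    sample = stats.lomax.rvs(true_c, size=n, random_state=rng)
    # Fit the shape by MLE with location fixed at 0 and scale fixed at 1.
    c_hat, loc, scale = stats.lomax.fit(sample, floc=0, fscale=1)
    estimates.append(c_hat)

estimates = np.asarray(estimates)
print(f"bias = {estimates.mean() - true_c:+.4f}, "
      f"RMSE = {np.sqrt(np.mean((estimates - true_c) ** 2)):.4f}")
```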