Mathematics, Volume 12, Issue 19 (October-1 2024) – 54 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 1696 KiB  
Article
Electric Vehicle Motor Fault Detection with Improved Recurrent 1D Convolutional Neural Network
by Prashant Kumar, Prince, Ashish Kumar Sinha and Heung Soo Kim
Mathematics 2024, 12(19), 3012; https://doi.org/10.3390/math12193012 - 26 Sep 2024
Abstract
The reliability of electric vehicles (EVs) is crucial for the performance and safety of modern transportation systems. Electric motors are the driving force in EVs, and their maintenance is critical for efficient EV performance. The conventional fault detection methods for motors often struggle with accurately capturing complex spatiotemporal vibration patterns. This paper proposes a recurrent convolutional neural network (RCNN) for effective defect detection in motors, taking advantage of the advances in deep learning techniques. The proposed approach applies long short-term memory (LSTM) layers to capture the temporal dynamics essential for fault detection and convolutional neural network layers to mine local features from the segmented vibration data. This hybrid method helps the model to learn complicated representations and correlations within the data, leading to improved fault detection. Model development and testing are conducted using a sizable dataset that includes various kinds of motor defects under differing operational scenarios. The results demonstrate that, in terms of fault detection accuracy, the proposed RCNN-based strategy performs better than the traditional fault detection techniques. The performance of the model is assessed under varying vibration data noise levels to further guarantee its effectiveness in practical applications.
(This article belongs to the Special Issue Dynamic Modeling and Simulation for Control Systems, 3rd Edition)
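As an illustration of the kind of hybrid architecture the abstract describes, here is a minimal Conv1D + LSTM classifier sketch in Keras. The segment length, layer sizes, and number of fault classes are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a hybrid 1D-CNN + LSTM fault classifier in Keras.
# Layer sizes, segment length, and the number of fault classes are
# illustrative assumptions, not the architecture from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

SEGMENT_LEN = 1024   # samples per vibration segment (assumed)
N_CLASSES = 4        # e.g., healthy + three fault types (assumed)

model = models.Sequential([
    layers.Input(shape=(SEGMENT_LEN, 1)),
    # Convolutional layers mine local features from the raw segment.
    layers.Conv1D(32, kernel_size=16, strides=2, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=8, activation="relu"),
    layers.MaxPooling1D(2),
    # The LSTM layer captures temporal dynamics across the feature sequence.
    layers.LSTM(64),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```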
18 pages, 1329 KiB  
Article
Strain-Rate and Stress-Rate Models of Nonlinear Viscoelastic Materials
by Claudio Giorgi and Angelo Morro
Mathematics 2024, 12(19), 3011; https://doi.org/10.3390/math12193011 - 26 Sep 2024
Abstract
The paper is devoted to the modeling of nonlinear viscoelastic materials. The constitutive equations are considered in differential form via relations between strain, stress, and their derivatives in the Lagrangian description. The thermodynamic consistency is established by using the Clausius–Duhem inequality through a procedure that involves two uncommon features. Firstly, the entropy production is regarded as a positive-valued constitutive function per se. This view implies that the inequality is in fact an equation. Secondly, this statement of the second law is investigated by using an algebraic representation formula, thus arriving at quite general results for rate terms that are usually overlooked in thermodynamic analyses. Starting from strain-rate or stress-rate equations, the corresponding finite equations are derived. It then emerges that the classical models, such as those of Boltzmann and Maxwell, follow as special cases of constitutive equations of greater generality.
(This article belongs to the Special Issue Computational Mechanics and Applied Mathematics)
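For orientation, the rate-type constitutive equations studied here contain the classical models as special cases; a standard textbook instance (not taken from the paper) is the linear Maxwell strain-rate model and its relaxation solution:

```latex
% Linear special case: the Maxwell model as a strain-rate equation.
% The relaxation law follows by integrating the rate equation under a
% step strain (standard result, not taken from the paper).
\begin{align*}
  \dot{\varepsilon} &= \frac{\dot{\sigma}}{E} + \frac{\sigma}{\eta}
    && \text{(Maxwell strain-rate model)} \\
  \sigma(t) &= E\,\varepsilon_0\, e^{-Et/\eta}
    && \text{(stress relaxation under a step strain } \varepsilon_0\text{)}
\end{align*}
```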
20 pages, 532 KiB  
Article
Inverse Probability-Weighted Estimation for Dynamic Structural Equation Model with Missing Data
by Hao Cheng
Mathematics 2024, 12(19), 3010; https://doi.org/10.3390/math12193010 - 26 Sep 2024
Abstract
In various applications, observed variables are missing some information that was intended to be collected. Estimates of both loading and path coefficients can be biased when the missing data are ignored. Inverse probability weighting (IPW) is a well-known method for reducing bias in regression, but it remains a promising yet new approach in structural equation models. The paper proposes both parametric and nonparametric IPW estimation methods for dynamic structural equation models, in which both loading and path coefficients are developed into functions of a random variable and of the quantile level. To improve the computational efficiency, modified parametric IPW and modified nonparametric IPW are developed by reducing inverse probability computations while making fuller use of completely observed information. All the above IPW estimation methods are compared to existing complete case analysis through simulation investigations. Finally, the paper illustrates the proposed model and estimation methods by an empirical study on digital new-quality productivity.
(This article belongs to the Section Computational and Applied Mathematics)
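The core IPW idea the abstract builds on can be sketched in a few lines: model the probability that a record is completely observed, then weight complete cases by the inverse of that probability. This is a generic regression illustration, not the dynamic-SEM estimator developed in the paper.

```python
# Sketch of parametric inverse probability weighting (IPW): estimate the
# probability that a record is completely observed, then weight complete
# cases by the inverse of that probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))                # always-observed covariates
y = z @ np.array([1.0, -0.5]) + rng.normal(size=n)

# Missingness depends on observed covariates (missing at random).
p_miss = 1 / (1 + np.exp(-(0.5 * z[:, 0] - 0.5)))
observed = rng.uniform(size=n) > p_miss

# Step 1: estimate the probability of being observed.
prob_model = LogisticRegression().fit(z, observed.astype(int))
p_obs = prob_model.predict_proba(z)[:, 1]

# Step 2: weighted least squares on complete cases with weights 1/p.
w = 1.0 / p_obs[observed]
X = z[observed]
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[observed]))
print("IPW estimate:", beta)               # close to [1.0, -0.5]
```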
16 pages, 1052 KiB  
Article
A Fixed-Time Event-Triggered Consensus of a Class of Multi-Agent Systems with Disturbed and Non-Linear Dynamics
by Yueqing Wang, Te Wang and Zhi Li
Mathematics 2024, 12(19), 3009; https://doi.org/10.3390/math12193009 - 26 Sep 2024
Abstract
This paper investigates the problem of fixed-time event-triggered consensus control for a class of multi-agent systems with disturbed and non-linear dynamics. A fixed-time consensus protocol based on an event-triggered strategy is proposed, which can ensure a fixed-time event-triggered consensus, reduce energy consumption, and decrease the frequency of controller updates. The control protocol can also be applied to the case when the systems are free of disturbances; it shortens the convergence time of the systems and reduces their energy consumption. Sufficient conditions are proposed for the multi-agent systems with disturbed and non-linear dynamics to achieve the fixed-time event-triggered consensus by using algebraic graph theory, inequalities, fixed-time stability theory, and Lyapunov stability theory. Finally, simulation results show that the proposed control protocol has the advantages of both event-triggered and fixed-time convergence; compared to previous work, the convergence time of the new control protocol is greatly reduced (about 1.5 s) and the number of controller updates is also greatly reduced (fewer than 50), which is consistent with the theoretical results.
(This article belongs to the Special Issue Advance in Control Theory and Optimization)
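For intuition, a toy event-triggered consensus simulation is sketched below: each agent rebroadcasts its state, and the controller is updated, only when its measurement error exceeds a threshold. The static trigger and single-integrator dynamics are simplifying assumptions; the paper's fixed-time protocol and trigger condition are more elaborate.

```python
# Toy event-triggered consensus on an undirected 4-agent cycle: the
# controller is recomputed only at trigger events, so updates are sparse.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
L = np.diag(A.sum(1)) - A                # graph Laplacian

x = np.array([4.0, -1.0, 2.0, -3.0])     # initial states
x_hat = x.copy()                         # states broadcast at last event
u = -L @ x_hat
dt, eps, updates = 1e-3, 0.05, 0         # eps: static trigger threshold (assumed)

for _ in range(int(10 / dt)):
    x += dt * u                          # simple integrator dynamics
    trig = np.abs(x - x_hat) > eps       # event condition per agent
    if trig.any():
        x_hat[trig] = x[trig]            # triggered agents rebroadcast
        u = -L @ x_hat                   # controller updated only at events
        updates += 1

print("final spread:", x.max() - x.min(), "controller updates:", updates)
```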
16 pages, 1062 KiB  
Article
Performance Analysis for Predictive Voltage Stability Monitoring Using Enhanced Adaptive Neuro-Fuzzy Expert System
by Oludamilare Bode Adewuyi and Senthil Krishnamurthy
Mathematics 2024, 12(19), 3008; https://doi.org/10.3390/math12193008 - 26 Sep 2024
Abstract
Intelligent voltage stability monitoring remains an essential feature of modern research into secure operations of power system networks. This research developed an adaptive neuro-fuzzy expert system (ANFIS)-based predictive model to validate the viability of two contemporary voltage stability indices (VSIs) for intelligent voltage stability monitoring, especially at intricate loading and operation points close to voltage collapse. The Novel Line Stability Index (NLSI) and the Critical Boundary Index (CBI) are VSIs deployed extensively for steady-state voltage stability analysis, and thus, they are selected for the predictive model implementation. Six essential power system operational parameters with data values calculated at varying real and reactive loading levels are input features for ANFIS model implementation. The model’s performance is evaluated using reliable statistical error performance analysis in percentages (MAPE and RRMSEp) and regression analysis based on Pearson’s correlation coefficient (R). The IEEE 14-bus and IEEE 118-bus test systems were used to evaluate the prediction model over various network sizes and complexities and at varying clustering radii. The percentage error analysis reveals that the ANFIS predictive model performed well with both VSIs, with CBI performing comparatively better based on the comparative values of MAPE, RRMSEp, and R at multiple simulation runs and clustering radii. Remarkably, CBI showed credible potential as a reliable voltage stability indicator that can be adopted for real-time monitoring, particularly at loading levels near the point of voltage instability.
(This article belongs to the Special Issue Artificial Intelligence Techniques Applications on Power Systems)
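The error measures named in the abstract can be computed as follows; the exact RRMSEp normalization is assumed here to be RMSE divided by the mean of the observations, expressed in percent.

```python
# MAPE, relative RMSE in percent (normalization assumed: RMSE over the
# mean of the observations), and Pearson's R for prediction scoring.
import numpy as np

def mape(y, yhat):
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def rrmse_p(y, yhat):
    return 100.0 * np.sqrt(np.mean((y - yhat) ** 2)) / np.mean(y)

def pearson_r(y, yhat):
    return np.corrcoef(y, yhat)[0, 1]

y = np.array([0.91, 0.85, 0.78, 0.66, 0.52])      # e.g., stability index values
yhat = np.array([0.90, 0.86, 0.75, 0.68, 0.50])
print(mape(y, yhat), rrmse_p(y, yhat), pearson_r(y, yhat))
```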
20 pages, 829 KiB  
Article
Compound Optimum Designs for Clinical Trials in Personalized Medicine
by Belmiro P. M. Duarte, Anthony C. Atkinson, David Pedrosa and Marlena van Munster
Mathematics 2024, 12(19), 3007; https://doi.org/10.3390/math12193007 - 26 Sep 2024
Abstract
We consider optimal designs for clinical trials when response variance depends on treatment and covariates are included in the response model. These designs are generalizations of Neyman allocation and are commonly employed in personalized medicine, where external covariates linearly affect the response. Very often, these designs aim at maximizing the amount of information gathered but fail to assure ethical requirements. We analyze compound optimal designs that maximize a criterion weighting the amount of information and the reward of allocating the patients to the most effective/least risky treatment. We develop a general representation for static (a priori) allocation and propose a semidefinite programming (SDP) formulation to support their numerical computation. This setup is extended assuming the variance and the parameters of the response of all treatments are unknown, and an adaptive sequential optimal design scheme is implemented and used for demonstration. Purely information theoretic designs for the same allocation have been addressed elsewhere, and we use them to support the techniques applied to compound designs.
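As a baseline for the designs being generalized, classical two-arm Neyman allocation assigns patients proportionally to the response standard deviations; a tiny numeric sketch (values illustrative):

```python
# Neyman allocation: with treatment-dependent response variances, the
# information-optimal two-arm allocation is proportional to the standard
# deviations. Numbers are illustrative.
import numpy as np

sigma = np.array([1.0, 2.0])          # std. dev. of response under each treatment
n = 120                               # total patients
weights = sigma / sigma.sum()         # n_i proportional to sigma_i
allocation = np.round(n * weights).astype(int)
print(dict(zip(["treatment A", "treatment B"], allocation)))  # {A: 40, B: 80}
```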
20 pages, 1377 KiB  
Article
TPE-Optimized DNN with Attention Mechanism for Prediction of Tower Crane Payload Moving Conditions
by Muhammad Zeshan Akber, Wai-Kit Chan, Hiu-Hung Lee and Ghazanfar Ali Anwar
Mathematics 2024, 12(19), 3006; https://doi.org/10.3390/math12193006 - 26 Sep 2024
Abstract
Accurately predicting the payload movement and ensuring efficient control during dynamic tower crane operations are crucial for crane safety, including the ability to predict payload mass within a safe or normal range. This research utilizes deep learning to accurately predict the normal and abnormal payload movement of tower cranes. A scaled-down tower crane prototype with a systematic data acquisition system is built to perform experiments and data collection. The data related to 12 test case scenarios are gathered, and each test case represents a specific combination of hoisting and slewing motion and payload mass to counterweight ratio, defining tower crane operational variations. This comprehensive data is investigated using a novel attention-based deep neural network with Tree-Structured Parzen Estimator optimization (TPE-AttDNN). The proposed TPE-AttDNN achieved a prediction accuracy of 0.95 with a false positive rate of 0.08. These results clearly demonstrate the effectiveness of the proposed model in accurately predicting the tower crane payload moving condition. To ensure a more reliable performance assessment of the proposed AttDNN, we carried out ablation experiments that highlighted the significance of the model’s individual components.
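The TPE search named in the title is available off the shelf, for example through Optuna's TPESampler; the sketch below uses a dummy objective in place of training the attention DNN on crane data.

```python
# Tree-structured Parzen Estimator (TPE) hyperparameter search with Optuna.
# The objective is a stand-in for training and validating the attention DNN.
import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    n_units = trial.suggest_int("n_units", 16, 256)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    # ... train the attention DNN here and return the validation loss ...
    return (lr - 1e-3) ** 2 + (n_units - 128) ** 2 * 1e-5 + dropout  # dummy

study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=0),
                            direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```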
16 pages, 2152 KiB  
Article
A Study of GGDP Transition Impact on the Sustainable Development by Mathematical Modelling Investigation
by Nuoya Yue and Junjun Hou
Mathematics 2024, 12(19), 3005; https://doi.org/10.3390/math12193005 - 26 Sep 2024
Abstract
GDP is a common and essential indicator for evaluating a country’s overall economy. However, environmental issues may be overlooked in the pursuit of GDP growth for some countries. It may be beneficial to adopt more sustainable criteria for assessing economic health. In this study, green GDP (GGDP) is discussed using mathematical approaches. Multiple dataset indicators were selected for the evaluation of GGDP and its impact on climate mitigation. The k-means clustering algorithm was utilized to classify 16 countries into three distinct categories for specific analysis. The potential impact of transitioning to GGDP was investigated through changes in a quantitative parameter, the climate impact factor. Ridge regression was applied to predict the impact of switching to GGDP for the three country categories. The consequences of transitioning to GGDP on the quantified improvement of climate indicators were graphically demonstrated over time on a global scale. The entropy weight method (EWM) and TOPSIS were used to obtain the GGDP scores. Countries in category 2, as identified by k-means clustering, were predicted to show the greatest improvement in scores; as one of the world’s largest carbon emitters, China, which belongs to category 2, plays a significant role in global climate governance. A specific analysis of China was performed after obtaining the EWM-TOPSIS results. Gray relational analysis and Pearson correlation were carried out to analyze the relationships between specific indicators, followed by a prediction of CO2 emissions based on the analyzed critical indicators.
(This article belongs to the Special Issue Financial Mathematics and Sustainability)
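The EWM-TOPSIS scoring step mentioned in the abstract can be sketched directly; the 3x4 decision matrix below is made up, with rows as countries and columns as benefit indicators.

```python
# Entropy weight method (EWM) followed by TOPSIS scoring on a toy matrix.
import numpy as np

X = np.array([[0.8, 0.6, 0.9, 0.4],
              [0.5, 0.9, 0.4, 0.7],
              [0.6, 0.5, 0.7, 0.9]])

# EWM: column entropies determine indicator weights.
P = X / X.sum(axis=0)
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)
weights = (1 - entropy) / (1 - entropy).sum()

# TOPSIS: closeness to the ideal / anti-ideal alternatives.
V = weights * X / np.linalg.norm(X, axis=0)
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
score = d_neg / (d_pos + d_neg)
print("TOPSIS scores:", score.round(3))
```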
19 pages, 1400 KiB  
Article
Data Imputation in Electricity Consumption Profiles through Shape Modeling with Autoencoders
by Oscar Duarte, Javier E. Duarte and Javier Rosero-Garcia
Mathematics 2024, 12(19), 3004; https://doi.org/10.3390/math12193004 - 26 Sep 2024
Abstract
In this paper, we propose a novel methodology for estimating missing data in energy consumption datasets. Conventional data imputation methods are not suitable for these datasets, because they are time series with special characteristics and because, for some applications, it is quite important to preserve the shape of the daily energy profile. Our answer to this need is the use of autoencoders. First, we split the problem into two subproblems: how to estimate the total amount of daily energy, and how to estimate the shape of the daily energy profile. We encode the shape as a new feature that can be modeled and predicted using autoencoders. In this way, the problem of imputation of profile data is reduced to two relatively simple problems on which conventional methods can be applied. However, the two predictions are related, so special care should be taken when reconstructing the profile. We show that, as a result, our data imputation methodology produces plausible profiles where other methods fail. We tested it on a highly corrupted dataset, outperforming conventional methods by a factor of 3.7.
(This article belongs to the Special Issue Modeling, Simulation, and Analysis of Electrical Power Systems)
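The decomposition at the heart of the method is easy to state in code: a daily profile splits into its total energy and a unit-sum shape vector, and reconstruction multiplies them back together. The profile below is synthetic and the autoencoder itself is omitted.

```python
# Split a daily profile into total energy and a unit-sum shape vector;
# the shape is the feature the autoencoder would model.
import numpy as np

profile = np.array([0.4, 0.3, 0.3, 0.5, 0.9, 1.4,   # 24 hourly readings (kWh)
                    1.8, 1.6, 1.2, 1.0, 0.9, 1.0,
                    1.1, 1.0, 0.9, 1.0, 1.3, 1.9,
                    2.4, 2.2, 1.8, 1.2, 0.8, 0.5])

total = profile.sum()            # subproblem 1: total daily energy
shape = profile / total          # subproblem 2: shape, sums to one

# Reconstruction couples the two predictions back together.
reconstructed = total * shape
assert np.allclose(reconstructed, profile)
print("total kWh:", total.round(2), "shape max:", shape.max().round(3))
```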
19 pages, 48917 KiB  
Article
OCTNet: A Modified Multi-Scale Attention Feature Fusion Network with InceptionV3 for Retinal OCT Image Classification
by Irshad Khalil, Asif Mehmood, Hyunchul Kim and Jungsuk Kim
Mathematics 2024, 12(19), 3003; https://doi.org/10.3390/math12193003 - 26 Sep 2024
Abstract
Classification and identification of eye diseases using Optical Coherence Tomography (OCT) has been a challenging task and a trending research area in recent years. Accurate classification and detection of different diseases are crucial for effective care management and improving vision outcomes. Current detection methods fall into two main categories: traditional methods and deep learning-based approaches. Traditional approaches rely on machine learning for feature extraction, while deep learning methods utilize data-driven classification model training. In recent years, Deep Learning (DL) and Machine Learning (ML) algorithms have become essential tools, particularly in medical image classification, and are widely used to classify and identify various diseases. However, due to the high spatial similarities in OCT images, accurate classification remains a challenging task. In this paper, we introduce a novel model called “OCTNet” that integrates a deep learning model combining InceptionV3 with a modified multi-scale attention-based spatial attention block to enhance model performance. OCTNet employs an InceptionV3 backbone with a fusion of dual attention modules to construct the proposed architecture. The InceptionV3 model generates rich features from images, capturing both local and global aspects, which are then enhanced by utilizing the modified multi-scale spatial attention block, resulting in a significantly improved feature map. To evaluate the model’s performance, we utilized two state-of-the-art (SOTA) datasets that include images of normal cases, Choroidal Neovascularization (CNV), Drusen, and Diabetic Macular Edema (DME). Through experimentation and simulation, the proposed OCTNet improves the classification accuracy of the InceptionV3 model by 1.3%, yielding higher accuracy than other SOTA models. We also performed an ablation study to demonstrate the effectiveness of the proposed method. The model achieved an overall average accuracy of 99.50% and 99.65% with two different OCT datasets.
30 pages, 557 KiB  
Article
Impulsive Discrete Runge–Kutta Methods and Impulsive Continuous Runge–Kutta Methods for Nonlinear Differential Equations with Delayed Impulses
by Gui-Lai Zhang, Zhi-Yong Zhu, Yu-Chen Wang and Chao Liu
Mathematics 2024, 12(19), 3002; https://doi.org/10.3390/math12193002 - 26 Sep 2024
Abstract
In this paper, we study the asymptotical stability of the exact solutions of nonlinear impulsive differential equations whose dynamic system is given by a Lipschitz continuous function $f(t,x)$ and whose impulsive terms are given by Lipschitz continuous delayed functions $I_k$. In order to obtain numerical methods with a high order of convergence that are capable of preserving the asymptotical stability of the exact solutions of these equations, impulsive discrete Runge–Kutta methods and impulsive continuous Runge–Kutta methods are constructed, respectively. For these different types of numerical methods, different convergence results are obtained, and the sufficient conditions for the asymptotical stability of these numerical methods are also obtained, respectively. Finally, some numerical examples are provided to confirm the theoretical results.
(This article belongs to the Special Issue Advances in Computational Mathematics and Applied Mathematics)
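The discrete-time idea behind impulsive Runge-Kutta methods can be sketched as follows: integrate the smooth dynamics with a classical RK4 step between impulse instants and apply the (delayed) impulse map at each instant. The scalar equation, impulse times, and impulse map are toy choices, not the paper's test problems.

```python
# RK4 between impulse instants, delayed impulse map at each instant.
import numpy as np

f = lambda t, x: -x                      # smooth dynamics x' = f(t, x)
impulse_times = [1.0, 2.0, 3.0]
tau = 0.5                                # impulse delay (toy value)
I_k = lambda x_delayed: 0.5 * x_delayed  # delayed impulse map (toy choice)

def rk4_step(t, x, h):
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h, t, x = 0.01, 0.0, 1.0
history = {round(t, 10): x}              # store past states for the delay
for _ in range(int(4.0 / h)):
    x = rk4_step(t, x, h)
    t = round(t + h, 10)
    history[t] = x
    if any(abs(t - tk) < h / 2 for tk in impulse_times):
        x += I_k(history[round(t - tau, 10)])   # jump with delayed argument
        history[t] = x
print("x(4) ≈", x)
```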
11 pages, 258 KiB  
Article
Lebesgue Spaces and Operators with Complex Gaussian Kernels
by E. R. Negrín, B. J. González and Jeetendrasingh Maan
Mathematics 2024, 12(19), 3001; https://doi.org/10.3390/math12193001 - 26 Sep 2024
Abstract
This paper examines the boundedness properties and Parseval-type relations for operators with complex Gaussian kernels over Lebesgue spaces. Furthermore, this study includes an exploration of the Gauss–Weierstrass semigroup, serving as a particular example within the framework of our analysis.
12 pages, 303 KiB  
Article
Series Over Bessel Functions as Series in Terms of Riemann’s Zeta Function
by Slobodan B. Tričković and Miomir S. Stanković
Mathematics 2024, 12(19), 3000; https://doi.org/10.3390/math12193000 - 26 Sep 2024
Abstract
Relying on the Hurwitz formula, we find closed-form formulas for the series over sine and cosine functions through the Hurwitz zeta functions, and using them and another summation formula for trigonometric series, we obtain a finite sum for some series over the Riemann zeta functions. We apply these results to the series over Bessel functions, expressing them first as series over the Riemann zeta functions.
12 pages, 275 KiB  
Article
Instability of Standing Waves for INLS with Inverse Square Potential
by Saleh Almuthaybiri and Tarek Saanouni
Mathematics 2024, 12(19), 2999; https://doi.org/10.3390/math12192999 - 26 Sep 2024
Abstract
This work studies an inhomogeneous generalized Hartree equation with inverse square potential. The purpose is to prove the existence and strong instability of inter-critical standing waves. This means that there are infinitely many data near the ground state such that the associated solution blows up in finite time. The proof combines a variational analysis with the standard variance identity. The challenge is to deal with three difficulties: the singular potential $|x|^{-2}$, an inhomogeneous term $|x|^{-\lambda}$, and a non-local source term.
13 pages, 775 KiB  
Article
Prediction of Turbulent Boundary Layer Flow Dynamics with Transformers
by Rakesh Sarma, Fabian Hübenthal, Eray Inanc and Andreas Lintermann
Mathematics 2024, 12(19), 2998; https://doi.org/10.3390/math12192998 - 26 Sep 2024
Abstract
Time-marching of turbulent flow fields is computationally expensive using traditional Computational Fluid Dynamics (CFD) solvers. Machine Learning (ML) techniques can be used as an acceleration strategy to offload a few time-marching steps of a CFD solver. In this study, the Transformer (TR) architecture, which has been widely used in the Natural Language Processing (NLP) community for prediction and generative tasks, is utilized to predict future velocity flow fields in an actuated Turbulent Boundary Layer (TBL) flow. A unique data pre-processing step is proposed to reduce the dimensionality of the velocity fields, allowing the processing of full velocity fields of the actuated TBL flow while taking advantage of distributed training in a High Performance Computing (HPC) environment. The trained model is tested at various prediction times using the Dynamic Mode Decomposition (DMD) method. It is found that for up to five future prediction time steps, the TR model achieves a relative Frobenius norm error of less than 5% compared to fields predicted with a Large Eddy Simulation (LES). Finally, a computational study shows that the TR achieves a significant speed-up, with computational savings approximately 53 times greater than those of the baseline LES solver. This study demonstrates one of the first applications of TRs to actuated TBL flow aimed at reducing the computational effort of time-marching. The application of this model is envisioned in a coupled manner with the LES solver to provide a few time-marching steps, which will accelerate the overall computational process.
(This article belongs to the Special Issue Artificial Intelligence for Fluid Mechanics)
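The accuracy metric quoted above, the relative Frobenius norm error between a predicted field and the LES reference, is computed as follows (random arrays stand in for velocity snapshots):

```python
# Relative Frobenius norm error between prediction and LES reference.
import numpy as np

rng = np.random.default_rng(1)
u_les = rng.normal(size=(256, 128))                    # reference LES field
u_pred = u_les + 0.03 * rng.normal(size=u_les.shape)   # model prediction

rel_err = np.linalg.norm(u_pred - u_les) / np.linalg.norm(u_les)
print(f"relative Frobenius error: {100 * rel_err:.2f}%")   # < 5% threshold
```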
35 pages, 4984 KiB  
Article
Integrating Fuzzy MCDM Methods and ARDL Approach for Circular Economy Strategy Analysis in Romania
by Camelia Delcea, Ionuț Nica, Irina Georgescu, Nora Chiriță and Cristian Ciurea
Mathematics 2024, 12(19), 2997; https://doi.org/10.3390/math12192997 - 26 Sep 2024
Abstract
This study investigates the factors influencing CO2 emissions in Romania from 1990 to 2023 using the Autoregressive Distributed Lag (ARDL) model. Before applying the ARDL model, we identified a set of six policies that were ranked using fuzzy ELECTRE, TOPSIS, DEMATEL, and VIKOR. The multi-criteria decision-making (MCDM) methods have highlighted the importance of a circular policy on CO2 emission reduction, which should be a central focus for policymakers. The results of the ARDL model indicate that, in the long term, renewable energy production reduces CO2 emissions, showing a negative relationship. Conversely, an increase in patent applications and urbanization contributes to higher CO2 emissions, reflecting a positive impact. In total, five key factors were analyzed: CO2 emissions per capita, patent applications, gross domestic product, share of energy production from renewables, and urbanization. Notably, GDP does not significantly explain CO2 emissions in the long run, suggesting that economic growth alone is not a direct driver of CO2 emission levels in Romania. This decoupling might result from improvements in energy efficiency, shifts towards less carbon-intensive industries, and the increased adoption of renewable energy sources. Romania has implemented effective environmental regulations and policies that mitigate the impact of economic growth on CO2 emissions.
(This article belongs to the Special Issue Fuzzy Logic and Computational Intelligence)
19 pages, 6172 KiB  
Article
MS-UNet: Multi-Scale Nested UNet for Medical Image Segmentation with Few Training Data Based on an ELoss and Adaptive Denoising Method
by Haoyuan Chen, Yufei Han, Linwei Yao, Xin Wu, Kuan Li and Jianping Yin
Mathematics 2024, 12(19), 2996; https://doi.org/10.3390/math12192996 - 26 Sep 2024
Abstract
Traditional U-shape segmentation models can achieve excellent performance with an elegant structure. However, the single-layer decoder structure of U-Net or SwinUnet is too “thin” to exploit enough information, resulting in large semantic differences between the encoder and decoder parts. Things get worse in the field of medical image processing, where annotated data are more difficult to obtain than other tasks. Based on this observation, we propose a U-like model named MS-UNet with a plug-and-play adaptive denoising module and ELoss for the medical image segmentation task in this study. Instead of the single-layer U-Net decoder structure used in Swin-UNet and TransUNet, we specifically designed a multi-scale nested decoder based on the Swin Transformer for U-Net. The proposed multi-scale nested decoder structure allows for the feature mapping between the decoder and encoder to be semantically closer, thus enabling the network to learn more detailed features. In addition, ELoss could improve the attention of the model to the segmentation edges, and the plug-and-play adaptive denoising module could prevent the model from learning the wrong features without losing detailed information. The experimental results show that MS-UNet could effectively improve network performance with more efficient feature learning capability and exhibit more advanced performance, especially in the extreme case with a small amount of training data. Furthermore, the proposed ELoss and denoising module not only significantly enhance the segmentation performance of MS-UNet but can also be applied individually to other models.
(This article belongs to the Special Issue Machine-Learning-Based Process and Analysis of Medical Images)
24 pages, 2433 KiB  
Article
Generalized Shortest Path Problem: An Innovative Approach for Non-Additive Problems in Conditional Weighted Graphs
by Adrien Durand, Timothé Watteau, Georges Ghazi and Ruxandra Mihaela Botez
Mathematics 2024, 12(19), 2995; https://doi.org/10.3390/math12192995 - 26 Sep 2024
Abstract
The shortest path problem is fundamental in graph theory and has been studied extensively due to its practical importance. Despite this aspect, finding the shortest path between two nodes remains a significant challenge in many applications, as it often becomes complex and time-consuming. This complexity becomes even more challenging when constraints make the problem non-additive, thereby increasing the difficulty of finding the optimal path. The objective of this paper is to present a broad perspective on the conventional shortest path problem. It introduces a new method to classify cost functions associated with graphs by defining distinct sets of cost functions. This classification facilitates the exploration of line graphs and an understanding of the upper bounds on the transformation sizes for these types of graphs. Based on these foundations, the paper proposes a practical methodology for solving non-additive shortest path problems. It also provides a proof of optimality and establishes an upper bound on the algorithmic cost of the proposed methodology. This study not only expands the scope of traditional shortest path problems but also highlights their computational complexity and potential solutions.
(This article belongs to the Special Issue Advances in Graph Theory: Algorithms and Applications)
15 pages, 591 KiB  
Article
Closeness Centrality of Asymmetric Trees and Triangular Numbers
by Nytha Ramanathan, Eduardo Ramirez, Dorothy Suzuki-Burke and Darren A. Narayan
Mathematics 2024, 12(19), 2994; https://doi.org/10.3390/math12192994 - 26 Sep 2024
Abstract
The combinatorial problem in this paper is motivated by a variant of the famous traveling salesman problem where the salesman must return to the starting point after each delivery. The total length of a delivery route is related to a metric known as closeness centrality. The closeness centrality of a vertex v in a graph G was defined in 1950 by Bavelas to be $CC(v)=\frac{|V(G)|-1}{SD(v)}$, where $SD(v)$ is the sum of the distances from v to each of the other vertices (which is one-half of the total distance in the delivery route). We provide a real-world example involving the Metro Atlanta Rapid Transit Authority rail network and identify stations whose SD values are nearly identical, meaning they have a similar proximity to other stations in the network. We then consider theoretical aspects involving asymmetric trees. For integer values of k, we considered the asymmetric tree with paths of lengths $k, 2k, \ldots, nk$ that are incident to a center vertex. We investigated trees with different values of k, and for $k=1$ and $k=2$, we established necessary and sufficient conditions for the existence of two vertices with identical SD values, which has a surprising connection to the triangular numbers. Additionally, we investigated asymmetric trees with paths incident to two vertices and found a sufficient condition for vertices to have equal SD values. This leads to new combinatorial proofs of identities arising from Pascal’s triangle.
(This article belongs to the Special Issue Graph Theory and Applications, 2nd Edition)
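The quantities in the abstract are straightforward to compute with networkx; the sketch below builds the asymmetric spider tree with paths of lengths k, 2k, ..., nk, evaluates SD(v) and CC(v), and lists vertices sharing an SD value.

```python
# SD(v) and closeness centrality CC(v) = (|V(G)|-1)/SD(v) on a spider
# tree with paths of lengths i*k (i = 1..n) joined at a center vertex.
from collections import defaultdict
import networkx as nx

def spider(k, n):
    G, center = nx.Graph(), "c"
    for i in range(1, n + 1):                 # attach a path of length i*k
        prev = center
        for j in range(1, i * k + 1):
            v = f"p{i}_{j}"
            G.add_edge(prev, v)
            prev = v
    return G

G = spider(k=1, n=4)
SD = {v: sum(nx.single_source_shortest_path_length(G, v).values()) for v in G}
CC = {v: (len(G) - 1) / SD[v] for v in G}

# Vertices sharing an SD value have identical closeness centrality.
by_sd = defaultdict(list)
for v, s in SD.items():
    by_sd[s].append(v)
print({s: vs for s, vs in by_sd.items() if len(vs) > 1})
```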
16 pages, 361 KiB  
Article
Perturbation Approach to Polynomial Root Estimation and Expected Maximum Modulus of Zeros with Uniform Perturbations
by Ibrahim A. Nafisah, Sajad A. Sheikh, Mohammed A. Alshahrani, Mohammed M. A. Almazah, Badr Alnssyan and Javid Gani Dar
Mathematics 2024, 12(19), 2993; https://doi.org/10.3390/math12192993 - 26 Sep 2024
Abstract
This paper presents a significant extension of perturbation theory techniques for estimating the roots of polynomials. Building upon foundational results and recent work by Pakdemirli and Yurtsever, as well as taking inspiration from the concept of probabilistic bounds introduced by Sheikh et al., we develop and prove several novel theorems that address a wide range of polynomial structures. These include polynomials with multiple large coefficients, coefficients of different orders, alternating coefficient orders, large linear and constant terms, and exponentially decreasing coefficients. Among the key contributions is a theorem that establishes an upper bound on the expected maximum modulus of the zeros of polynomials with uniformly distributed perturbations in their coefficients. The theorem considers the case where all but the leading coefficient receive a uniformly and independently distributed perturbation in the interval $[-1,1]$. Our approach provides a comprehensive framework for estimating the order of magnitude of polynomial roots based on the structure and magnitude of their coefficients without the need for explicit root-finding algorithms. The results offer valuable insights into the relationship between coefficient patterns and root behavior, extending the applicability of perturbation-based root estimation to a broader class of polynomials. This work has potential applications in various fields, including random polynomials, control systems design, signal processing, and numerical analysis, where quick and reliable estimation of polynomial roots is crucial. Our findings contribute to the theoretical understanding of polynomial properties and provide practical tools for engineers and scientists dealing with polynomial equations in diverse contexts.
(This article belongs to the Special Issue Complex Analysis and Geometric Function Theory, 2nd Edition)
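The setting of the expected-maximum-modulus bound can be probed empirically: perturb all but the leading coefficient by independent Uniform[-1, 1] noise and record the maximum root modulus across trials. The base polynomial is arbitrary, and numpy.roots performs the root-finding that the theory itself avoids.

```python
# Monte Carlo probe of the perturbed-polynomial setting: all but the
# leading coefficient get independent Uniform[-1, 1] perturbations.
import numpy as np

rng = np.random.default_rng(42)
base = np.array([1.0, 2.0, -3.0, 0.5, 4.0])   # leading coefficient first

max_moduli = []
for _ in range(10_000):
    perturbed = base.copy()
    perturbed[1:] += rng.uniform(-1.0, 1.0, size=base.size - 1)
    max_moduli.append(np.abs(np.roots(perturbed)).max())

print("expected max modulus ≈", np.mean(max_moduli).round(3))
print("sample max:", np.max(max_moduli).round(3))
```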
18 pages, 609 KiB  
Article
Few-Shot Classification Based on Sparse Dictionary Meta-Learning
by Zuo Jiang, Yuan Wang and Yi Tang
Mathematics 2024, 12(19), 2992; https://doi.org/10.3390/math12192992 - 26 Sep 2024
Abstract
In the field of Meta-Learning, traditional methods for addressing few-shot learning problems often rely on leveraging prior knowledge for rapid adaptation. However, when faced with insufficient data, meta-learning models frequently encounter challenges such as overfitting and limited feature extraction capabilities. To overcome these challenges, an innovative meta-learning approach based on Sparse Dictionary and Consistency Learning (SDCL) is proposed. The distinctive feature of SDCL is the integration of sparse representation and consistency regularization, designed to acquire both broadly applicable general knowledge and task-specific meta-knowledge. Through sparse dictionary learning, SDCL constructs compact and efficient models, enabling the accurate transfer of knowledge from the source domain to the target domain, thereby enhancing the effectiveness of knowledge transfer. Simultaneously, consistency regularization generates synthetic data similar to existing samples, expanding the training dataset and alleviating data scarcity issues. The core advantage of SDCL lies in its ability to preserve key features while ensuring stronger generalization and robustness. Experimental results demonstrate that the proposed meta-learning algorithm significantly improves model performance under limited training data conditions, particularly excelling in complex cross-domain tasks. On average, the algorithm improves accuracy by 3%.
15 pages, 383 KiB  
Article
A Covariance-Free Strictly Complex-Valued Relevance Vector Machine for Reducing the Order of Linear Time-Invariant Systems
by Weixiang Xie and Jie Song
Mathematics 2024, 12(19), 2991; https://doi.org/10.3390/math12192991 - 25 Sep 2024
Abstract
Multiple-input multiple-output (MIMO) linear time-invariant (LTI) systems exhibit enormous computational costs for high-dimensional problems. To address this problem, we propose a novel approach for reducing the dimensionality of MIMO systems. The method leverages the Takenaka–Malmquist basis and incorporates the strictly complex-valued relevance vector machine (SCRVM). We refer to this method as covariance-free maximum likelihood (CoFML). The proposed method avoids the explicit computation of the covariance matrix. CoFML solves multiple linear systems to obtain the required posterior statistics for covariance. This is achieved by exploiting the preconditioning matrix and the matrix diagonal element estimation rule. We provide theoretical justification for this approximation and show why our method scales well in high-dimensional settings. By employing the CoFML algorithm, we approximate MIMO systems in parallel, resulting in significant computational time savings. The effectiveness of this method is demonstrated through three well-known examples.
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
17 pages, 411 KiB  
Article
Zero-Inflated Binary Classification Model with Elastic Net Regularization
by Hua Xin, Yuhlong Lio, Hsien-Ching Chen and Tzong-Ru Tsai
Mathematics 2024, 12(19), 2990; https://doi.org/10.3390/math12192990 - 25 Sep 2024
Abstract
Zero inflation and overfitting can reduce the accuracy of machine learning models for characterizing binary data sets. A zero-inflated Bernoulli (ZIBer) model can be the right model to characterize zero-inflated binary data sets. When the ZIBer model is used to characterize zero-inflated binary data sets, overcoming the overfitting problem is still an open question. To mitigate overfitting in the ZIBer model, the negative log-likelihood function of the ZIBer model with the elastic net regularization rule as an overfitting penalty is proposed as the loss function. An estimation procedure to minimize the loss function is developed in this study using the gradient descent method (GDM) with a momentum term in the learning rate. The proposed estimation method has two advantages. First, it is a general method that simultaneously uses L1- and L2-norm penalty terms and includes the ridge and least absolute shrinkage and selection operator methods as special cases. Second, the momentum learning rate can accelerate the convergence of the GDM and enhance the computational efficiency of the proposed estimation procedure. The parameter selection strategy is studied, and the performance of the proposed method is evaluated using Monte Carlo simulations. A diabetes example is used as an illustration.
(This article belongs to the Special Issue Current Developments in Theoretical and Applied Statistics)
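The estimation machinery described above, gradient descent with a momentum term on a penalized negative log-likelihood, can be sketched compactly; for brevity the likelihood below is plain logistic regression rather than the full ZIBer mixture.

```python
# Momentum gradient descent on a negative log-likelihood with an elastic
# net penalty (logistic likelihood as a simplified stand-in for ZIBer).
import numpy as np

rng = np.random.default_rng(7)
n, p = 500, 4
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -2.0, 0.0, 0.0])
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)

lam, alpha = 0.05, 0.5          # elastic net: lam * (alpha*L1 + (1-alpha)*L2)
beta = np.zeros(p)
velocity = np.zeros(p)
gamma, lr = 0.9, 0.1            # momentum coefficient and step size

for _ in range(2000):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (mu - y) / n                       # NLL gradient
    grad += lam * (alpha * np.sign(beta) + (1 - alpha) * 2 * beta)
    velocity = gamma * velocity + lr * grad         # momentum accumulates
    beta -= velocity
print("estimate:", beta.round(3))                   # near [1.5, -2.0, 0, 0], shrunk
```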
26 pages, 2093 KiB  
Article
Assessment of Femoral Head Sphericity Using Coordinate Data through Modified Differential Evolution Approach
by Syed Hammad Mian, Zeyad Almutairi and Mohamed K. Aboudaif
Mathematics 2024, 12(19), 2989; https://doi.org/10.3390/math12192989 - 25 Sep 2024
Abstract
Coordinate measuring machines (CMMs) are utilized to acquire coordinate data from manufactured surfaces for inspection purposes. These data are employed to gauge the geometric form errors associated with the surface. An optimization procedure of fitting a substitute surface to the measured points is applied to assess the form error. Since the traditional least-squares approach is susceptible to overestimation, it leads to unreasonable rejections. This paper implements a modified differential evolution (DE) algorithm to estimate the minimum zone femoral head sphericity. In this algorithm, opposition-based learning is considered for population initialization, and an adaptive scheme is enacted for the scaling factor and crossover probability. The coefficients of the correlation factor and the uncertainty propagation are also measured so that the result’s uncertainty can be determined. Undoubtedly, the credibility and plausibility of inspection outcomes are strengthened by evaluating measurement uncertainty. Several data sets are used to corroborate the outcome of the DE algorithm. CMM validation shows that the modified DE algorithm can measure sphericity with high precision and consistency. This algorithm allows for an adequate initial solution and adaptability to address a wide range of industrial problems. It ensures a proper balance between exploitation and exploration capabilities. Thus, the suggested methodology, based on the computational results, is feasible for the online deployment of the sphericity evaluation. The adopted DE strategy is simple to use, has few controlling variables, and is computationally less expensive. It guarantees a robust solution and can be used to compute different form errors.
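A minimum-zone sphericity fit of the kind described can be sketched with SciPy's stock differential evolution (without the paper's opposition-based initialization and adaptive parameters): find the center minimizing the radial band over the measured points.

```python
# Minimum-zone sphericity from coordinate data via differential evolution:
# minimize (max radius - min radius) over candidate centers.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
# Synthetic CMM points: near-sphere of radius 14 mm with small form error.
u, v = rng.uniform(0, np.pi, 200), rng.uniform(0, 2 * np.pi, 200)
r = 14.0 + 0.02 * rng.normal(size=200)
pts = np.c_[r * np.sin(u) * np.cos(v), r * np.sin(u) * np.sin(v), r * np.cos(u)]

def zone_width(center):
    radii = np.linalg.norm(pts - center, axis=1)
    return radii.max() - radii.min()

result = differential_evolution(zone_width, bounds=[(-1, 1)] * 3, seed=3,
                                tol=1e-10)
print("center:", result.x.round(5), "sphericity:", result.fun.round(5))
```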
20 pages, 1029 KiB  
Article
Tensor Network Space-Time Spectral Collocation Method for Time-Dependent Convection-Diffusion-Reaction Equations
by Dibyendu Adak, Duc P. Truong, Gianmarco Manzini, Kim Ø. Rasmussen and Boian S. Alexandrov
Mathematics 2024, 12(19), 2988; https://doi.org/10.3390/math12192988 - 25 Sep 2024
Abstract
Emerging tensor network techniques for solutions of partial differential equations (PDEs), known for their ability to break the curse of dimensionality, deliver new mathematical methods for ultra-fast numerical solutions of high-dimensional problems. Here, we introduce a Tensor Train (TT) Chebyshev spectral collocation method, in both space and time, for the solution of the time-dependent convection-diffusion-reaction (CDR) equation with inhomogeneous boundary conditions, in Cartesian geometry. Previous methods for the numerical solution of time-dependent PDEs often used finite differences for time and a spectral scheme for the spatial dimensions, which led to slow linear convergence. Spectral collocation space-time methods show exponential convergence; however, for realistic problems they need to solve large four-dimensional systems. We overcome this difficulty by using a TT approach, as its complexity only grows linearly with the number of dimensions. We show that our TT space-time Chebyshev spectral collocation method converges exponentially when the solution of the CDR is smooth, and demonstrate that it leads to a very high compression of linear operators from terabytes to kilobytes in TT-format, and a speedup of tens of thousands of times when compared to a full-grid space-time spectral method. These advantages allow us to obtain the solutions at much higher resolutions.
(This article belongs to the Section Computational and Applied Mathematics)
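The basic ingredient of any Chebyshev spectral collocation scheme is the differentiation matrix on Chebyshev-Gauss-Lobatto nodes; the sketch below follows Trefethen's classic construction and verifies spectral accuracy. The TT decomposition that makes the space-time solver scale is not shown.

```python
# Chebyshev differentiation matrix on Gauss-Lobatto nodes (Trefethen).
import numpy as np

def cheb(N):
    """Differentiation matrix D and nodes x on [-1, 1], N+1 points."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative row-sum trick for diagonal
    return D, x

D, x = cheb(16)
err = np.abs(D @ np.sin(np.pi * x) - np.pi * np.cos(np.pi * x)).max()
print(f"max derivative error on 17 nodes: {err:.2e}")   # spectral accuracy
```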
17 pages, 450 KiB  
Article
Equilibrium Strategies for Overtaking-Free Queueing Networks under Partial Information
by David Barbato, Alberto Cesaro and Bernardo D’Auria
Mathematics 2024, 12(19), 2987; https://doi.org/10.3390/math12192987 - 25 Sep 2024
Abstract
We investigate the equilibrium strategies for customers arriving at overtaking-free queueing networks and receiving partial information about the system’s state. In an overtaking-free network, customers cannot be overtaken by others arriving after them. We assume that customer arrivals follow a Poisson process and that service times at any queue are independent and exponentially distributed. Upon arrival, the received partial information is the total number of customers already in the network; however, the distribution of these among the queues is left unknown. Adding rewards for being served and costs for waiting, we analyze the economic behavior of this system, looking for equilibrium threshold strategies. The overtaking-free characteristic allows for coupling of its dynamics with those of corresponding closed Jackson networks, for which an algorithm to compute the expected sojourn times is known. We exploit this feature to compute the profit function and prove the existence of equilibrium threshold strategies. We also illustrate the results by analyzing and comparing two simple network structures.
(This article belongs to the Special Issue Advances in Queueing Theory and Applications)
14 pages, 337 KiB  
Article
Norm Estimates for Remainders of Noncommutative Taylor Approximations for Laplace Transformers Defined by Hyperaccretive Operators
by Danko R. Jocić
Mathematics 2024, 12(19), 2986; https://doi.org/10.3390/math12192986 - 25 Sep 2024
Abstract
Let $H$ be a separable complex Hilbert space, $\mathcal{B}(H)$ the algebra of bounded linear operators on $H$, $\mu$ a finite Borel measure on $\mathbb{R}_+$ with the finite $(n+1)$-th moment, $f(z):=\int_{\mathbb{R}_+}e^{-tz}\,d\mu(t)$ for all $\Re z\ge 0$, and $\mathcal{C}_\Psi(H)$ and $\|\cdot\|_\Psi$ the ideal of compact operators and the norm associated to a symmetrically norming function $\Psi$, respectively. If $A,D\in\mathcal{B}(H)$ are accretive, then the Laplace transformer on $\mathcal{B}(H)$, $X\mapsto\int_{\mathbb{R}_+}e^{-tA}Xe^{-tD}\,d\mu(t)$, is well defined for any $X\in\mathcal{B}(H)$, as is the newly introduced Taylor remainder transformer $$R_n(f;D,A)X:=f(A)X-\sum_{k=0}^{n}\frac{1}{k!}\sum_{i=0}^{k}(-1)^i\binom{k}{i}A^{k-i}XD^{i}f^{(k)}(D).$$ If $A,D^*$ are also $(n+1)$-accretive, $\sum_{k=0}^{n+1}(-1)^k\binom{n+1}{k}A^{n+1-k}XD^k\in\mathcal{C}_\Psi(H)$ and $\|\cdot\|_\Psi$ is a $Q^*$ norm, then $\|\cdot\|_\Psi$ norm estimates for $$\Bigl(\sum_{k=0}^{n+1}\tbinom{n+1}{k}A^{*k}A^{n+1-k}\Bigr)^{1/2}R_n(f;D,A)X\,\Bigl(\sum_{k=0}^{n+1}\tbinom{n+1}{k}D^{n+1-k}D^{*k}\Bigr)^{1/2}$$ are obtained as special cases of the presented estimates for (also newly introduced) Taylor remainder transformers related to a pair of Laplace transformers, defined by a subclass of accretive operators.
16 pages, 1683 KiB  
Article
Projection-Uniform Subsampling Methods for Big Data
by Yuxin Sun, Wenjun Liu and Ye Tian
Mathematics 2024, 12(19), 2985; https://doi.org/10.3390/math12192985 - 25 Sep 2024
Abstract
The idea of experimental design has been widely used in subsampling algorithms to extract a small portion of big data that carries useful information for statistical modeling. Most existing subsampling algorithms of this kind are model-based and designed to achieve the corresponding optimality criteria for the model. However, data generating models are frequently unknown or complicated. Model-free subsampling algorithms are needed for obtaining samples that are robust under model misspecification and complication. This paper introduces two novel algorithms, called the Projection-Uniform Subsampling algorithm and its extension. Both algorithms aim to extract a subset of samples from big data that are space-filling in low-dimensional projections. We show that subdata obtained from our algorithms perform superiorly under the uniform projection criterion and centered L2-discrepancy. Comparisons among our algorithms, model-based and model-free methods are conducted through two simulation studies and two real-world case studies. We demonstrate the robustness of our proposed algorithms in building statistical models in scenarios involving model misspecification and complication.
(This article belongs to the Special Issue Advances in Statistical AI and Causal Inference)
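The centered L2-discrepancy used to judge the subsamples has a standard closed-form expression for points scaled to [0, 1]^d; lower values indicate a more uniform, space-filling subset.

```python
# Centered L2-discrepancy (Hickernell's closed form) for points in [0, 1]^d.
import numpy as np

def centered_l2_discrepancy(X):
    n, d = X.shape
    z = np.abs(X - 0.5)
    term1 = (13.0 / 12.0) ** d
    term2 = (2.0 / n) * np.prod(1 + 0.5 * z - 0.5 * z**2, axis=1).sum()
    # pairwise product over dimensions
    diff = np.abs(X[:, None, :] - X[None, :, :])
    zi, zj = z[:, None, :], z[None, :, :]
    term3 = np.prod(1 + 0.5 * zi + 0.5 * zj - 0.5 * diff, axis=2).sum() / n**2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(0)
random_subset = rng.uniform(size=(50, 2))
print("CD2 of a random 50-point subset:", centered_l2_discrepancy(random_subset))
```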
18 pages, 3636 KiB  
Article
Magnetotelluric Forward Modeling Using a Non-Uniform Grid Finite Difference Method
by Hui Zhang and Fajian Nie
Mathematics 2024, 12(19), 2984; https://doi.org/10.3390/math12192984 - 25 Sep 2024
Abstract
Magnetotelluric (MT) forward modeling is essential in geophysical exploration, enabling the investigation of the Earth’s subsurface electrical conductivity. Traditional finite difference methods (FDMs) typically use uniform grids, which can be computationally inefficient and fail to accurately capture complex geological structures. This study addresses these challenges by introducing a non-uniform grid-based FDM for MT forward modeling. The proposed method optimizes computational resources by varying grid resolution, offering finer grids in areas with complex geology and coarser grids in more homogeneous regions. We apply this method to both typical synthetic models and a complex fault structure case study, demonstrating its capability to accurately resolve subsurface features while reducing computational costs. The results highlight the method’s effectiveness in capturing fine-scale details that are often missed by uniform grid approaches. The conclusions drawn from this study suggest that the non-uniform grid FDM not only improves the accuracy of MT modeling but also enhances its efficiency, making it a valuable tool for geophysical exploration in challenging environments.
(This article belongs to the Topic Analytical and Numerical Models in Geo-Energy)
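The core stencil of a non-uniform grid FDM is the three-point second-derivative approximation with unequal spacings, derived from Taylor expansions. The 1D graded-grid demonstration below is illustrative; the MT solver applies the same idea to the induction equations in higher dimensions.

```python
# Three-point second derivative on a non-uniform grid with spacings h1, h2:
# u'' ≈ 2*(h1*u[i+1] - (h1+h2)*u[i] + h2*u[i-1]) / (h1*h2*(h1+h2)).
import numpy as np

x = np.r_[np.linspace(0.0, 1.0, 21),    # fine grid near the structure
          np.linspace(1.1, 3.0, 10)]    # coarse grid farther away
u = np.sin(x)

d2u = np.empty_like(u)
for i in range(1, len(x) - 1):
    h1, h2 = x[i] - x[i - 1], x[i + 1] - x[i]
    d2u[i] = 2 * (h1 * u[i + 1] - (h1 + h2) * u[i] + h2 * u[i - 1]) \
             / (h1 * h2 * (h1 + h2))

# sin'' = -sin, so the interior error should be small on both grid zones.
print("max interior error:", np.abs(d2u[1:-1] + np.sin(x[1:-1])).max())
```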
19 pages, 11976 KiB  
Article
Synchronization of Chaotic Extremum-Coded Random Number Generators and Its Application to Segmented Image Encryption
by Shunsuke Araki, Ji-Han Wu and Jun-Juh Yan
Mathematics 2024, 12(19), 2983; https://doi.org/10.3390/math12192983 - 25 Sep 2024
Abstract
This paper proposes a highly secure image encryption technique based on chaotic synchronization. Firstly, through the design of a synchronization controller, we ensure that the master–slave chaotic extremum-coded random number generators (ECRNGs) embedded in separated transmitters and receivers are fully synchronized to provide synchronized dynamic random sequences for image encryption applications. Next, combining these synchronized chaotic sequences with the AES encryption algorithm, we propose an image segmentation and multi-encryption method to enhance the security of encrypted images and realize a secure image transmission system. Notably, in the design of the synchronization controller, the transient time before complete synchronization between the master and slave ECRNGs is effectively controlled by specifying the eigenvalues of the matrix in the synchronization error dynamics. Research results in this paper also show that complete synchronization of ECRNGs can be achieved within a single sampling time, which significantly contributes to the time efficiency of the image transmission system. As for the image encryption technique, we propose the method of image segmentation and use the synchronized dynamic random sequences generated by the ECRNGs to produce the keys and initialization vectors (IVs) required for AES-CBC image encryption, greatly enhancing the security of the encrypted images. To highlight the contribution of the proposed segmented image encryption, statistical analyses are conducted on the encrypted images, including histogram analysis (HA), information entropy (IE), correlation coefficient analysis (CCA), number of pixels change rate (NPCR), and unified average changing intensity (UACI), and compared with existing literature. The comparative results fully demonstrate that the proposed encryption method significantly enhances image encryption performance. Finally, under the network transmission control protocol (TCP), the synchronization of ECRNGs, dynamic keys, and IVs is implemented as well as segmented image encryption and transmission, and a highly secure image transmission system is realized to validate the practicality and feasibility of our design.
(This article belongs to the Special Issue New Advances in Coding Theory and Cryptography, 2nd Edition)
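The per-segment AES-CBC step can be sketched as follows, with a logistic map standing in for the synchronized chaotic ECRNG pair (a placeholder for illustration, not a cryptographically vetted generator); requires pycryptodome.

```python
# Segmented AES-CBC encryption: each image segment gets its own key and
# IV drawn from a pseudo-random byte stream. The logistic map below is a
# stand-in for the paper's synchronized ECRNGs, for illustration only.
import numpy as np
from Crypto.Cipher import AES
from Crypto.Util.Padding import pad

def chaotic_bytes(n, x0=0.37, r=3.99):
    """Logistic-map byte stream (placeholder for the synchronized ECRNG)."""
    out, x = bytearray(), x0
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

image = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
segments = np.array_split(np.frombuffer(image.tobytes(), dtype=np.uint8), 4)

stream = chaotic_bytes(4 * (16 + 16))          # one key + IV per segment
ciphertexts = []
for s, seg in enumerate(segments):
    key = stream[s * 32: s * 32 + 16]          # 128-bit key (assumed size)
    iv = stream[s * 32 + 16: s * 32 + 32]
    cipher = AES.new(key, AES.MODE_CBC, iv)
    ciphertexts.append(cipher.encrypt(pad(seg.tobytes(), AES.block_size)))
print("segments encrypted:", len(ciphertexts))
```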