Mathematics, Volume 11, Issue 19 (October-1 2023) – 202 articles

Cover Story: This paper uses a mathematical model to describe the influence of social contagion on the formation of financial bubbles. The dynamics of the proposed model replicate historical bubble price trends based on the spread of optimism and pessimism among investors, which in turn controls supply and demand in the market. The unsustainable growth phase in the early stages of a bubble is driven by large numbers of optimists/bulls, while the abrupt collapse phase is driven by an increasing number of pessimists/bears. The market eventually returns to rational equilibrium at the fundamental asset value. This study reinforces the central role of behavioral phenomena in the life cycle of asset bubbles.
21 pages, 1646 KiB  
Article
A Hybrid Model to Explore the Barriers to Enterprise Energy Storage System Adoption
by James J. H. Liou, Peace Y. L. Liu and Sun-Weng Huang
Mathematics 2023, 11(19), 4223; https://doi.org/10.3390/math11194223 - 9 Oct 2023
Viewed by 1537
Abstract
Using green energy is an important way for businesses to achieve their ESG goals and ensure sustainable operations. Currently, however, green energy is not a stable source of power, and this instability poses certain risks to normal business operations and manufacturing processes. The installation of energy storage equipment has become an indispensable accompaniment to facilitating green energy use for an enterprise. However, businesses may encounter significant barriers during the process of installing energy storage equipment. This study aims to explore and discern the key barrier factors that influence the assessment and decision-making process of installing energy storage equipment. A hybrid approach combining the Decision-Making Trial and Evaluation Laboratory (DEMATEL) and Interpretive Structural Modeling (ISM) is developed to explore the causal relationships and degrees of influence among these key factors. The Z-number and Rough Dombi Weighted Geometric Averaging (RDWGA) methods are also utilized to integrate the experts’ varied opinions and uncertain judgements. Finally, recommendations are provided based on the results to assist businesses in making informed decisions when evaluating the installation of energy storage equipment, to ensure a stable and uninterrupted supply of green energy for use in normal operations. Full article
(This article belongs to the Special Issue Multi-criteria Decision Making and Data Mining, 2nd Edition)
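For orientation, the core DEMATEL computation mentioned in this abstract can be sketched as follows; the direct-influence ratings and the four barrier factors are hypothetical, and the Z-number/RDWGA aggregation and the ISM stage of the paper are not reproduced here.

    import numpy as np

    def dematel_total_relation(D):
        """DEMATEL core step: normalize the direct-influence matrix D and compute
        the total-relation matrix T = N (I - N)^-1, plus prominence and relation."""
        s = max(D.sum(axis=1).max(), D.sum(axis=0).max())
        N = D / s                                    # normalized direct influence
        T = N @ np.linalg.inv(np.eye(len(D)) - N)    # total (direct + indirect) influence
        r, c = T.sum(axis=1), T.sum(axis=0)
        return T, r + c, r - c                       # prominence (r+c), relation (r-c)

    # hypothetical 0-4 expert ratings among four barrier factors
    D = np.array([[0, 3, 2, 1],
                  [2, 0, 3, 1],
                  [1, 2, 0, 3],
                  [1, 1, 2, 0]], dtype=float)
    T, prominence, relation = dematel_total_relation(D)
    print(np.round(prominence, 2), np.round(relation, 2))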
38 pages, 4712 KiB  
Article
Large-Scale Simulation of Shor’s Quantum Factoring Algorithm
by Dennis Willsch, Madita Willsch, Fengping Jin, Hans De Raedt and Kristel Michielsen
Mathematics 2023, 11(19), 4222; https://doi.org/10.3390/math11194222 - 9 Oct 2023
Cited by 4 | Viewed by 4612
Abstract
Shor’s factoring algorithm is one of the most anticipated applications of quantum computing. However, the limited capabilities of today’s quantum computers only permit a study of Shor’s algorithm for very small numbers. Here, we show how large GPU-based supercomputers can be used to assess the performance of Shor’s algorithm for numbers that are out of reach for current and near-term quantum hardware. First, we study Shor’s original factoring algorithm. While theoretical bounds suggest success probabilities of only 3–4%, we find average success probabilities above 50%, due to a high frequency of “lucky” cases, defined as successful factorizations despite unmet sufficient conditions. Second, we investigate a powerful post-processing procedure, by which the success probability can be brought arbitrarily close to one, with only a single run of Shor’s quantum algorithm. Finally, we study the effectiveness of this post-processing procedure in the presence of typical errors in quantum processing hardware. We find that the quantum factoring algorithm exhibits a particular form of universality and resilience against the different types of errors. The largest semiprime that we have factored by executing Shor’s algorithm on a GPU-based supercomputer, without exploiting prior knowledge of the solution, is 549,755,813,701 = 712,321 × 771,781. We put forward the challenge of factoring, without oversimplification, a non-trivial semiprime larger than this number on any quantum computing device. Full article
(This article belongs to the Special Issue Mathematical Perspectives on Quantum Computing and Communication)
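As background to the post-processing discussed above, the sketch below factors a small semiprime from the multiplicative order of a base modulo N, with a brute-force order search standing in for the quantum order-finding step; this is an illustrative toy, not the authors' GPU-based simulation.

    from math import gcd

    def order(a, n):
        """Classical stand-in for quantum order finding: smallest r with a^r = 1 (mod n)."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_postprocess(n, a):
        """Try to split n using the order r of a modulo n (Shor's classical post-processing)."""
        g = gcd(a, n)
        if g > 1:
            return g, n // g            # "lucky" case: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 1:
            return None                 # odd order: this base yields no factor
        y = pow(a, r // 2, n)
        if y == n - 1:
            return None                 # trivial square root of 1: no factor either
        p = gcd(y - 1, n)
        return (p, n // p) if 1 < p < n else None

    print(shor_postprocess(15, 7))      # (3, 5)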
22 pages, 4328 KiB  
Article
Industrial Application of the ANFIS Algorithm—Customer Satisfaction Assessment in the Dairy Industry
by Nikolina Ljepava, Aleksandar Jovanović and Aleksandar Aleksić
Mathematics 2023, 11(19), 4221; https://doi.org/10.3390/math11194221 - 9 Oct 2023
Cited by 2 | Viewed by 1473
Abstract
As a part of the food industry, the dairy industry is one of the most important sectors of the process industry, keeping in mind the number of employees in that sector, the share in the total industrial production, and the overall value added. Many strategies have been developed over time to satisfy customer needs and assess customer satisfaction. This paper proposes an innovative model based on adaptive neuro-fuzzy inference system (ANFIS) and elements of the ACSI (American customer satisfaction index) for assessing and monitoring the level of customer satisfaction in a dairy manufacturing company where there are no large seasonal variations. In terms of an innovative approach, the base of fuzzy logic rules is determined by applying the fuzzy Delphi technique for the application of the ANFIS algorithm and assessment of customer satisfaction. The verification of the model is delivered by testing a real sample from a company of the dairy industry. As decisions on the strategic company level may be impacted by customer satisfaction, the company management should choose the most precise methodology for customer satisfaction assessment. The results are compared with other methods in terms of mean absolute deviation (MAD), mean squared error (MSE), and mean absolute percentage error (MAPE). Results show that ANFIS outperformed other methods used for assessing the level of customer satisfaction, such as case-based reasoning and multiple linear regression. Full article
14 pages, 552 KiB  
Article
Application of LADMM and As-LADMM for a High-Dimensional Partially Linear Model
by Aifen Feng, Xiaogai Chang, Jingya Fan and Zhengfen Jin
Mathematics 2023, 11(19), 4220; https://doi.org/10.3390/math11194220 - 9 Oct 2023
Viewed by 925
Abstract
This paper mainly studies the application of the linearized alternating direction method of multipliers (LADMM) and the accelerated symmetric linearized alternating direction method of multipliers (As-LADMM) for high-dimensional partially linear models. First, we construct an ℓ1 penalty for the least squares estimation of partially linear models under constrained contours. Next, we design the LADMM algorithm to solve the model, in which the linearization technique is introduced to linearize one of the subproblems to obtain an approximate solution. Furthermore, we add appropriate acceleration techniques to form the As-LADMM algorithm and to solve the model. Then numerical simulations are conducted to compare and analyze the effectiveness of the algorithms. The results indicate that the As-LADMM algorithm is better than the LADMM algorithm in terms of the mean squared error, the number of iterations, and the running time of the algorithm. Finally, we apply them to the practical problem of predicting Boston housing prices. The loss between the predicted and actual values is relatively small, and the As-LADMM algorithm has a good prediction effect. Full article
(This article belongs to the Section Computational and Applied Mathematics)
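To make the ℓ1-penalized least-squares core of these models concrete, the sketch below runs a plain (non-linearized, non-accelerated) ADMM loop on synthetic data; it illustrates the soft-thresholding subproblem that LADMM-type methods solve and is not the authors' As-LADMM.

    import numpy as np

    def soft_threshold(v, t):
        """Proximal operator of t*||.||_1 (the l1 shrinkage step)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def lasso_admm(A, b, lam, rho=1.0, n_iter=200):
        """Plain ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1 (illustrative only)."""
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        Atb = A.T @ b
        L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))   # factor once, reuse each iteration
        for _ in range(n_iter):
            x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
            z = soft_threshold(x + u, lam / rho)            # l1 subproblem in closed form
            u = u + x - z                                   # dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(np.round(lasso_admm(A, b, lam=0.1), 2))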
15 pages, 1167 KiB  
Article
Triclustering Implementation Using Hybrid δ-Trimax Particle Swarm Optimization and Gene Ontology Analysis on Three-Dimensional Gene Expression Data
by Titin Siswantining, Maria Armelia Sekar Istianingrum, Saskya Mary Soemartojo, Devvi Sarwinda, Noval Saputra, Setia Pramana and Rully Charitas Indra Prahmana
Mathematics 2023, 11(19), 4219; https://doi.org/10.3390/math11194219 - 9 Oct 2023
Viewed by 1388
Abstract
Triclustering is a data mining method for grouping data based on similar characteristics. The main purpose of a triclustering analysis is to obtain an optimal tricluster, which has a minimum mean square residue (MSR) and a maximum tricluster volume. The triclustering method has been developed using many approaches, such as optimization methods. In this study, hybrid δ-Trimax particle swarm optimization was proposed for use in a triclustering analysis. In general, hybrid δ-Trimax PSO consists of two phases: initialization of the population using a node deletion algorithm in the δ-Trimax method and optimization of the tricluster using the binary PSO method. The method was implemented on three-dimensional gene expression data from plateau-phase lung cancer cells treated with motexafin gadolinium (MGd). From this implementation, a tricluster that potentially consists of a group of genes with a highly specific response to MGd was obtained. This type of tricluster can then serve as a guideline for further research related to the development of MGd drugs as anti-cancer therapy. Full article
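The mean square residue (MSR) objective mentioned above can be written down directly for a tricluster stored as a 3-D array; the additive-model definition below follows the usual δ-Trimax-style formulation and uses synthetic data for illustration.

    import numpy as np

    def msr_tricluster(X):
        """Mean square residue of a tricluster X with axes (gene, sample, time point),
        using the additive-model residue a_ijk - a_iJK - a_IjK - a_IJk + 2*a_IJK."""
        m_i = X.mean(axis=(1, 2), keepdims=True)   # per-gene mean
        m_j = X.mean(axis=(0, 2), keepdims=True)   # per-sample mean
        m_k = X.mean(axis=(0, 1), keepdims=True)   # per-time-point mean
        residue = X - m_i - m_j - m_k + 2 * X.mean()
        return float((residue ** 2).mean())

    rng = np.random.default_rng(1)
    additive = rng.normal(size=(5, 1, 1)) + rng.normal(size=(1, 4, 1)) + rng.normal(size=(1, 1, 3))
    print(round(msr_tricluster(additive), 6))                      # additive pattern -> MSR ~ 0
    print(round(msr_tricluster(rng.normal(size=(5, 4, 3))), 3))    # pure noise -> larger MSR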
12 pages, 286 KiB  
Article
Pontryagin Maximum Principle for Incommensurate Fractional-Orders Optimal Control Problems
by Faïçal Ndaïrou and Delfim F. M. Torres
Mathematics 2023, 11(19), 4218; https://doi.org/10.3390/math11194218 - 9 Oct 2023
Cited by 1 | Viewed by 1545
Abstract
We introduce a new optimal control problem where the controlled dynamical system depends on multi-order (incommensurate) fractional differential equations. The cost functional to be maximized is of Bolza type and depends on incommensurate Caputo fractional-orders derivatives. We establish continuity and differentiability of the state solutions with respect to perturbed trajectories. Then, we state and prove a Pontryagin maximum principle for incommensurate Caputo fractional optimal control problems. Finally, we give an example, illustrating the applicability of our Pontryagin maximum principle. Full article
(This article belongs to the Special Issue Recent Research on Fractional Calculus: Theory and Applications)
14 pages, 568 KiB  
Article
Novel Kinds of Fractional λ–Kinetic Equations Involving the Generalized Degenerate Hypergeometric Functions and Their Solutions Using the Pathway-Type Integral
by Mohammed Z. Alqarni and Mohamed Abdalla
Mathematics 2023, 11(19), 4217; https://doi.org/10.3390/math11194217 - 9 Oct 2023
Cited by 1 | Viewed by 1011
Abstract
In recent years, fractional kinetic equations (FKEs) involving various special functions have been widely used to describe and solve significant problems in control theory, biology, physics, image processing, engineering, astrophysics, and many others. This current work proposes a new solution to fractional λ-kinetic equations based on generalized degenerate hypergeometric functions (GDHFs), which has the potential to be applied to calculate changes in the chemical composition of stars such as the sun. Furthermore, this expanded form can also help to solve various problems with phenomena in physics, such as fractional statistical mechanics, anomalous diffusion, and fractional quantum mechanics. Moreover, some of the well-known outcomes are just special cases of this class of pathway-type solutions involving GDHFs, with greater accuracy, while providing an easily calculable solution. Additionally, numerical graphs of these analytical solutions, using MATLAB Software (latest version 2023b), are also considered. Full article
22 pages, 456 KiB  
Article
Young Duality for Variational Inequalities and Nonparametric Method of Demand Analysis in Input–Output Models with Inputs Substitution: Application for Kazakhstan Economy
by Seyit Kerimkhulle, Nataliia Obrosova, Alexander Shananin and Akylbek Tokhmetov
Mathematics 2023, 11(19), 4216; https://doi.org/10.3390/math11194216 - 9 Oct 2023
Cited by 36 | Viewed by 1384
Abstract
The global macroeconomic shocks of the last decade entail the restructuring of national production networks and induce processes of input substitution. We suggest mathematical tools of Young duality for variational inequalities for studying these processes. Based on the tools we provide, a new mathematical model of a production network with several final consumers is created. The model is formulated as a pair of conjugated problems: a complementarity problem for optimal resource allocation with neoclassical production functions and the Young dual problem for equilibrium price indices on network products. The solution of these problems gives an equilibrium point in the space of network inter-industry flows and price indices on goods. Based on our previous results, we suggest an algorithm for model identification with an official economic statistic in the case of constant elasticity of substitution production functions. We give an explicit solution to the complementarity problems in this case and develop the algorithm of the inter-industry flows scenario projection. Since the algorithm needs the scenario projection of final sales structure as its input, we suggest a modified methodology that allows the calculation of scenario shifts in final consumer spending. To do this, we employ the generalized nonparametric method of demand analysis. As a result, we develop new technology for scenario calculation of a national input–output table, including shifts in final consumer spending. The technology takes into account a substitution of inputs in the network and is based on officially published national statistics data. The application of the methodology to study tax collection scenarios for Kazakhstan’s production network is demonstrated. Full article
(This article belongs to the Special Issue Mathematical Programming, Optimization and Operations Research)
21 pages, 5494 KiB  
Article
Kinematic Analysis of a Tendon-Driven Hybrid Rigid–Flexible Four-Bar; Application to Optimum Dimensional Synthesis
by Alfonso Hernández, Aitor Muñoyerro, Mónica Urízar and Oscar Altuzarra
Mathematics 2023, 11(19), 4215; https://doi.org/10.3390/math11194215 - 9 Oct 2023
Cited by 1 | Viewed by 1172
Abstract
In design matters, mechanisms with deformable elements are a step behind those with rigid bars, particularly if dimensional synthesis is considered a fundamental part of mechanism design. For the purposes of this work, a hybrid rigid–flexible four-bar mechanism has been chosen, the input bar being a continuum tendon of constant curvature. The coupler curves are noticeably more complex but offer more possibilities than the classical rigid four-bar counterpart. One of the objectives of this work is to completely characterize the coupler curves of this hybrid rigid–flexible mechanism, determining the number and type of circuits as well as constituent branches. Another important aim is to apply optimization techniques to the dimensional synthesis of path generation. Considerable progress in finding the best design solutions can be obtained if all the acquired knowledge about the coupler curves of this hybrid mechanism is integrated into the optimization algorithm. Full article
(This article belongs to the Section Engineering Mathematics)
20 pages, 500 KiB  
Article
Dual-Neighborhood Search for Solving the Minimum Dominating Tree Problem
by Ze Pan, Xinyun Wu and Caiquan Xiong
Mathematics 2023, 11(19), 4214; https://doi.org/10.3390/math11194214 - 9 Oct 2023
Viewed by 1045
Abstract
The minimum dominating tree (MDT) problem consists of finding a minimum weight subgraph from an undirected graph, such that each vertex not in this subgraph is adjacent to at least one of the vertices in it, and the subgraph is connected without any ring structures. This paper presents a dual-neighborhood search (DNS) algorithm for solving the MDT problem, which integrates several distinguishing features, such as two neighborhoods collaboratively working for optimizing the objective function, a fast neighborhood evaluation method to boost the searching effectiveness, and several diversification techniques to help the searching process jump out of the local optimum trap thus obtaining better solutions. DNS improves the previous best-known results for four public benchmark instances while providing competitive results for the remaining ones. Several ingredients of DNS are investigated to demonstrate the importance of the proposed ideas and techniques. Full article
(This article belongs to the Special Issue Advanced Graph Theory and Combinatorics)
12 pages, 273 KiB  
Article
Feedback Control Techniques for a Discrete Dynamic Macroeconomic Model with Extra Taxation: An Algebraic Algorithmic Approach
by Stelios Kotsios
Mathematics 2023, 11(19), 4213; https://doi.org/10.3390/math11194213 - 9 Oct 2023
Viewed by 833
Abstract
In this paper, a model matching feedback law design technique is applied to a macroeconomic model. We calculate, using computational algebra methodology, which paths of government expenditure and extra taxation will lead the system to a desired dynamic behavior. The solution is based on algebraic methods and the development, in computer algebra software, of appropriate symbolic algorithms that produce a class of feedback laws as solutions. A method for solving a linear algebraic system of polynomial equations is provided, as well as its application to the feedback law design. Full article
(This article belongs to the Special Issue Latest Advances in Mathematical Economics)
19 pages, 2691 KiB  
Article
Autoregression, First Order Phase Transition, and Stochastic Resonance: A Comparison of Three Models for Forest Insect Outbreaks
by Vladislav Soukhovolsky, Anton Kovalev, Yulia Ivanova and Olga Tarasova
Mathematics 2023, 11(19), 4212; https://doi.org/10.3390/math11194212 - 9 Oct 2023
Viewed by 995
Abstract
Three models of abundance dynamics for forest insects that depict the development of outbreak populations were analyzed. We studied populations of the Siberian silkmoth Dendrolimus sibiricus Tschetv. in Siberia and the Far East of Russia, as well as a population of the pine looper Bupalus piniarius L. in Thuringia, Germany. The first model (autoregression) characterizes the mechanism where current population density is dependent on population densities in previous k years. The second model considers an outbreak as analogous to a first-order phase transition in physical systems and characterizes the outbreak as a transition through a potential barrier from a low-density state to a high-density state. The third model treats an outbreak as an effect of stochastic resonance influenced by a cyclical factor such as solar activity and the “noise” of weather parameters. The discussion focuses on the prediction effectiveness of abundance dynamics and outbreak development for each model. Full article
25 pages, 5960 KiB  
Article
Cooperative Guidance Strategy for Active Spacecraft Protection from a Homing Interceptor via Deep Reinforcement Learning
by Weilin Ni, Jiaqi Liu, Zhi Li, Peng Liu and Haizhao Liang
Mathematics 2023, 11(19), 4211; https://doi.org/10.3390/math11194211 - 9 Oct 2023
Cited by 1 | Viewed by 1231
Abstract
The cooperative active defense guidance problem for a spacecraft with active defense is investigated in this paper. An engagement between a spacecraft, an active defense vehicle, and an interceptor is considered, where the target spacecraft with active defense will attempt to evade the interceptor. Prior knowledge uncertainty and observation noise are taken into account simultaneously, which are vital for traditional guidance strategies such as the differential-game-based guidance method. In this setting, we propose an intelligent cooperative active defense (ICAAI) guidance strategy based on deep reinforcement learning. ICAAI effectively coordinates defender and target maneuvers to achieve successful evasion with less prior knowledge and in the presence of observation noise. Furthermore, we introduce an efficient and stable convergence (ESC) training approach employing reward shaping and curriculum learning to tackle the sparse reward problem in ICAAI training. Numerical experiments are included to demonstrate ICAAI’s real-time performance, convergence, adaptiveness, and robustness through the learning process and Monte Carlo simulations. The learning process showcases improved convergence efficiency with ESC, while simulation results illustrate ICAAI’s enhanced robustness and adaptiveness compared to optimal guidance laws. Full article
16 pages, 5927 KiB  
Article
Research on Precursor Information of Brittle Rock Failure through Acoustic Emission
by Weiguang Ren, Chaosheng Wang, Yang Zhao and Dongjie Xue
Mathematics 2023, 11(19), 4210; https://doi.org/10.3390/math11194210 - 9 Oct 2023
Cited by 1 | Viewed by 1029
Abstract
Dynamic failure of surrounding rock often causes many casualties and financial losses. Predicting the precursory characteristics of rock failure is of great significance in preventing and controlling the dynamic failure of surrounding rock. In this paper, a triaxial test of granite is carried out, and the acoustic emission events are monitored during the test. The fractal characteristics of the acoustic emission events’ energy distribution and time sequence are analyzed. The correlation dimension and the b value are used to study the size distribution and sequential characteristics. Furthermore, a rock failure prediction method is proposed, with the correlation dimension as the main index and the b value as a secondary index for the precursor of granite failure. The study shows that: (1) The failure process can be divided into an initial stage, active stage, quiet stage, and failure stage. (2) Both the b value and the correlation dimension can describe the process of rock failure, showing a continuous decline before failure; however, because of the complexity of field conditions, it is difficult to accurately estimate the stability of surrounding rock using a single index. (3) Combining the b value and the correlation dimension yields a new method that can accurately represent the stability of the surrounding rock. When the correlation dimension is increasing, the surrounding rock is stable and the stress is adjusting. When the correlation dimension is decreasing and the b value remains unchanged after briefly rising, the surrounding rock is stable and the stress adjustment is finished. When the correlation dimension and b value are both decreasing, the surrounding rock will be destroyed. Full article
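As background to the b value used as an index above, the sketch below applies the classical Aki maximum-likelihood estimator to a synthetic magnitude catalogue; the paper's actual acoustic emission data and estimation settings are not reproduced.

    import numpy as np

    def b_value(magnitudes, m_min):
        """Aki maximum-likelihood estimate of the Gutenberg-Richter b value
        for events at or above the completeness magnitude m_min."""
        m = np.asarray(magnitudes, dtype=float)
        m = m[m >= m_min]
        return np.log10(np.e) / (m.mean() - m_min)

    # synthetic catalogue: magnitudes above m_min = 1.0, generated with a true b of about 1.2
    rng = np.random.default_rng(2)
    mags = 1.0 + rng.exponential(scale=np.log10(np.e) / 1.2, size=5000)
    print(round(b_value(mags, 1.0), 2))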
13 pages, 300 KiB  
Article
On the Lifeline Game of the Inertial Players with Integral and Geometric Constraints
by Bahrom Samatov, Gafurjan Ibragimov, Bahodirjon Juraev and Massimiliano Ferrara
Mathematics 2023, 11(19), 4209; https://doi.org/10.3390/math11194209 - 9 Oct 2023
Viewed by 913
Abstract
In this paper, we consider a pursuit–evasion game of inertial players, where the pursuer’s control is subject to integral constraint and the evader’s control is subject to geometric constraint. In the pursuit problem, the main tool is the strategy of parallel pursuit. Sufficient conditions are obtained for the solvability of pursuit–evasion problems. Additionally, the main lemma describing the monotonicity of an attainability domain of the evader is proved, and an explicit analytical formula for this domain is given. One of the main results of the paper is the solution of the Isaacs lifeline game for a special case. Full article
(This article belongs to the Special Issue Differential Games and Its Applications, 2nd Edition)
18 pages, 4011 KiB  
Article
Deep Learning Algorithms for Behavioral Analysis in Diagnosing Neurodevelopmental Disorders
by Hasan Alkahtani, Zeyad A. T. Ahmed, Theyazn H. H. Aldhyani, Mukti E. Jadhav and Ahmed Abdullah Alqarni
Mathematics 2023, 11(19), 4208; https://doi.org/10.3390/math11194208 - 9 Oct 2023
Cited by 2 | Viewed by 2026
Abstract
Autism spectrum disorder (ASD), or autism, can be diagnosed based on a lack of behavioral skills and social communication. The most prominent method of diagnosing ASD in children is observing the child’s behavior, including some of the signs that the child repeats. Hand flapping is a common stimming behavior in children with ASD. This research paper aims to identify children’s abnormal behavior, which might be a sign of autism, using videos recorded in a natural setting during the children’s regular activities. Specifically, this study seeks to classify self-stimulatory activities, such as hand flapping, as well as normal behavior in real-time. Two deep learning video classification methods are trained on the publicly available Self-Stimulatory Behavior Dataset (SSBD). The first method is VGG-16-LSTM, which uses VGG-16 for spatial feature extraction and long short-term memory (LSTM) networks for temporal features. The second method is a long-term recurrent convolutional network (LRCN) that learns spatial and temporal features jointly in end-to-end training. The VGG-16-LSTM achieved an accuracy of 0.93 on the testing set, while the LRCN model achieved an accuracy of 0.96. Full article
22 pages, 1324 KiB  
Article
Unit Exponential Probability Distribution: Characterization and Applications in Environmental and Engineering Data Modeling
by Hassan S. Bakouch, Tassaddaq Hussain, Marina Tošić, Vladica S. Stojanović and Najla Qarmalah
Mathematics 2023, 11(19), 4207; https://doi.org/10.3390/math11194207 - 9 Oct 2023
Cited by 11 | Viewed by 2029
Abstract
Distributions with bounded support show considerable sparsity over those with unbounded support, despite the fact that there are a number of real-world contexts where observations take values from a bounded range (proportions, percentages, and fractions are typical examples). For proportion modeling, a flexible family of two-parameter distribution functions associated with the exponential distribution is proposed here. The mathematical and statistical properties of the novel distribution are examined, including the quantiles, mode, moments, hazard rate function, and its characterization. The parameter estimation procedure using the maximum likelihood method is carried out, and applications to environmental and engineering data are also considered. To this end, various statistical tests are used, along with some other information criterion indicators to determine how well the model fits the data. The proposed model is found to be the most efficient plan in most cases for the datasets considered. Full article
(This article belongs to the Special Issue New Advances in Distribution Theory and Its Applications)
19 pages, 2553 KiB  
Article
Methodology for Assessing the Risks of Regional Competitiveness Based on the Kolmogorov–Chapman Equations
by Galina Chernyshova, Irina Veshneva, Anna Firsova, Elena L. Makarova and Elena A. Makarova
Mathematics 2023, 11(19), 4206; https://doi.org/10.3390/math11194206 - 9 Oct 2023
Viewed by 962
Abstract
The relevance of research on competitiveness at the meso level is related to the contemporary views of a region as an essential element of the economic space. The development of forecasting and analytical methods at the regional level of the economy is a key task in the process of strategic decision making. This article proposes a method of quantitative assessment of the risks of regional competitiveness. The novelty of this approach is based on both a fixed-point risk assessment and scenario-based predictive analysis. A hierarchical structure of indicators of the competitiveness of regions is offered. A method based on the Kolmogorov–Chapman equations was used for the predictive estimation of risks of regional competitiveness. The integrated risk assessment is performed using the modified fuzzy ELECTRE II method. A web application has been implemented to assess the risks of competitiveness of Russian regions. The functionality of this application provides the use of multi-criteria decision-making methods based on a fuzzy logic approach to estimate risks at a specified time, calculating the probability of risk events and their combinations in the following periods and visualizing the results. The technique was tested on 78 Russian regions under various scenarios. The analysis of the results obtained provides an opportunity to identify the riskiest factors of regional competitiveness and to distinguish regions with different risk levels. Full article
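To illustrate the Kolmogorov–Chapman machinery behind the scenario analysis, the sketch below propagates the state probabilities of a hypothetical three-state risk chain with the forward equations dp/dt = pQ (SciPy assumed available); the states, transition intensities, and indicator structure are invented for illustration only.

    import numpy as np
    from scipy.linalg import expm

    # hypothetical generator matrix Q (rows sum to zero) for three risk states:
    # 0 = no risk event, 1 = single risk event, 2 = combined risk event
    Q = np.array([[-0.30,  0.25,  0.05],
                  [ 0.10, -0.20,  0.10],
                  [ 0.02,  0.08, -0.10]])

    def state_probabilities(p0, Q, t):
        """Solve the Kolmogorov forward equations dp/dt = p Q as p(t) = p0 expm(Q t)."""
        return p0 @ expm(Q * t)

    p0 = np.array([1.0, 0.0, 0.0])       # start with no risk event
    for t in (1.0, 5.0, 50.0):
        print(t, np.round(state_probabilities(p0, Q, t), 3))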
20 pages, 5974 KiB  
Article
Voiceprint Recognition under Cross-Scenario Conditions Using Perceptual Wavelet Packet Entropy-Guided Efficient-Channel-Attention–Res2Net–Time-Delay-Neural-Network Model
by Shuqi Wang, Huajun Zhang, Xuetao Zhang, Yixin Su and Zhenghua Wang
Mathematics 2023, 11(19), 4205; https://doi.org/10.3390/math11194205 - 9 Oct 2023
Cited by 2 | Viewed by 1425
Abstract
(1) Background: Voiceprint recognition technology uses individual vocal characteristics for identity authentication and faces many challenges in cross-scenario applications. The sound environment, device characteristics, and recording conditions in different scenarios cause changes in sound features, which, in turn, affect the accuracy of voiceprint recognition. (2) Methods: Based on the latest trends in deep learning, this paper uses the perceptual wavelet packet entropy (PWPE) method to extract the basic voiceprint features of the speaker before using the efficient channel attention (ECA) block and the Res2Net block to extract deep features. The PWPE block removes the effect of environmental noise on voiceprint features, so the perceptual wavelet packet entropy-guided ECA–Res2Net–Time-Delay-Neural-Network (PWPE-ECA-Res2Net-TDNN) model shows an excellent robustness. The ECA-Res2Net-TDNN block uses temporal statistical pooling with a multi-head attention mechanism to weight frame-level audio features, resulting in a weighted average of the final representation of the speech-level feature vectors. The sub-center ArcFace loss function is used to enhance intra-class compactness and inter-class differences, avoiding classification via output value alone like the softmax loss function. Based on the aforementioned elements, the PWPE-ECA-Res2Net-TDNN model for speaker recognition is designed to extract speaker feature embeddings more efficiently in cross-scenario applications. (3) Conclusions: The experimental results demonstrate that, compared to the ECAPA-TDNN model using MFCC features, the PWPE-based ECAPA-TDNN model performs better in terms of cross-scene recognition accuracy, exhibiting a stronger robustness and better noise resistance. Furthermore, the model maintains a relatively short recognition time even under the highest recognition rate conditions. Finally, a set of ablation experiments targeting each module of the proposed model is conducted. The results indicate that each module contributes to an improvement in the recognition performance. Full article
16 pages, 4459 KiB  
Article
Research on Short-Term Passenger Flow Prediction of LSTM Rail Transit Based on Wavelet Denoising
by Qingliang Zhao, Xiaobin Feng, Liwen Zhang and Yiduo Wang
Mathematics 2023, 11(19), 4204; https://doi.org/10.3390/math11194204 - 9 Oct 2023
Cited by 2 | Viewed by 1637
Abstract
Urban rail transit offers advantages such as high safety, energy efficiency, and environmental friendliness. With cities rapidly expanding, travelers are increasingly using rail systems, heightening demands for passenger capacity and efficiency while also pressuring these networks. Passenger flow forecasting is an essential part of transportation systems. Short-term passenger flow forecasting for rail transit can estimate future station volumes, providing valuable data to guide operations management and mitigate congestion. This paper investigates short-term forecasting for Suzhou’s Shantang Street station. Shantang Street’s high commercial presence and distinct weekday versus weekend ridership patterns make it an interesting and representative test case. Wavelet denoising and Long Short-Term Memory (LSTM) were combined to predict short-term flows, comparing the results to those of standalone LSTM, Support Vector Regression (SVR), Artificial Neural Network (ANN), and Autoregressive Integrated Moving Average (ARIMA) models. This study illustrates that the algorithms adopted exhibit good performance for passenger prediction. The LSTM model with wavelet denoising proved most accurate, demonstrating applicability for short-term rail transit forecasting and practical significance. The research findings can provide fundamental recommendations for implementing appropriate passenger flow control measures at stations and offer effective references for predicting passenger flow and mitigating traffic pressure in various cities. Full article
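The wavelet-denoising pre-processing step can be sketched with PyWavelets (assumed available); the wavelet basis, the universal soft threshold, and the synthetic hourly flow series below are illustrative choices rather than the paper's settings, and the LSTM stage is omitted.

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_denoise(x, wavelet="db4", level=3):
        """Soft-threshold wavelet denoising with the universal threshold,
        as a generic cleaning step before fitting a forecasting model."""
        coeffs = pywt.wavedec(x, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest details
        thr = sigma * np.sqrt(2 * np.log(len(x)))             # universal threshold
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(x)]

    # synthetic one-week hourly passenger counts with a daily cycle plus noise
    t = np.arange(7 * 24)
    flow = 500 + 300 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(3).normal(0, 40, t.size)
    print(np.round(wavelet_denoise(flow)[:5], 1))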
27 pages, 2412 KiB  
Article
Sustainable Evaluation of Major Third-Party Logistics Providers: A Framework of an MCDM-Based Entropy Objective Weighting Method
by Chia-Nan Wang, Ngoc-Ai-Thy Nguyen and Thanh-Tuan Dang
Mathematics 2023, 11(19), 4203; https://doi.org/10.3390/math11194203 - 9 Oct 2023
Cited by 3 | Viewed by 2051
Abstract
This study aims to efficiently assist decision makers in evaluating global third-party logistics (3PL) providers from the perspectives of economic, social, and environmental sustainability and explore the determinants of the 3PL providers’ performance. In doing so, an integrated framework for an MCDM-based entropy objective weighting method is proposed for the first time in a logistics industry assessment. In the first stage, the entropy method defines the weight of the decision criteria based on real data collected from the top 15 global 3PL providers. This study lists the prominent quantitative evaluation criteria, taking into consideration the sustainability perspective. The advantage of the entropy method is that it reduces the subjective impact of decision makers and increases objectivity. In the second stage, the measurement of alternatives and ranking according to compromise solution (MARCOS) method is used to rank the 3PL providers according to their performance on the basis of these criteria. Sensitivity analysis and comparative analysis are implemented to validate the results. The current research work is devoted to the emerging research topic of sustainable development in the logistics industry and supply chain management. The proposed model identifies key performance indicators in the logistics industry and determines the most efficient 3PL providers. Consequently, the results show that the carbon dioxide emissions (20.50%) factor is the most important criterion for the competitiveness of global logistics companies. The results of this study can help inefficient 3PL providers make strategic decisions to improve their performance. However, this study only focuses on 15 companies due to a lack of data. The integration of these two techniques provides a novel way to evaluate global 3PL providers which has not been addressed in the logistics industry to date and as such remains a gap that needs to be investigated. Full article
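The entropy objective-weighting stage can be sketched as below; the 4 × 3 decision matrix is hypothetical (the study itself uses data from 15 providers and a larger criteria set), and the MARCOS ranking stage is not shown.

    import numpy as np

    def entropy_weights(X):
        """Objective criterion weights from a decision matrix X (alternatives x criteria)
        via the standard entropy method; positive data assumed."""
        P = X / X.sum(axis=0, keepdims=True)        # proportion of each alternative per criterion
        e = -(np.where(P > 0, P * np.log(P), 0.0)).sum(axis=0) / np.log(X.shape[0])
        d = 1.0 - e                                 # degree of divergence
        return d / d.sum()

    # hypothetical mini-matrix: 4 providers x 3 criteria (revenue, on-time rate, CO2 emissions index)
    X = np.array([[61.0, 0.96, 20.5],
                  [45.0, 0.93, 14.2],
                  [38.0, 0.97, 11.8],
                  [29.0, 0.90,  9.4]])
    print(np.round(entropy_weights(X), 3))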
22 pages, 7158 KiB  
Article
Study on the Stiffness and Dynamic Characteristics of a Bridge Approach Zone: Tests and Numerical Analyses
by Ping Hu, Wei Liu, Huo Liu, Leixue Wu, Yang Wang and Wei Guo
Mathematics 2023, 11(19), 4202; https://doi.org/10.3390/math11194202 - 8 Oct 2023
Viewed by 938
Abstract
This study focuses on the stiffness and dynamic characteristic rules of a bridge approach zone in a high-speed railway (HSR). Indoor and in situ tests were performed to explore the stiffness and dynamic characteristics of the roadbed filling. Based on the test results, an effective track-subgrade finite element model (FEM) of a high-speed train (HST) was established. The FEM simulated the train load and model boundaries based on the obtained loads and viscoelastic artificial boundaries. Suitable elements were then selected to simulate the various components of the system, and the constraint equations were established and solved using multi-point constraints. The model was verified by comparing the time–history curve characteristics, the frequency-domain characteristics, and the results obtained from different modeling methods with the measured results. The influence of stiffness on the dynamic characteristics of the bridge approach zone was subsequently analyzed based on the aforementioned tests and simulations. The results indicate that (i) the model produced reliable results using the proposed approach; (ii) the influence of train load on the embankment was generally reflected in the upper part of the structure, and thus, bed structures are recommended to be strengthened; and (iii) under stationarity, the stiffness ratio between the bridge and normal subgrade is recommended as 1:6, with a transition length of 25 m. Full article
35 pages, 968 KiB  
Review
From Constant to Rough: A Survey of Continuous Volatility Modeling
by Giulia Di Nunno, Kęstutis Kubilius, Yuliya Mishura and Anton Yurchenko-Tytarenko
Mathematics 2023, 11(19), 4201; https://doi.org/10.3390/math11194201 - 8 Oct 2023
Cited by 3 | Viewed by 2375
Abstract
In this paper, we present a comprehensive survey of continuous stochastic volatility models, discussing their historical development and the key stylized facts that have driven the field. Special attention is dedicated to fractional and rough methods: without advocating for either roughness or long memory, we outline the motivation behind them and characterize some landmark models. In addition, we briefly touch on the problem of VIX modeling and recent advances in the SPX-VIX joint calibration puzzle. Full article
(This article belongs to the Special Issue Probabilistic Models in Insurance and Finance)
40 pages, 10905 KiB  
Article
A Modified Gradient Search Rule Based on the Quasi-Newton Method and a New Local Search Technique to Improve the Gradient-Based Algorithm: Solar Photovoltaic Parameter Extraction
by Bushra Shakir Mahmood, Nazar K. Hussein, Mansourah Aljohani and Mohammed Qaraad
Mathematics 2023, 11(19), 4200; https://doi.org/10.3390/math11194200 - 8 Oct 2023
Cited by 1 | Viewed by 1282
Abstract
Harnessing solar energy efficiently via photovoltaic (PV) technology is pivotal for future sustainable energy. Accurate modeling of PV cells entails an optimization problem due to the multimodal and nonlinear characteristics of the cells. This study introduces the Multi-strategy Gradient-Based Algorithm (MAGBO) for the precise parameter estimation of solar PV systems. MAGBO incorporates a modified gradient search rule (MGSR) inspired by the quasi-Newton approach, a novel refresh operator (NRO) for improved solution quality, and a crossover mechanism balancing exploration and exploitation. Validated through CEC2021 test functions, MAGBO excelled in global optimization. To further validate and underscore the reliability of MAGBO, we utilized data from the PVM 752 GaAs thin-film cell and the STP6-40/36 module. The simulation parameters were discerned using 44 I-V pairs from the PVM 752 cell and diverse data from the STP6-40/36 module tested under different conditions. Consistency between simulated and observed I-V and P-V curves for the STM6-40/36 and PVM 752 models validated MAGBO’s accuracy. In application, MAGBO attained an RMSE of 9.8 × 10−4 for double-diode and single-diode modules. For Photowatt-PWP, STM6-40/36, and PVM 752 models, RMSEs were 2.4 × 10−3, 1.7 × 10−3, and 1.7 × 10−3, respectively. Against prevalent methods, MAGBO exhibited unparalleled precision and reliability, advocating its superior utility for intricate PV data analysis. Full article
21 pages, 908 KiB  
Article
Dynamic Behavior of a Stochastic Avian Influenza Model with Two Strains of Zoonotic Virus
by Lili Kong, Luping Li, Shugui Kang and Fu Chen
Mathematics 2023, 11(19), 4199; https://doi.org/10.3390/math11194199 - 8 Oct 2023
Viewed by 997
Abstract
In this paper, a stochastic avian influenza model with two different pathogenic human–avian viruses is studied. The model analyzes the spread of the avian influenza virus from poultry populations to human populations in a random environment. The dynamic behavior of the stochastic avian influenza model is analyzed. Firstly, the existence and uniqueness of a global positive solution are obtained. Secondly, under the condition of high pathogenic virus extinction, the persistence in the mean and extinction of the infected avian population with a low pathogenic virus is analyzed. Thirdly, the sufficient conditions for the existence and uniqueness of the ergodic stationary distribution in the stochastic avian influenza model are derived. We find the threshold of the stochastic model to determine whether the disease spreads when the white noise is small. The analysis results show that random white noise is effective for disease control. Finally, the theoretical results are verified by numerical simulation, and the numerical simulation analysis is carried out for the cases that cannot be theoretically deduced. Full article
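Stochastic epidemic models of this type are usually simulated with an Euler–Maruyama scheme; the sketch below integrates a one-dimensional toy infected-fraction SDE with multiplicative white noise purely as an illustration of the technique (the paper's two-strain avian–human model has more compartments).

    import numpy as np

    def euler_maruyama(x0, drift, diffusion, T=50.0, n=5000, seed=5):
        """Generic Euler-Maruyama integration of dX = drift(X) dt + diffusion(X) dW."""
        rng = np.random.default_rng(seed)
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            dW = rng.normal(0.0, np.sqrt(dt))
            x[k + 1] = max(x[k] + drift(x[k]) * dt + diffusion(x[k]) * dW, 0.0)
        return x

    # toy infected-fraction SDE: dI = I(beta - gamma - beta*I) dt + sigma*I dW
    beta, gamma, sigma = 0.6, 0.3, 0.15
    path = euler_maruyama(0.01, lambda i: i * (beta - gamma - beta * i), lambda i: sigma * i)
    print(round(float(path[-1]), 3))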
17 pages, 769 KiB  
Article
Asymptotic Sample Size for Common Test of Relative Risk Ratios in Stratified Bilateral Data
by Keyi Mou, Zhiming Li and Changxing Ma
Mathematics 2023, 11(19), 4198; https://doi.org/10.3390/math11194198 - 8 Oct 2023
Viewed by 949
Abstract
In medical clinical studies, various tests usually relate to the sample size. This paper proposes several methods to calculate sample sizes for a common test of relative risk ratios in stratified bilateral data. Under a prespecified significance level and power, we derive some explicit formulae and an algorithm for the sample size. The sample sizes of the stratified intra-class model are obtained from the likelihood ratio, score, and Wald-type tests. Under pooled data, we calculate the sample size based on the Wald-type test and its log-transformation form. Numerical simulations show that the proposed sample sizes have empirical power close to the prespecified value for given significance levels. The sample sizes from the iterative method are more stable and effective. Full article
(This article belongs to the Section Probability and Statistics)
16 pages, 6384 KiB  
Article
Resultant Normal Contact Force-Based Contact Friction Model for the Combined Finite-Discrete Element Method and Its Validation
by He Liu, Zuliang Shao, Qibin Lin, Yiming Lei, Chenglei Du and Yucong Pan
Mathematics 2023, 11(19), 4197; https://doi.org/10.3390/math11194197 - 8 Oct 2023
Cited by 1 | Viewed by 1381
Abstract
In the conventional FDEM (Combined Finite and Discrete Element Method), each contact pair might have multiple contact points where friction forces are applied, leading to non-unique friction force assignments and potentially introducing computational errors. This study introduces a new contact friction algorithm for FDEM based on the resultant normal contact force. This method necessitates determining the friction force at a unique equivalent contact point, thereby significantly simplifying the computational flow and reducing memory usage. A series of numerical tests are performed to validate the effectiveness of the proposed contact model. Using collision and block sliding tests, the proposed contact friction model is verified to be able to accurately capture the frictional effect between discrete bodies and circumvent the problematic kinetic energy dissipation issue associated with the original contact friction algorithm. For the Brazilian splitting and uniaxial compression tests, the simulated results closely align with those generated using the original contact friction algorithm and match the experimental measurements well, demonstrating the applicability of the proposed algorithm in fracturing analysis. Furthermore, by using the proposed contact friction algorithm, a computational efficiency enhancement of 8% in contact force evaluation can be achieved. Full article
19 pages, 4500 KiB  
Article
An Evolutionary Game-Theoretic Approach to Unmanned Aerial Vehicle Network Target Assignment in Three-Dimensional Scenarios
by Yifan Gao, Lei Zhang, Chuanyue Wang, Xiaoyuan Zheng and Qianling Wang
Mathematics 2023, 11(19), 4196; https://doi.org/10.3390/math11194196 - 8 Oct 2023
Cited by 4 | Viewed by 1281
Abstract
Target assignment has been a hot topic of research in the academic and industrial communities for swarms of multiple unmanned aerial vehicles (multi-UAVs). Traditional methods mainly focus on cooperative target assignment in two-dimensional planes and ignore three-dimensional scenarios for the multi-UAV network target assignment problem. This paper proposes a method for target assignment in three-dimensional scenarios based on evolutionary game theory to achieve cooperative targeting for multi-UAVs, significantly improving operational efficiency and achieving maximum utility. Firstly, we construct an evolutionary game model including game participants, a tactical strategy space, a payoff matrix, and a strategy selection probability space. Then, a multi-level information fusion algorithm is designed to evaluate the overall attack effectiveness of multi-UAVs against multiple targets. The replicator equation is leveraged to obtain the evolutionarily stable strategy (ESS) and dynamically update the optimal strategy. Finally, a typical scenario analysis and an effectiveness experiment are carried out on the RflySim platform to analyze the calculation process and verify the effectiveness of the proposed method. The results show that the proposed method can effectively provide a target assignment solution for multi-UAVs. Full article
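The replicator dynamics mentioned above can be written in a few lines; the 3 × 3 payoff matrix over targeting strategies is hypothetical, and the multi-level information-fusion payoff evaluation of the paper is not reproduced.

    import numpy as np

    def replicator_step(x, A, dt=0.01):
        """One Euler step of the replicator equation dx_i/dt = x_i((Ax)_i - x'Ax)."""
        fitness = A @ x
        x = x + dt * x * (fitness - x @ fitness)
        return x / x.sum()                       # renormalize against numerical drift

    # hypothetical payoff matrix over three targeting strategies
    A = np.array([[1.0, 2.0, 0.5],
                  [0.8, 1.0, 2.2],
                  [1.5, 0.6, 1.0]])
    x = np.full(3, 1 / 3)
    for _ in range(5000):
        x = replicator_step(x, A)
    print(np.round(x, 3))                        # approximate evolutionarily stable mix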
10 pages, 294 KiB  
Article
Spatial Behavior of Solutions in Linear Thermoelasticity with Voids and Three Delay Times
by Manuela Carini and Vittorio Zampoli
Mathematics 2023, 11(19), 4195; https://doi.org/10.3390/math11194195 - 8 Oct 2023
Viewed by 817
Abstract
This brief contribution aims to complement a study of well-posedness started by the same authors in 2020, showing—for that same mathematical model—the existence of a domain of influence of external data. The object of investigation, we recall, is a linear thermoelastic model with a porous matrix modeled on the basis of the Cowin–Nunziato theory, and for which the heat exchange phenomena are intended to obey a time-differential heat transfer law with three delay times. We therefore consider, without reporting it explicitly, the (suitably adapted) initial-boundary value problem formulated at that time, as well as some analytical techniques employed to handle it in order to address the uniqueness and continuous dependence questions. Here, a domain of influence theorem is proven, showing the spatial behavior of the solution in a cylindrical domain, by activating the hypotheses that make the model thermodynamically consistent. The theorem, in detail, demonstrates that for a finite time t>0, the assigned external (thermomechanical) actions generate no disturbance outside a bounded domain contained within the cylindrical region of interest. The length of the work is deliberately kept to a minimum, having opted where possible for bibliographic citations in favor of greater reading fluency. Full article
(This article belongs to the Section Mathematical Physics)
17 pages, 9119 KiB  
Article
Learning Wasserstein Contrastive Color Histogram Representation for Low-Light Image Enhancement
by Zixuan Sun, Shenglong Hu, Huihui Song and Peng Liang
Mathematics 2023, 11(19), 4194; https://doi.org/10.3390/math11194194 - 8 Oct 2023
Cited by 2 | Viewed by 1315
Abstract
The goal of low-light image enhancement (LLIE) is to enhance perception to restore normal-light images. The primary emphasis of earlier LLIE methods was on enhancing the illumination while paying less attention to the color distortions and noise in the dark. In comparison to the ground truth, the restored images frequently exhibit inconsistent color and residual noise. To this end, this paper introduces a Wasserstein contrastive regularization method (WCR) for LLIE. The WCR regularizes the color histogram (CH) representation of the restored image to keep its color consistency while removing noise. Specifically, the WCR contains two novel designs including a differentiable CH module (DCHM) and a WCR loss. The DCHM serves as a modular component that can be easily integrated into the network to enable end-to-end learning of the image CH. Afterwards, to ensure color consistency, we utilize the Wasserstein distance (WD) to quantify the resemblance of the learnable CHs between the restored image and the normal-light image. Then, the regularized WD is used to construct the WCR loss, which is a triplet loss and takes the normal-light images as positive samples, the low-light images as negative samples, and the restored images as anchor samples. The WCR loss pulls the anchor samples closer to the positive samples and simultaneously pushes them away from the negative samples so as to help the anchors remove the noise in the dark. Notably, the proposed WCR method was only used for training, and was shown to achieve high performance and high speed inference using lightweight networks. Therefore, it is valuable for real-time applications such as night automatic driving and night reversing image enhancement. Extensive evaluations on benchmark datasets such as LOL, FiveK, and UIEB showed that the proposed WCR method achieves superior performance, outperforming existing state-of-the-art methods. Full article
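For intuition about the Wasserstein distance between color histograms that underlies the WCR loss, the sketch below computes the one-dimensional Wasserstein-1 distance between two 256-bin intensity histograms through their CDFs; the histograms are synthetic and the paper's differentiable histogram module is not reproduced.

    import numpy as np

    def wasserstein_1d(h1, h2, bin_width=1.0):
        """Wasserstein-1 distance between two histograms on the same bins:
        the L1 distance between their normalized CDFs, scaled by the bin width."""
        p = h1 / h1.sum()
        q = h2 / h2.sum()
        return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum() * bin_width)

    rng = np.random.default_rng(4)
    dark = np.histogram(rng.normal(60, 20, 10000).clip(0, 255), bins=256, range=(0, 256))[0]
    light = np.histogram(rng.normal(140, 30, 10000).clip(0, 255), bins=256, range=(0, 256))[0]
    print(round(wasserstein_1d(dark.astype(float), light.astype(float)), 1))   # about the mean shift (~80)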