Search Results (79)

Search Parameters:
Keywords = lognormal density distribution

16 pages, 495 KB  
Article
Slomads Rising: Structural Shifts in U.S. Airbnb Stay Lengths During and After the Pandemic (2019–2024)
by Harrison Katz and Erica Savage
Tour. Hosp. 2025, 6(4), 182; https://doi.org/10.3390/tourhosp6040182 - 17 Sep 2025
Abstract
Background. Length of stay, operationalized here as nights per booking (NPB), is a first-order driver of yield, labor planning, and environmental pressure. The COVID-19 pandemic and the rise of long-stay remote workers (often labeled "slomads", a slow-travel subset of digital nomads) plausibly altered stay-length distributions, yet national, booking-weighted evidence for the United States remains scarce. Purpose. This study quantifies pandemic-era and post-pandemic shifts in U.S. Airbnb stay lengths and identifies whether higher averages reflect (i) more long stays or (ii) longer long stays. Methods. Using every U.S. Airbnb reservation created between 1 January 2019 and 31 December 2024 (collapsed to booking-count weights), the analysis combines weighted descriptive statistics; parametric density fitting (Gamma, log-normal, Poisson–lognormal); weighted negative-binomial regression with month effects; a two-part (logit + NB) model for ≥28-night stays; and a monthly SARIMA(0,1,1)(0,1,1)₁₂ model with pandemic-phase indicators. Results. Mean NPB rose from 3.68 pre-COVID-19 to 4.36 during restrictions and then stabilized near 4.07 post-2021 (≈10% above 2019); the booking-weighted median shifted permanently from 2 to 3 nights. A two-parameter log-normal fits best by wide AIC/BIC margins, consistent with a heavy-tailed distribution. Negative-binomial estimates imply post-vaccine bookings are 6.5% shorter than restriction-era bookings, while pre-pandemic bookings are 16% shorter. In a two-part (threshold) model at 28 nights, the booking share of month-plus stays rose from 1.43% (pre) to 2.72% (restriction) and settled at 2.04% (post), whereas the conditional mean among long stays remained in the mid-to-high 50s (≈55–60 nights) and varied only modestly across phases. Hence, a higher average NPB is driven primarily by a greater prevalence of month-plus bookings. A seasonal ARIMA model with pandemic-phase dummies improves fit over a dummy-free specification (likelihood ratio = 8.39, df = 2, p = 0.015), indicating a structural level shift rather than higher-order dynamics. Contributions. The paper provides national-scale, booking-weighted evidence that U.S. short-term-rental stays became durably longer and more heavy-tailed after 2020, filling a gap in the tourism and revenue-management literature. Implications. Practitioners should adopt heavy-tailed pricing and inventory policies and include explicit regime indicators in forecasting; destination policy should reflect the larger month-plus segment.
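As a minimal sketch of the density-comparison step, the following fits Gamma and two-parameter log-normal models to synthetic nights-per-booking data (standing in for the proprietary reservation records, and ignoring the paper's booking-count weights) and ranks them by AIC/BIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for nights per booking; real data are booking-weighted
nights = np.round(rng.lognormal(mean=1.1, sigma=0.7, size=20_000)).clip(1)

def aic_bic(loglik, k, n):
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

n = len(nights)
# Gamma fit with location pinned at 0, as for positive-valued data
a, loc, scale = stats.gamma.fit(nights, floc=0)
ll_gamma = stats.gamma.logpdf(nights, a, loc, scale).sum()
# Two-parameter log-normal fit (floc=0)
s, loc2, scale2 = stats.lognorm.fit(nights, floc=0)
ll_lognorm = stats.lognorm.logpdf(nights, s, loc2, scale2).sum()

for name, ll in [("gamma", ll_gamma), ("lognormal", ll_lognorm)]:
    aic, bic = aic_bic(ll, 2, n)
    print(f"{name:9s}  AIC={aic:12.1f}  BIC={bic:12.1f}")
```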

24 pages, 7349 KB  
Article
Return Level Prediction with a New Mixture Extreme Value Model
by Emrah Altun, Hana N. Alqifari and Kadir Söyler
Mathematics 2025, 13(17), 2705; https://doi.org/10.3390/math13172705 - 22 Aug 2025
Abstract
The generalized Pareto distribution is frequently used for modeling extreme values above an appropriate threshold level. Because determining an appropriate threshold value is difficult, mixture extreme value models have risen to prominence. In this study, mixture extreme value models based on the exponentiated Pareto distribution are proposed. The Weibull, gamma, and log-normal models are used as bulk densities. The parameter estimates of the proposed models are obtained using the maximum likelihood approach. Two approaches, maximizing the log-likelihood and maximizing the Kolmogorov–Smirnov p-value, are used to determine the appropriate threshold value. The effectiveness of these methods is compared using simulation studies. The proposed models are compared with other mixture models through an application study on earthquake data. The GammaEP web application was developed to ensure the reproducibility of the results and the usability of the proposed model.
(This article belongs to the Special Issue Mathematical Modelling and Applied Statistics)
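A hedged sketch of the p-value-based threshold search on synthetic data: the bulk here is log-normal and the tail a generalized Pareto (the paper's exponentiated-Pareto tail would slot into the same splice), scored at each candidate threshold by a Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.sort(rng.lognormal(0.5, 0.8, 5_000))

def spliced_cdf(q, u, bulk, c, scale, phi):
    # Bulk below the threshold u, GPD tail above, tail fraction phi
    q = np.asarray(q, dtype=float)
    lower = (1 - phi) * bulk.cdf(np.minimum(q, u)) / bulk.cdf(u)
    upper = (1 - phi) + phi * stats.genpareto.cdf(
        np.maximum(q - u, 0), c, loc=0, scale=scale)
    return np.where(q <= u, lower, upper)

best = None
for u in np.quantile(x, [0.85, 0.90, 0.95]):   # candidate thresholds
    phi = np.mean(x > u)
    s, _, sc = stats.lognorm.fit(x[x <= u], floc=0)
    bulk = stats.lognorm(s, 0, sc)
    c, _, scale = stats.genpareto.fit(x[x > u] - u, floc=0)
    ks = stats.kstest(x, lambda q: spliced_cdf(q, u, bulk, c, scale, phi))
    if best is None or ks.pvalue > best[1]:
        best = (u, ks.pvalue)
print(f"chosen threshold u={best[0]:.2f}  (KS p-value={best[1]:.3f})")
```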

24 pages, 8202 KB  
Article
Study on the Empirical Probability Distribution Model of Soil Factors Influencing Seismic Liquefaction
by Zhengquan Yang, Meng Fan, Jingjun Li, Xiaosheng Liu, Jianming Zhao and Hui Yang
Buildings 2025, 15(16), 2861; https://doi.org/10.3390/buildings15162861 - 13 Aug 2025
Abstract
One of the important tasks in sand liquefaction assessment is to evaluate the likelihood of soil liquefaction. However, most liquefaction assessment methods treat the influencing factors as deterministic and fail to calculate the liquefaction probability by systematically considering the probability distributions of soil factors. Based on field liquefaction investigation cases, probability distribution fitting and hypothesis testing were carried out. For the variables that failed the fitting and testing, kernel density estimation was conducted. Methods for calculating the liquefaction probability using a Monte Carlo simulation with the fitted probability distributions were then proposed. The results indicated that, for (N₁)₆₀, the SM, S, and GM soils followed a Gaussian distribution while CL and ML followed a lognormal distribution; for FC, SM and GM followed a lognormal distribution; and for d₅₀, ML and S followed Gaussian and lognormal distributions, respectively. The distribution curves of the other factors can be obtained by kernel density estimation. Calculating the liquefaction probability by Monte Carlo simulation over these variable distributions proved feasible: the computed probability was similar to that of an existing probability model and consistent with actual observations. Regional sample differences were accounted for by introducing a normally distributed error term, which improved the accuracy of the liquefaction probability to a certain extent. The method proposed here can compute the liquefaction probability at a specific seismic level or the total probability over a future period. It provides a data-driven basis for realistically estimating the likelihood of soil liquefaction under seismic loading and contributes to site classification, liquefaction potential zoning, and ground-improvement decisions in seismic design. Its practical value for seismic hazard mapping and performance-based design in earthquake-prone regions was also demonstrated.
(This article belongs to the Section Building Structures)
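The Monte Carlo idea can be illustrated as below; the distribution parameters, the KDE fallback, and especially the resistance relation are illustrative placeholders, not the paper's calibrated model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 100_000
# (N1)60 ~ Gaussian, fines content FC ~ log-normal (per the reported fits);
# all parameter values here are assumed for illustration
n160 = rng.normal(12.0, 4.0, n).clip(1)
fc = rng.lognormal(np.log(15.0), 0.5, n)

# d50 had no acceptable parametric fit for some soils: resample from a KDE
d50_obs = np.array([0.12, 0.20, 0.25, 0.30, 0.18, 0.22, 0.35, 0.15])
d50 = stats.gaussian_kde(d50_obs).resample(n, seed=3)[0]

# Placeholder cyclic resistance vs. demand (illustrative coefficients only)
crr = 0.05 + 0.006 * n160 + 0.001 * fc.clip(max=35) + 0.02 * d50
csr = 0.15  # demand at the chosen seismic level
p_liq = np.mean(crr < csr)  # fraction of samples in the failure domain
print(f"estimated liquefaction probability: {p_liq:.3f}")
```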

23 pages, 5135 KB  
Article
Strategic Multi-Stage Optimization for Asset Investment in Electricity Distribution Networks Under Load Forecasting Uncertainties
by Clainer Bravin Donadel
Eng 2025, 6(8), 186; https://doi.org/10.3390/eng6080186 - 5 Aug 2025
Abstract
Electricity distribution systems face increasing challenges due to demand growth, regulatory requirements, and the integration of distributed generation. In this context, distribution companies must make strategic and well-supported investment decisions, particularly in asset reinforcement actions such as reconductoring. This paper presents a multi-stage methodology to optimize reconductoring investments under load forecasting uncertainties. The approach combines a decomposition strategy with Monte Carlo simulation to capture demand variability. By discretizing a lognormal probability density function and selecting the largest loads in the network, the methodology balances computational feasibility with modeling accuracy. The optimization model employs exhaustive search techniques independently for each network branch, ensuring precise and consistent investment decisions. Tests conducted on the IEEE 123-bus feeder consider both operational and regulatory constraints from the Brazilian context. Results show that uncertainty-aware planning leads to a narrow investment range (between USD 55,108 and USD 66,504), highlighting the necessity of reconductoring regardless of the demand scenario. A comparative analysis of representative cases reveals consistent interventions, changes in conductor selection, and schedule adjustments based on load conditions. The proposed methodology enables flexible, cost-effective, and regulation-compliant investment planning, offering valuable insights for utilities seeking to enhance network reliability and performance while managing demand uncertainties.
(This article belongs to the Section Electrical and Electronic Engineering)
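The discretization step can be sketched as follows, assuming an illustrative mean load and COV: the lognormal demand PDF is reduced to a handful of equal-probability quantile scenarios that would then feed the per-branch exhaustive search.

```python
import numpy as np
from scipy import stats

mu_kw, cov = 120.0, 0.25                 # assumed mean load (kW) and COV
sigma2 = np.log(1 + cov**2)              # log-variance matching the COV
mu = np.log(mu_kw) - sigma2 / 2          # log-mean matching the mean
dist = stats.lognorm(s=np.sqrt(sigma2), scale=np.exp(mu))

k = 5                                    # number of load scenarios
p = (np.arange(k) + 0.5) / k             # midpoints of k equal-probability bins
scenarios = dist.ppf(p)                  # representative loads (kW)
weights = np.full(k, 1.0 / k)

print("scenario loads (kW):", np.round(scenarios, 1))
print("weighted mean:", round(float(scenarios @ weights), 1), "vs target", mu_kw)
```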

23 pages, 8911 KB  
Article
Porosity Analysis and Thermal Conductivity Prediction of Non-Autoclaved Aerated Concrete Using Convolutional Neural Network and Numerical Modeling
by Alexey N. Beskopylny, Evgenii M. Shcherban’, Sergey A. Stel’makh, Diana Elshaeva, Andrei Chernil’nik, Irina Razveeva, Ivan Panfilov, Alexey Kozhakin, Emrah Madenci, Ceyhun Aksoylu and Yasin Onuralp Özkılıç
Buildings 2025, 15(14), 2442; https://doi.org/10.3390/buildings15142442 - 11 Jul 2025
Abstract
Currently, the visual study of the structure of building materials and products is increasingly supplemented by intelligent algorithms based on computer vision technologies. These algorithms are powerful tools for the visual diagnostic analysis of materials and are of great importance in analyzing the quality of production processes and predicting mechanical properties. This paper considers the analysis of the visual structure of non-autoclaved aerated concrete products, namely their porosity, using the YOLOv11 convolutional neural network, with a subsequent prediction of one of their most important properties: thermal conductivity. The object of this study is a database of images of aerated concrete samples obtained under laboratory conditions and identical photography conditions, supplemented using the authors' augmentation algorithm (up to 100 photographs). The porosity analysis, which yields a log-normal distribution of pore sizes, shows that the developed computer vision model analyzes the porous structure of the material with high accuracy: Precision = 0.86 and Recall = 0.88 for detection, and Precision = 0.86 and Recall = 0.91 for segmentation. The Hellinger and Kolmogorov–Smirnov statistical criteria, used to test whether the real distribution and the one obtained by the intelligent algorithm belong to the same population, show high significance. Subsequent modeling of the material in the ANSYS 2024 R2 Material Designer module, taking into account the stochastic nature of the pore sizes, allowed the main characteristics, thermal conductivity and density, to be predicted. Comparison of the predicted results with real data showed an error of less than 7%.
(This article belongs to the Section Building Materials, and Repair & Renovation)
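A small sketch of the reported statistical comparison, with synthetic pore-size samples standing in for the manually measured and model-detected ones: the Hellinger distance on binned histograms plus a two-sample Kolmogorov–Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
pores_manual = rng.lognormal(np.log(0.80), 0.45, 1_200)  # mm, synthetic
pores_model = rng.lognormal(np.log(0.82), 0.47, 1_400)   # mm, synthetic

# Hellinger distance between binned probability histograms
bins = np.histogram_bin_edges(np.concatenate([pores_manual, pores_model]), 40)
p, _ = np.histogram(pores_manual, bins, density=True)
q, _ = np.histogram(pores_model, bins, density=True)
w = np.diff(bins)  # bin widths: density * width = bin probability
hellinger = np.sqrt(0.5 * np.sum((np.sqrt(p * w) - np.sqrt(q * w)) ** 2))

ks = stats.ks_2samp(pores_manual, pores_model)
print(f"Hellinger distance: {hellinger:.3f}")
print(f"KS statistic: {ks.statistic:.3f}, p-value: {ks.pvalue:.3f}")
```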

26 pages, 2368 KB  
Article
Connectivity Analysis in VANETs with Dynamic Ranges
by Kenneth Okello, Elijah Mwangi and Ahmed H. Abd El-Malek
Telecom 2025, 6(2), 33; https://doi.org/10.3390/telecom6020033 - 21 May 2025
Abstract
Vehicular Ad Hoc Networks (VANETs) serve as critical platforms for inter-vehicle communication within constrained ranges, facilitating information exchange. However, the inherent challenge of dynamic network topology poses persistent disruptions, hindering safety and emergency information exchange. An alternative generalised statistical model of the channel is proposed to capture the varying transmission range of a vehicle node. The generalised framework uses simple wireless fading channel models (Weibull, Nakagami-m, Rayleigh, and lognormal) together with large-vehicle obstructions to model the transmission range. This approach simplifies the connectivity analysis of vehicular nodes in environments where communication links are highly unstable owing to obstruction by large vehicles and varying speeds. The connectivity probability is computed for two traffic models, free-flow and synchronized Gaussian unitary ensemble (GUE), to simulate vehicle dynamics on a multi-lane road, enhancing the accuracy of VANET modeling. Results show that the dynamic range distribution matters most at shorter inter-vehicle distances and that connectivity probability drops as the number of obstructing vehicles grows. These findings offer valuable insights into the effects of parameters such as path loss exponents and vehicle density on connectivity probability, providing guidance for optimizing VANETs in diverse traffic scenarios.
(This article belongs to the Special Issue Performance Criteria for Advanced Wireless Communications)
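One way to picture the dynamic-range idea, under assumed constants that are not the paper's: the effective range R is log-normally shadowed and degraded on obstructed links, and free-flow link connectivity with exponential inter-vehicle spacings is the expectation of 1 − exp(−λR) over that random range.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
lam = 1 / 80.0                      # vehicle density: mean spacing 80 m

r_nominal = 250.0                   # nominal transmission range, metres
shadow_db = rng.normal(0, 4.0, n)   # log-normal shadowing, sigma = 4 dB
blocked = rng.random(n) < 0.3       # 30% of links obstructed (illustrative)
# Range shrinks under shadowing and halves behind a large vehicle
r = r_nominal * 10 ** (shadow_db / 20) * np.where(blocked, 0.5, 1.0)

# P(link) = E[P(spacing <= R)] with exponential spacings of rate lam
p_link = np.mean(1 - np.exp(-lam * r))
print(f"mean link-connectivity probability: {p_link:.3f}")
```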

24 pages, 1566 KB  
Article
Finite Mixture Models: A Key Tool for Reliability Analyses
by Marko Nagode, Simon Oman, Jernej Klemenc and Branislav Panić
Mathematics 2025, 13(10), 1605; https://doi.org/10.3390/math13101605 - 14 May 2025
Abstract
As system complexity increases, accurately capturing true system reliability becomes increasingly challenging. Rather than relying on exact analytical solutions, it is often more practical to use approximations based on observed time-to-failure data. Finite mixture models provide a flexible framework for approximating arbitrary probability density functions and are well suited for reliability modelling. A critical factor in achieving accurate approximations is the choice of parameter estimation algorithm. The REBMIX&EM algorithm, implemented in the rebmix R package, generally performs well but struggles when components of the finite mixture model overlap. To address this issue, we revisit key steps of the REBMIX algorithm and propose improvements. With these improvements, we derive parameter estimators for finite mixture models based on three parametric families commonly applied in reliability analysis: lognormal, gamma, and Weibull. We conduct a comprehensive simulation study across four system configurations, using lognormal, gamma, and Weibull distributions with varying parameters as system component time-to-failure distributions. Performance is benchmarked against five widely used R packages for finite mixture modelling. The results confirm that our proposal improves both estimation accuracy and computational efficiency, consistently outperforming existing packages. We also demonstrate that finite mixture models can approximate analytical reliability solutions with fewer components than the actual number of system components. Our proposals are also validated using a practical example from Backblaze hard drive data. All improvements are included in the open-source rebmix R package, with complete source code provided to support the broader adoption of the R programming language in reliability analysis.
(This article belongs to the Section E1: Mathematics and Computer Science)
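Not the paper's REBMIX&EM; as a minimal alternative sketch, a log-normal mixture in time is a Gaussian mixture in log-time, so a generic EM fit recovers the component weights and parameters, from which the mixture reliability follows.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
t = np.concatenate([rng.lognormal(3.0, 0.3, 700),     # early-failure mode
                    rng.lognormal(4.5, 0.2, 1_300)])  # wear-out mode

# EM fit in log-time: a log-normal mixture in t is a Gaussian mixture in ln t
gm = GaussianMixture(n_components=2, random_state=0).fit(np.log(t)[:, None])
mus, sds = gm.means_.ravel(), np.sqrt(gm.covariances_.ravel())
for w, m, s in zip(gm.weights_, mus, sds):
    print(f"weight={w:.2f}  log-mean={m:.2f}  log-sd={s:.2f}")

# Mixture reliability R(t) = sum_i w_i * (1 - Phi((ln t - mu_i) / sigma_i))
def reliability(tt):
    return float(np.sum(gm.weights_ * (1 - norm.cdf((np.log(tt) - mus) / sds))))

print(f"R(60) = {reliability(60.0):.3f}")
```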

32 pages, 1098 KB  
Article
Estimation and Bayesian Prediction for New Version of Xgamma Distribution Under Progressive Type-II Censoring
by Ahmed R. El-Saeed, Molay Kumar Ruidas and Ahlam H. Tolba
Symmetry 2025, 17(3), 457; https://doi.org/10.3390/sym17030457 - 18 Mar 2025
Abstract
This article introduces a new continuous lifetime distribution within the Gamma family, called the induced Xgamma distribution, and explores its statistical properties. Estimation and prediction for the proposed distribution are investigated using Bayesian and non-Bayesian approaches under progressively Type-II censored data. For the non-Bayesian approach, the maximum likelihood and maximum product spacing methods are applied and their performance is evaluated. In the Bayesian framework, numerical approximation via the Metropolis–Hastings algorithm within Markov chain Monte Carlo is employed under different loss functions, including squared error, general entropy, and LINEX loss. Interval estimation methods, such as asymptotic confidence intervals, log-normal asymptotic confidence intervals, and highest posterior density intervals, are also developed. A comprehensive numerical study using Monte Carlo simulations is conducted to evaluate the performance of the proposed point and interval estimation methods under progressive Type-II censoring. Furthermore, the applicability and effectiveness of the proposed distribution are demonstrated through three real-world datasets from the fields of medicine and engineering.
(This article belongs to the Special Issue Bayesian Statistical Methods for Forecasting)
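A generic sketch of the MCMC step, using an exponential likelihood as a stand-in (the induced Xgamma likelihood would replace it): random-walk Metropolis–Hastings on the log of the rate parameter, with the posterior mean as the squared-error-loss estimate.

```python
import numpy as np

rng = np.random.default_rng(7)
t = rng.exponential(10.0, 40)        # observed lifetimes (synthetic)
cens = rng.random(40) < 0.25         # censoring flags (stand-in scheme)

def log_post(lam):                   # Gamma(1,1) prior on the rate lam
    if lam <= 0:
        return -np.inf
    # Censored units contribute log-survival, others the log-density
    ll = np.sum(np.where(cens, -lam * t, np.log(lam) - lam * t))
    return ll - lam                  # + log prior, up to a constant

chain, lam = [], 0.1
lp = log_post(lam)
for _ in range(20_000):
    prop = lam * np.exp(0.3 * rng.normal())   # log-scale random walk
    lp_prop = log_post(prop)
    # MH acceptance including the Jacobian (prop/lam) of the log walk
    if np.log(rng.random()) < lp_prop - lp + np.log(prop / lam):
        lam, lp = prop, lp_prop
    chain.append(lam)

post = np.array(chain[5_000:])       # discard burn-in
print(f"posterior mean (squared-error loss): {post.mean():.4f}")
```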

20 pages, 4857 KB  
Article
From Battlefield to Building Site: Probabilistic Analysis of UXO Penetration Depth for Infrastructure Resilience
by Boules N. Morkos, Magued Iskander, Mehdi Omidvar and Stephan Bless
Appl. Sci. 2025, 15(6), 3259; https://doi.org/10.3390/app15063259 - 17 Mar 2025
Abstract
Remediation of formerly used war zones requires knowledge of the depth of burial (DoB) of unexploded ordnances (UXOs). The DoB can vary greatly depending on soil and ballistic conditions and their associated uncertainties. In this study, the well-known physics-based Poncelet equation is used to set a framework for stochastic prediction of the DoB of munitions in sandy, clayey sand, and clayey sediments using Monte Carlo simulations (MCSs). First, the coefficients of variation (COVs) of the empirical parameters affecting the model were computed, for the first time, from published experimental data. Second, the behavior of both normal and lognormal distributions was investigated, and both were found to yield comparable DoB predictions for COVs below 30%; a lognormal distribution was nonetheless preferred to avoid sampling negative values, since the COVs of the studied parameters can easily exceed this threshold. Third, the performance of several MCS sampling techniques in predicting the DoB was explored, including the Pseudorandom Generator (PRG), Latin Hypercube Sampling (LHS), and the Gaussian Process Response Surface Method (GP_RSM). The different sampling techniques produced similar DoB predictions for each soil type, but GP_RSM was the most computationally efficient. Finally, a sensitivity analysis was conducted to determine the contribution of each random variable to the predicted DoB. Uncertainty in the density, drag coefficient, and bearing coefficient dominated the DoB in sandy soil, while uncertainty in the bearing coefficient controlled the DoB in clayey sand. In clayey soil, all variables under various distribution conditions produced approximately identical predictions, with no single variable dominant. Monte Carlo simulation with GP_RSM sampling from lognormally distributed effective variables is recommended for predicting the DoB in soils with high COVs.
(This article belongs to the Special Issue Infrastructure Resilience Analysis)
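A hedged sketch of the stochastic framework, assuming the textbook Poncelet closed form z = m/(2Acρ) · ln(1 + cρv₀²/r) and illustrative parameter values (the paper's calibrated coefficients are not reproduced here); the drag and bearing terms are sampled log-normally, the paper's preferred choice at high COVs since it rules out negative draws.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

def lognormal_from_mean_cov(mean, cov, size):
    # Log-normal samples matching a target mean and coefficient of variation
    s2 = np.log(1 + cov**2)
    return rng.lognormal(np.log(mean) - s2 / 2, np.sqrt(s2), size)

m, A, v0 = 10.0, 0.008, 300.0                      # kg, m^2, m/s (assumed)
rho = lognormal_from_mean_cov(1_900.0, 0.10, n)    # sediment density, kg/m^3
c = lognormal_from_mean_cov(0.8, 0.35, n)          # drag coefficient
r = lognormal_from_mean_cov(30e6, 0.40, n)         # bearing resistance, Pa

# Poncelet-type depth of burial for each Monte Carlo draw
dob = m / (2 * A * c * rho) * np.log(1 + c * rho * v0**2 / r)
print(f"median DoB = {np.median(dob):.2f} m, "
      f"95th percentile = {np.percentile(dob, 95):.2f} m")
```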

14 pages, 6461 KB  
Article
The Application of a Joint Distribution of Significant Wave Heights and Peak Wave Periods in the Northwestern South China Sea
by Gongpeng Liu, Qunan Ouyang, Zhanyuan He and Na Zhang
J. Mar. Sci. Eng. 2025, 13(3), 570; https://doi.org/10.3390/jmse13030570 - 14 Mar 2025
Abstract
A joint distribution of significant wave heights (Hₛ) and peak wave periods (Tₚ) in the northwestern South China Sea is constructed using a conditional distribution model. An unstructured triangular mesh wave model covering the northwestern South China Sea is established based on the third-generation spectral wave model SWAN. The wave model has been extensively validated against field data and was run from 1979 to 2020 to generate a sufficiently long record of hourly Hₛ and Tₚ. Four probability density functions (Normal, Lognormal, Gamma, and 3P Weibull) are considered for the marginal distribution of Hₛ. The results show that the 3P Weibull distribution fits the marginal distribution of Hₛ better than the other three. Three combinations of dependence functions (μ and σ), namely power3 and exp3, insquare2 and asymdecrease3, and logistics4 and alpha3, are used to build Normal and Lognormal conditional distributions for Tₚ. The estimated dependence functions and the corresponding fitted Tₚ demonstrate that μ and σ based on power3 and exp3 perform best in producing the conditional distribution of Tₚ. In addition, environmental contours derived by the inverse first-order reliability method (IFORM) are used to generate extreme sea states with return periods of 1, 5, 10, 25, 50 and 100 years.
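The conditional construction can be sketched as follows on synthetic data, assuming the commonly used forms power3: μ(h) = a + b·h^c and exp3: σ(h) = a + b·exp(c·h) for the dependence functions named above.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(9)
# Synthetic sea states in place of the SWAN hindcast
hs = stats.weibull_min.rvs(1.6, loc=0.2, scale=1.4, size=30_000, random_state=9)
tp = np.exp(rng.normal(1.0 + 0.6 * hs**0.4, 0.12 + 0.1 * np.exp(-0.3 * hs)))

# Marginal: 3-parameter Weibull for Hs
k, loc, scale = stats.weibull_min.fit(hs)
print(f"3P Weibull: shape={k:.2f}, location={loc:.2f}, scale={scale:.2f}")

# Conditional: estimate mu/sigma of log(Tp) in Hs bins, then fit the
# dependence functions to the binned estimates
edges = np.quantile(hs, np.linspace(0, 1, 21))
mids, mus, sigmas = [], [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (hs >= lo) & (hs < hi)
    if sel.sum() > 50:
        mids.append(hs[sel].mean())
        mus.append(np.log(tp[sel]).mean())
        sigmas.append(np.log(tp[sel]).std())

power3 = lambda h, a, b, c: a + b * h**c           # mu(h)
exp3 = lambda h, a, b, c: a + b * np.exp(c * h)    # sigma(h)
mu_p, _ = optimize.curve_fit(power3, mids, mus, p0=[1, 0.5, 0.5])
sg_p, _ = optimize.curve_fit(exp3, mids, sigmas, p0=[0.1, 0.1, -0.3])
print("mu(h) params:", np.round(mu_p, 3), " sigma(h) params:", np.round(sg_p, 3))
```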

26 pages, 42656 KB  
Article
Recognizing Mixing Patterns of Urban Agglomeration Based on Complex Network Assortativity Coefficient: A Case Study in China
by Kaiqi Zhang, Lujin Jia and Sheng Xu
Appl. Sci. 2025, 15(4), 2024; https://doi.org/10.3390/app15042024 - 14 Feb 2025
Abstract
Understanding mixing patterns in urban networks is crucial for exploring the connectivity relationships between nodes and revealing their connection tendencies. Based on multi-source data (Baidu index data, investment data of listed companies, high-speed rail operation data, and highway network data) from 2017 to 2019 across seven national-level urban agglomerations, this study introduces complex network assortativity coefficients to analyze the mechanisms of urban relationship formation along two dimensions, structural features and socioeconomic attributes; to evaluate how these features shape urban agglomeration networks; and to classify diverse developmental patterns from the distribution of assortativity coefficients across agglomerations. The results show that the sampled cities exhibit heterogeneous characteristics, following a stretched exponential distribution in urban structural features and a log-normal distribution in socioeconomic attributes, demonstrating significant resource mixing patterns. Different types of urban agglomeration networks display distinct assortativity characteristics: mixing patterns in information networks within urban agglomerations are insignificant, whereas investment relationships, high-speed rail, and highway networks demonstrate significant centripetal mixing patterns. The assortativity coefficients of urban agglomerations follow a unified general probability density distribution, suggesting that urban agglomerations objectively tend toward centripetal agglomeration.
(This article belongs to the Special Issue Spatial Data and Technology Applications)
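The two assortativity notions are easy to illustrate on a toy inter-city graph: degree assortativity for structural mixing (via networkx) and a Pearson correlation of a node attribute such as GDP across edge endpoints for socioeconomic mixing; the graph and GDP values below are invented for illustration.

```python
import numpy as np
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"),
                  ("D", "E"), ("E", "F"), ("C", "F")])
gdp = {"A": 9.2, "B": 4.1, "C": 3.8, "D": 2.5, "E": 1.9, "F": 1.2}

# Structural mixing: correlation of degrees at either end of an edge
r_deg = nx.degree_assortativity_coefficient(G)

# Socioeconomic mixing: Pearson correlation of the attribute over edge
# endpoints, symmetrized because the graph is undirected
xs = [gdp[u] for u, v in G.edges()] + [gdp[v] for u, v in G.edges()]
ys = [gdp[v] for u, v in G.edges()] + [gdp[u] for u, v in G.edges()]
r_gdp = float(np.corrcoef(xs, ys)[0, 1])

print(f"degree assortativity: {r_deg:.3f}")   # negative: hub-periphery mixing
print(f"GDP assortativity:    {r_gdp:.3f}")
```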

19 pages, 2810 KB  
Article
Effect of Optical Aberrations on Laser Transmission Performance in Maritime Atmosphere Turbulence
by Jiabao Peng, Yaqian Li, Zhangjun Wang, Chao Chen and Tao Zhu
Photonics 2025, 12(2), 140; https://doi.org/10.3390/photonics12020140 - 10 Feb 2025
Abstract
Focusing on three critical factors that influence laser communication systems operating in marine environments (atmospheric turbulence disturbances, atmospheric attenuation, and optical aberration effects), this paper employs numerical simulation to systematically investigate the influence of four typical Zernike aberrations (defocus, y-coma, spherical aberration, and y-secondary quadrupole) on laser atmospheric transmission characteristics and system bit error rates. A comparison of their atmospheric transmission performance with the aberration-free state is also presented. The results show that reducing turbulence strength or increasing the receiver aperture radius can effectively mitigate the scintillation effect of intensity fluctuations. Among the four aberrations, y-coma shows the largest fluctuation range in the relative change rate of the scintillation index with respect to the aberration-free state. In weak turbulence and short-distance laser transmission over the sea, the beam drift caused by these four aberrations is not significant, and stronger turbulence or larger aberration weight coefficients lead to more severe beam expansion. The on-axis logarithmic intensity probability density of laser beams with different aberrations approximately follows a log-normal distribution. The skewness (S) and kurtosis (K) of the logarithmic intensity distribution are negatively correlated and always satisfy S < 0 and K > 0. Additionally, as turbulence strength increases, turbulence effects significantly raise the signal-to-noise ratio (SNR) required to achieve a bit error rate of 10⁻⁹. When turbulence strength reaches a certain level, the impact weights of different aberrations on system performance may change. These results provide theoretical references for the design and optimization of laser system parameters in marine laser communication.
(This article belongs to the Special Issue Optical Light Propagation and Communication Through Turbulent Medium)

23 pages, 11956 KB  
Article
Interpretable Machine Learning Insights into the Factors Influencing Residents’ Travel Distance Distribution
by Rui Si, Yaoyu Lin, Dongquan Yang and Qijin Guo
ISPRS Int. J. Geo-Inf. 2025, 14(1), 39; https://doi.org/10.3390/ijgi14010039 - 20 Jan 2025
Abstract
Understanding intra-urban travel patterns through quantitative analysis is crucial for effective urban planning and transportation management. Previous studies have modeled a range of distribution functions to lay the groundwork for human mobility research, but few have explored the nonlinear relationships between travel distance patterns and environmental factors. Using travel distance data from ride-hailing services, this research divides a study area into 1 × 1 km grid cells, fits the best travel distance distribution, and calculates its coefficients for each grid cell. A machine learning framework (Extreme Gradient Boosting combined with Shapley Additive Explanations) is introduced to interpret the factors influencing these distributions. Our results emphasize that the travel distance of human movement tends to follow a log-normal distribution and exhibits spatial heterogeneity. Key factors affecting travel distance distributions include the distance to the city center, bus station density, land use entropy, and the density of companies. Most environmental variables exhibit nonlinear and threshold effects on the log-normal distribution coefficients. These findings significantly advance our understanding of ride-hailing travel patterns and offer valuable insights into the spatial dynamics of human mobility.
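A compact sketch of the XGBoost-plus-SHAP pipeline on synthetic grid-level data; the feature names and the threshold effect built into the target are illustrative assumptions, not the paper's measurements.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(10)
n = 2_000
X = np.column_stack([
    rng.uniform(0, 20, n),   # distance to city centre (km), assumed feature
    rng.uniform(0, 15, n),   # bus station density (per km^2)
    rng.uniform(0, 1, n),    # land-use entropy
    rng.uniform(0, 50, n),   # company density (per km^2)
])
# Synthetic "log-normal mu coefficient" with a threshold effect on distance
y = (1.2 + 0.05 * np.maximum(X[:, 0] - 8, 0)
     - 0.02 * X[:, 1] + rng.normal(0, 0.1, n))

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# SHAP values: per-sample, per-feature contributions to the prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
names = ["dist_center", "bus_density", "landuse_entropy", "company_density"]
for name, imp in sorted(zip(names, np.abs(shap_values).mean(0)),
                        key=lambda t: -t[1]):
    print(f"{name:16s} mean |SHAP| = {imp:.4f}")
```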

10 pages, 4044 KB  
Article
Modelling the Impact of Argon Atoms on a WO3 Surface by Molecular Dynamics Simulations
by Shokirbek Shermukhamedov, Thana Maihom and Michael Probst
Molecules 2024, 29(24), 5928; https://doi.org/10.3390/molecules29245928 - 16 Dec 2024
Abstract
Machine learning potential energy functions can drive the atomistic dynamics of molecules, clusters, and condensed phases. They are among the first examples showing how quantum mechanics together with machine learning can predict chemical reactions and material properties, and even lead to new materials. In this work, we study the behaviour of tungsten trioxide (WO₃) surfaces upon particle impact by employing potential energy surfaces represented by neural networks. Besides being omnipresent on tungsten surfaces exposed to air, WO₃ plays an important role in nuclear fusion experiments due to the preferred use of tungsten for plasma-facing components; there, the formation of WO₃ is caused by omnipresent traces of oxygen. WO₃ thus becomes a plasma-facing material, but its properties, especially concerning degradation, have hardly been studied. We employ molecular dynamics simulations to investigate the sputtering, reflection, and adsorption phenomena occurring on WO₃ surfaces irradiated with argon. The machine-learned potential energy function underlying the MD simulations is a neural network potential (NNP) trained on large sets of density functional theory calculations by means of the Behler–Parrinello method. The analysis focuses on sputtering yields for both oxygen and tungsten (W) at various incident energies and impact angles. An increase in Ar incident energy increases the sputtering yield of oxygen, with distinct features observed in different energy ranges. The sputtering yields of tungsten remain exceedingly low, even compared to pristine W surfaces. The ratios between reflection, adsorption, and retention of the Ar atoms were analyzed as functions of impact energy and incidence angle. We find that the energy spectrum of sputtered oxygen atoms follows a lognormal distribution and offers information about surface binding energies on the WO₃ surface.
(This article belongs to the Section Computational and Theoretical Chemistry)

11 pages, 489 KB  
Article
The Lognormal Distribution Is Characterized by Its Integer Moments
by Pier Luigi Novi Inverardi and Aldo Tagliani
Mathematics 2024, 12(23), 3830; https://doi.org/10.3390/math12233830 - 4 Dec 2024
Abstract
The lognormal moment sequence is considered. Using the fractional moments technique, it is first proved that the lognormal has the largest differential entropy among the infinitely many positively supported probability densities sharing the lognormal moments. Then, relying on the authors' previous theoretical results on entropy convergence for the indeterminate Stieltjes moment problem, the lognormal distribution is accurately reconstructed by the maximum entropy technique using only its integer moment sequence, even though it is not uniquely determined by its moments.
(This article belongs to the Section D1: Probability and Statistics)
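The moment indeterminacy at the heart of the paper can be checked numerically: Heyde's classical perturbation f(x)·(1 + ε·sin(2π ln x)) of the standard lognormal density has exactly the same integer moments E[Xⁿ] = exp(n²/2), so integer moments alone cannot pin down the distribution.

```python
import numpy as np
from scipy import integrate, stats

def moment(n, eps):
    # E[X^n] with X = exp(T), T ~ N(0,1), density perturbed by eps*sin(2*pi*t);
    # the integrand e^{nt} * phi(t) is concentrated near t = n
    f = lambda t: (np.exp(n * t) * stats.norm.pdf(t)
                   * (1 + eps * np.sin(2 * np.pi * t)))
    val, _ = integrate.quad(f, -12, 12 + n)  # effectively (-inf, inf)
    return val

for n in range(5):
    print(f"n={n}: lognormal {moment(n, 0):.4f}  "
          f"perturbed {moment(n, 0.5):.4f}  exact {np.exp(n**2 / 2):.4f}")
```

The two columns agree for every integer n because the sine term integrates to zero against each shifted Gaussian, which is exactly the indeterminacy the maximum entropy reconstruction must overcome.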
