Next Issue
Volume 2, March
 
 

Stats, Volume 1, Issue 1 (December 2018) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 369 KiB  
Article
Statistical Inference for Progressive Stress Accelerated Life Testing with Birnbaum-Saunders Distribution
by Naijun Sha
Stats 2018, 1(1), 189-203; https://doi.org/10.3390/stats1010014 - 12 Dec 2018
Cited by 7 | Viewed by 2017
Abstract
Because the two-parameter Birnbaum–Saunders (BS) distribution has been successful in modelling fatigue failure times, several extensions of this model have been explored from different aspects. In this article, we consider progressive stress accelerated life testing for the BS model to introduce a generalized Birnbaum–Saunders (Type-II GBS) distribution for the lifetime of products in the test. We outline some interesting properties of this highly flexible distribution, present the Fisher information for the maximum likelihood estimation method, and propose a new Bayesian approach for inference. Simulation studies are carried out to assess the performance of the methods under various settings of parameter values and sample sizes. Real data are analyzed for illustrative purposes to demonstrate the efficiency and accuracy of the proposed Bayesian method over the likelihood-based procedure.
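As context for the BS model above, here is a minimal sketch, assuming SciPy, of the two-parameter Birnbaum–Saunders distribution (available in SciPy as `fatiguelife`): sampling via its normal-based stochastic representation, followed by a maximum likelihood fit. The parameter values are invented for illustration; this is not the paper's Type-II GBS model or its Bayesian procedure.

```python
import numpy as np
from scipy.stats import fatiguelife

rng = np.random.default_rng(0)
alpha, beta = 0.5, 2.0   # illustrative shape and scale values

# Stochastic representation: if Z ~ N(0, 1), then
# T = beta * (alpha*Z/2 + sqrt((alpha*Z/2)**2 + 1))**2 is BS(alpha, beta).
z = rng.standard_normal(5000)
t = beta * (alpha * z / 2 + np.sqrt((alpha * z / 2) ** 2 + 1) ) ** 2

# Maximum likelihood fit; loc is fixed at 0 for the two-parameter form.
# In scipy's `fatiguelife`, the shape c corresponds to alpha and `scale` to beta.
c_hat, loc, scale_hat = fatiguelife.fit(t, floc=0)
```

With 5000 samples the estimates should recover the true alpha and beta closely.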
13 pages, 323 KiB  
Article
A Non-Mixture Cure Model for Right-Censored Data with Fréchet Distribution
by Durga H. Kutal and Lianfen Qian
Stats 2018, 1(1), 176-188; https://doi.org/10.3390/stats1010013 - 15 Nov 2018
Cited by 6 | Viewed by 3657
Abstract
This paper considers a non-mixture cure model for right-censored data. It utilizes the maximum likelihood method to estimate the model parameters in the non-mixture cure model. The simulation study is based on the Fréchet susceptible distribution to evaluate the performance of the method. Compared with the Weibull and exponentiated exponential distributions, the non-mixture Fréchet distribution is shown to be the best at modeling real data on allogeneic marrow transplants from HLA-matched donors and the ECOG phase III clinical trial e1684 data.
7 pages, 211 KiB  
Article
Decreasing Respondent Heterogeneity by Likert Scales Adjustment via Multipoles
by Stan Lipovetsky and Michael Conklin
Stats 2018, 1(1), 169-175; https://doi.org/10.3390/stats1010012 - 14 Nov 2018
Cited by 8 | Viewed by 2621
Abstract
A description of Likert scales can be given using the multipoles technique known in quantum physics and applied to behavioral sciences data. This paper considers the decomposition of Likert scales by multipoles for the purpose of decreasing respondent heterogeneity. Due to cultural and language differences, different respondents habitually use the lower end, the mid-scale, or the upper end of the Likert scales, which can lead to distortion and inconsistency in data across respondents. The large impact of different kinds of respondents is well known, for instance, in international studies, where it is called the problem of high and low raters. Applying the multipoles technique to raw-data smoothing, via prediction of individual rates from the histogram of the Likert scale tiers, produces better results than standard row-centering of the data. A numerical example with marketing research data shows that the results are encouraging: while standard row-centering produces a poor outcome, the dipole adjustment noticeably improves the obtained segmentation results.
14 pages, 2716 KiB  
Article
A New Signal Processing Approach for Discrimination of EEG Recordings
by Hossein Hassani, Mohammad Reza Yeganegi and Emmanuel Sirimal Silva
Stats 2018, 1(1), 155-168; https://doi.org/10.3390/stats1010011 - 08 Nov 2018
Cited by 6 | Viewed by 2619
Abstract
Classifying brain activities based on electroencephalogram (EEG) signals is one of the important applications of time series discriminant analysis for diagnosing brain disorders. In this paper, we introduce a new method based on the Singular Spectrum Analysis (SSA) technique for classifying brain activity based on EEG signals, via an application to a benchmark dataset for epileptic study with five categories, consisting of 100 EEG recordings per category. The results from the SSA-based approach are compared with those from the discrete wavelet transform, before proposing a hybrid approach based on SSA and principal component analysis for improving accuracy levels further.
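The SSA decomposition underlying this kind of approach can be sketched in a few lines: embed the series in a Hankel trajectory matrix, take an SVD, keep the leading singular components, and reconstruct by diagonal averaging. This is a generic illustration on synthetic data, not the paper's EEG classification pipeline; the window and rank values are arbitrary choices.

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Embed series x with the given window length, keep the leading
    `rank` singular components, and reconstruct by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of x.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-`rank` approximation of the trajectory matrix.
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Diagonal (Hankel) averaging back to a series.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 200)
signal = np.sin(t)
noisy = signal + 0.3 * rng.standard_normal(200)
smooth = ssa_reconstruct(noisy, window=40, rank=2)  # a sine needs ~2 components
```

Keeping only the leading components acts as a noise filter, which is the property the discriminant step builds on.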
21 pages, 731 KiB  
Article
Causality between Oil Prices and Tourist Arrivals
by Xu Huang, Emmanuel Silva and Hossein Hassani
Stats 2018, 1(1), 134-154; https://doi.org/10.3390/stats1010010 - 20 Oct 2018
Cited by 4 | Viewed by 3393
Abstract
This paper investigates the causal relationship between oil prices and tourist arrivals to further explain the impact of oil price volatility on tourism-related economic activities. The analysis considers the time domain, frequency domain, and information theory domain perspectives. Data relating to the US and nine European countries are analyzed with causality tests spanning the time domain, the frequency domain, and Convergent Cross Mapping (CCM). The CCM approach is nonparametric and therefore not restricted by parametric assumptions. We contribute to existing research through the successful introductory application of an advanced method and by uncovering significant causal links from oil prices to tourist arrivals.
22 pages, 3741 KiB  
Article
Building W Matrices Using Selected Geostatistical Tools: Empirical Examination and Application
by Elżbieta Antczak
Stats 2018, 1(1), 112-133; https://doi.org/10.3390/stats1010009 - 29 Sep 2018
Cited by 8 | Viewed by 2800
Abstract
This paper investigates how to determine the values (elements) of spatial weights in a spatial weight matrix (W) endogenously from the data. To achieve this goal, geostatistical tools (standard deviational ellipses, semivariograms, semivariogram clouds, and surface trend models) were used. Then, in the econometric part of the analysis, the effect of applying different variants of the matrices was examined. The study was conducted on a sample of 279 Polish towns over 2005–2015. Variables were related to the quantity of produced waste and economic development. Both exploratory spatial data analysis and estimations of spatial panel and seemingly unrelated regression models were performed by including particular W matrices in the study (exogenous-random as well as distance and directional matrices constructed from the data). The results indicated that (1) geostatistical tools can be effectively used to build Ws; (2) the outcomes of applying different matrices did not exclude but supplemented one another, although the differences were significant; (3) the most precise picture of spatial dependence was achieved by including distance matrices; and (4) the estimated parameter values of the regressors did not change significantly, although there was a change in the strength of the spatial dependency.
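For context, the simplest distance-based W of the kind compared in the paper, a row-standardised inverse-distance matrix, can be built directly from coordinates. The coordinates below are invented for illustration; the paper's semivariogram- and trend-surface-based weights are not reproduced here.

```python
import numpy as np

# Hypothetical town coordinates (rows are locations, columns x/y).
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])

# Pairwise Euclidean distances between all locations.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

# Inverse-distance weights; the diagonal (distance 0) is zeroed out,
# since a location is not its own neighbour.
with np.errstate(divide="ignore"):
    W = 1.0 / d
np.fill_diagonal(W, 0.0)

# Row-standardise so each row sums to 1, the usual convention in
# spatial econometrics before W enters a spatial lag or error model.
W = W / W.sum(axis=1, keepdims=True)
```

Row standardisation makes the spatial lag `W @ y` an average of neighbouring values, which keeps the spatial autoregressive parameter on an interpretable scale.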
14 pages, 247 KiB  
Review
Recent Extensions to the Cochran–Mantel–Haenszel Tests
by J. C. W. Rayner and Paul Rippon
Stats 2018, 1(1), 98-111; https://doi.org/10.3390/stats1010008 - 26 Sep 2018
Cited by 5 | Viewed by 3818
Abstract
The Cochran–Mantel–Haenszel (CMH) methodology is a suite of tests applicable to particular tables of count data. The inference is conditional on the treatment and outcome totals in each stratum being known before sighting the data. The CMH tests are important for analysing randomised blocks data when the responses are categorical rather than continuous. This overview of some recent extensions to CMH testing first describes the traditional CMH tests and then explores new alternative presentations of the ordinal CMH tests. Next, the ordinal CMH tests are extended so they can be used to test for higher moment effects. Finally, unconditional analogues of the extended CMH tests are developed.
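The traditional CMH test for K stratified 2×2 tables, the starting point of this review, follows directly from its formula: compare the summed observed counts in one cell against their conditional (hypergeometric) expectations and variances under the null. A minimal sketch with invented counts:

```python
import numpy as np
from scipy.stats import chi2

def cmh_statistic(tables):
    """CMH statistic (no continuity correction) for stratified 2x2 tables.

    tables: array-like of shape (K, 2, 2) with counts [[a, b], [c, d]]
    per stratum. Returns (statistic, p-value) against chi-square(1).
    """
    tables = np.asarray(tables, dtype=float)
    a = tables[:, 0, 0]                    # observed cell (1,1) per stratum
    n1 = tables[:, 0].sum(axis=1)          # row-1 totals
    m1 = tables[:, :, 0].sum(axis=1)       # column-1 totals
    n = tables.sum(axis=(1, 2))            # stratum totals
    expect = n1 * m1 / n                   # E[a_k] under the null
    var = n1 * (n - n1) * m1 * (n - m1) / (n ** 2 * (n - 1))
    stat = (a.sum() - expect.sum()) ** 2 / var.sum()
    return stat, chi2.sf(stat, df=1)

# Two invented strata, both showing the same treatment-outcome association.
tables = [[[10, 5], [3, 12]],
          [[8, 4], [2, 10]]]
stat, pval = cmh_statistic(tables)
```

Conditioning on the margins of each stratum is exactly the "totals known before sighting the data" assumption described in the abstract.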
6 pages, 231 KiB  
Article
Smooth Tests of Fit for the Lindley Distribution
by D. J. Best and J. C. W. Rayner
Stats 2018, 1(1), 92-97; https://doi.org/10.3390/stats1010007 - 22 Jul 2018
Cited by 2 | Viewed by 2239
Abstract
We consider the little-known one-parameter Lindley distribution. This distribution may be of interest as it appears to be more flexible than the exponential distribution, with the Lindley fitting a wider range of data than the exponential. We give smooth tests of fit for this distribution. The smooth test for the Lindley has power comparable with that of the Anderson–Darling test. Advantages of the smooth test are discussed. Examples that illustrate the flexibility of this distribution are given.
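For reference, the one-parameter Lindley density and a sampler based on its well-known mixture representation (Exp(θ) with probability θ/(1+θ), otherwise Gamma(2, θ)) can be sketched as follows. This illustrates the distribution under test, not the smooth test itself; the θ value is arbitrary.

```python
import numpy as np

def lindley_pdf(x, theta):
    """Lindley density f(x) = theta^2/(1+theta) * (1+x) * exp(-theta*x), x > 0."""
    return theta ** 2 / (1 + theta) * (1 + x) * np.exp(-theta * x)

def lindley_rvs(theta, size, rng):
    """Sample via the mixture representation of the Lindley distribution."""
    use_exp = rng.random(size) < theta / (1 + theta)
    return np.where(use_exp,
                    rng.exponential(1 / theta, size),   # Exp(theta) branch
                    rng.gamma(2, 1 / theta, size))      # Gamma(2, theta) branch

rng = np.random.default_rng(2)
theta = 1.5
sample = lindley_rvs(theta, 20000, rng)

# Theoretical mean of the Lindley distribution, for a sanity check.
mean_theory = (theta + 2) / (theta * (theta + 1))
```

The extra (1+x) factor relative to the exponential density is what gives the Lindley its additional flexibility.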
15 pages, 518 KiB  
Article
A New Burr XII-Weibull-Logarithmic Distribution for Survival and Lifetime Data Analysis: Model, Theory and Applications
by Broderick O. Oluyede, Boikanyo Makubate, Adeniyi F. Fagbamigbe and Precious Mdlongwa
Stats 2018, 1(1), 77-91; https://doi.org/10.3390/stats1010006 - 09 Jun 2018
Cited by 6 | Viewed by 3006
Abstract
A new compound distribution called the Burr XII-Weibull-Logarithmic (BWL) distribution is introduced and its properties are explored. This new distribution contains several new and well-known sub-models, including the Burr XII-Exponential-Logarithmic, Burr XII-Rayleigh-Logarithmic, Burr XII-Logarithmic, Lomax-Exponential-Logarithmic, Lomax-Rayleigh-Logarithmic, Weibull, Rayleigh, Lomax, Lomax-Logarithmic, Weibull-Logarithmic, Rayleigh-Logarithmic, and Exponential-Logarithmic distributions. Some statistical properties of the proposed distribution, including moments and conditional moments, are presented. The maximum likelihood estimation technique is used to estimate the model parameters. Finally, applications of the model to real data sets are presented to illustrate the usefulness of the proposed distribution.
29 pages, 599 KiB  
Article
The Impact of Misspecified Random Effect Distribution in a Weibull Regression Mixed Model
by Freddy Hernández and Viviana Giampaoli
Stats 2018, 1(1), 48-76; https://doi.org/10.3390/stats1010005 - 31 May 2018
Cited by 3 | Viewed by 3381
Abstract
Mixed models are useful tools for analyzing clustered and longitudinal data. These models typically assume that random effects are normally distributed. However, this may be unrealistic or restrictive when representing the information in the data. Several papers have been published to quantify the impact of misspecification of the shape of the random effects in mixed models. Notably, these studies primarily concentrated on models whose response variables have normal, logistic and Poisson distributions, and the results were not conclusive. As such, we investigated the misspecification of the shape of the random effects in a Weibull regression mixed model with random intercepts in the two parameters of the Weibull distribution. Through an extensive simulation study considering six random effect distributions and assuming normality for the random effects in the estimation procedure, we found an impact of misspecification on the estimates of the fixed effects associated with the second parameter σ of the Weibull distribution. Additionally, the variance components of the model were also affected by the misspecification.
16 pages, 910 KiB  
Article
A New Extended Birnbaum–Saunders Model: Properties, Regression and Applications
by Gauss Moutinho Cordeiro, Maria Do Carmo Soares De Lima, Edwin Moisés Marcos Ortega and Adriano Kamimura Suzuki
Stats 2018, 1(1), 32-47; https://doi.org/10.3390/stats1010004 - 18 May 2018
Cited by 2 | Viewed by 2878
Abstract
We propose an extended fatigue lifetime model called the odd log-logistic Birnbaum–Saunders–Poisson distribution, which includes as special cases the Birnbaum–Saunders and odd log-logistic Birnbaum–Saunders distributions. We obtain some structural properties of the new distribution. We define a new extended regression model based on the logarithm of the odd log-logistic Birnbaum–Saunders–Poisson random variable. For censored data, we estimate the parameters of the regression model using maximum likelihood. We investigate the accuracy of the maximum likelihood estimates using Monte Carlo simulations. The importance of the proposed models, when compared to existing models, is illustrated by means of two real data sets.
11 pages, 2508 KiB  
Article
Probability and Body Composition of Metabolic Syndrome in Young Adults: Use of the Bayes Theorem as Diagnostic Evidence of the Waist-to-Height Ratio
by Ashuin Kammar, María Elena Hernández-Hernández, Patricia López-Moreno, Angélica María Ortíz-Bueno and María De Lurdez Consuelo Martínez-Montaño
Stats 2018, 1(1), 21-31; https://doi.org/10.3390/stats1010003 - 16 May 2018
Cited by 1 | Viewed by 3175
Abstract
Metabolic syndrome (MS) directly increases the risk of cardiovascular diseases. MS has been studied mostly in childhood and adulthood, leaving aside the young adult population. This study aimed to compare the epidemiological probabilities between MS and different anthropometric parameters of body composition. In a cross-sectional study with a sample of 1351 young adults, different body composition parameters were obtained, such as Waist Circumference (WC), Body Mass Index (BMI), Body Fat % (BF%), Waist-to-Height Ratio (WHtR), and Waist-to-Hip Ratio (WHR). Bayes' theorem was applied to estimate the conditional probability that a subject develops MS given an altered anthropometric parameter of body composition. Areas under receiver operating characteristic curves (AUCs) and adjusted odds ratios of the five parameters were analyzed at their optimal cutoffs. The conditional probability of developing MS given an altered anthropometric parameter was 17% for WHtR, WC, and WHR. Furthermore, body composition parameters were adjusted by age, BMI, and gender. Only WHtR (OR = 9.43, CI = 3.4–26.13, p < 0.0001) and BF% (OR = 3.18, CI = 1.42–7.13, p = 0.005) were significant, and the sensitivity (84%) and the AUC (86%) were higher for WHtR than for the other parameters. In young adults, the WHtR was the best predictor of metabolic syndrome.
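The Bayes-theorem step described above amounts to a one-line posterior calculation: combine sensitivity with prevalence via the law of total probability. The sketch below uses the reported 84% sensitivity together with invented specificity and prevalence values, purely to illustrate the mechanics; these are not the study's figures.

```python
# All probabilities below except `sens` are hypothetical illustration values.
sens = 0.84   # P(altered WHtR | MS) -- sensitivity reported in the abstract
spec = 0.70   # P(normal WHtR | no MS) -- invented specificity
prev = 0.10   # P(MS) -- invented prevalence among young adults

# Law of total probability: P(altered WHtR) over both disease states.
p_altered = sens * prev + (1 - spec) * (1 - prev)

# Bayes' theorem: P(MS | altered WHtR).
p_ms_given_altered = sens * prev / p_altered
```

Note how strongly the posterior depends on prevalence: with a low base rate, even a sensitive marker yields a modest conditional probability, which is why the study reports this quantity directly.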
7 pages, 287 KiB  
Article
On Moments of Gamma—Exponentiated Functional Distribution
by Katarzyna Górska, Andrzej Horzela and Tibor K. Pogány
Stats 2018, 1(1), 14-20; https://doi.org/10.3390/stats1010002 - 30 Mar 2018
Viewed by 2549
Abstract
In this note we discuss the development of a new Gamma exponentiated functional GE(α, h) distribution, using the Gamma baseline distribution generating method of Zografos and Balakrishnan. The raw moments of the GE(α, h) distribution are derived. The related probability distribution class is characterized in terms of the Lambert W function.
13 pages, 291 KiB  
Article
A Nonparametric Statistical Approach to Content Analysis of Items
by Diego Marcondes and Nilton Rogerio Marcondes
Stats 2018, 1(1), 1-13; https://doi.org/10.3390/stats1010001 - 01 Feb 2018
Cited by 2 | Viewed by 3250
Abstract
To assess a multidimensional construct with psychometric instruments, we may decompose it into dimensions and develop a set of items for each dimension, so that the construct as a whole can be assessed through its dimensions. In this scenario, content analysis of items aims to verify whether the developed items assess the dimension they are supposed to, by requesting the judgement of specialists in the studied construct about which dimension each developed item assesses. This paper develops a nonparametric statistical approach based on Cochran's Q test to analyse the content of items, presenting a practical method to assess the consistency of the content analysis process; this is achieved through a statistical test that seeks to determine whether all the specialists have the same capability to judge the items. A simulation study is conducted to check the consistency of the test, and it is applied to a real validation process.
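Cochran's Q test, on which the proposed approach builds, operates on a binary items-by-specialists matrix, here coded 1 if the specialist assigned the item to its intended dimension and 0 otherwise. A minimal sketch with invented judgement data (it does not reproduce the paper's extended test):

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x):
    """Cochran's Q test for equality of proportions across related samples.

    x: (n_items, k_specialists) binary matrix. Returns (Q, p-value),
    where Q is chi-square distributed with k-1 df under the null.
    """
    x = np.asarray(x, dtype=float)
    k = x.shape[1]
    col = x.sum(axis=0)    # per-specialist success totals
    row = x.sum(axis=1)    # per-item success totals
    T = x.sum()            # grand total
    q = (k - 1) * (k * (col ** 2).sum() - T ** 2) / (k * T - (row ** 2).sum())
    return q, chi2.sf(q, df=k - 1)

# Invented data: 6 items judged by 3 specialists; specialist 1 agrees with
# the intended dimension more often than specialists 2 and 3.
judgements = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
])
q, p = cochrans_q(judgements)
```

A significant Q indicates that the specialists do not all have the same capability to judge the items, which is exactly the hypothesis the paper's consistency check targets.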