Methodological Innovations and Reflections-1

A special issue of International Journal of Environmental Research and Public Health (ISSN 1660-4601).

Deadline for manuscript submissions: closed (31 August 2015) | Viewed by 131629

Special Issue Information

Dear Colleagues,

The idea behind this Topical Collection of the journal, dedicated to Methodological Innovations and Reflections, is not novel. Even the title of the Topical Collection owes its existence to the journal Epidemiologic Perspectives & Innovations, which was discontinued by BMC in 2012. This choice is intentional. Our aim is to create the right environment for the open exchange of innovative ideas and reflections: concepts that, in and of themselves, may not be new to some fields (e.g., statistics or economics), but are unknown or under-appreciated in public health research (including, but not limited to, epidemiology, exposure sciences, and toxicology). We strive to stimulate dialogue about how we do science and, more importantly, how we could do it better.

There may not be much novelty in this general approach, but we feel strongly that it must be discussed in the open literature so that all may benefit, much as any institution of higher learning benefits from its seminar series: such series create a safe and respectful environment in which to discuss innovation without the threat of being judged for making errors, and to explore ideas that may, or may not, lead to wide adoption and/or substantive advances. The main goal of the Topical Collection is to advance methodology through debate, rather than through the publication of a single seminal article: a defined process that stimulates creativity, reflection and innovation is our aim. Of course, we also strive to bring into the open some of the papers that are usually judged too simple for theoretical statistical journals and yet too complex for applied journals. To this effect, we promise our readers and contributors that editorial decisions will reflect the soundness of the argument rather than the implications of its conclusions: elegant mathematical arguments and logical interpretations that challenge the current orthodoxy are highly encouraged.

We look forward to working with you in making this exciting endeavor a success. It is up to all of us to make this a reality. For new articles on this topic, please check our Topical Collection "Methodological Innovations and Reflections".

Dr. Igor Burstyn
Dr. George Luta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. International Journal of Environmental Research and Public Health is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (19 papers)

Research

Article
Empirical Likelihood-Based ANOVA for Trimmed Means
by Mara Velina, Janis Valeinis, Luca Greco and George Luta
Int. J. Environ. Res. Public Health 2016, 13(10), 953; https://doi.org/10.3390/ijerph13100953 - 27 Sep 2016
Cited by 2 | Viewed by 3597
Abstract
In this paper, we introduce an alternative to Yuen’s test for the comparison of several population trimmed means. This nonparametric ANOVA type test is based on the empirical likelihood (EL) approach and extends the results for one population trimmed mean from Qin and Tsao (2002). The results of our simulation study indicate that for skewed distributions, with and without variance heterogeneity, Yuen’s test performs better than the new EL ANOVA test for trimmed means with respect to control over the probability of a type I error. This finding is in contrast with our simulation results for the comparison of means, where the EL ANOVA test for means performs better than Welch’s heteroscedastic F test. The analysis of a real data example illustrates the use of Yuen’s test and the new EL ANOVA test for trimmed means for different trimming levels. Based on the results of our study, we recommend the use of Yuen’s test for situations involving the comparison of population trimmed means between groups of interest. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
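
As a quick way to experiment with the kind of trimmed-mean comparison studied here, the sketch below simulates skewed data for three groups, reports their 20% trimmed means, and runs pairwise Yuen tests through SciPy's trimmed t-test (the trim argument of scipy.stats.ttest_ind, available from SciPy 1.7). The data are made up, and the snippet illustrates the classical competitor discussed in the abstract rather than the authors' EL ANOVA procedure.

```python
# Illustrative comparison of 20% trimmed means across three simulated groups,
# with pairwise Yuen tests via SciPy's trimmed t-test (requires SciPy >= 1.7).
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "A": rng.lognormal(mean=0.0, sigma=1.0, size=50),  # skewed distributions,
    "B": rng.lognormal(mean=0.2, sigma=1.0, size=50),  # as in the simulation study
    "C": rng.lognormal(mean=0.2, sigma=1.5, size=50),  # variance heterogeneity
}
trim = 0.20  # proportion trimmed from each tail

for name, x in groups.items():
    print(f"group {name}: 20% trimmed mean = {stats.trim_mean(x, trim):.3f}")

# Pairwise Yuen tests (ttest_ind performs the trimmed/Yuen test when trim > 0).
for (na, xa), (nb, xb) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(xa, xb, trim=trim, equal_var=False)
    print(f"Yuen {na} vs {nb}: t = {t:.2f}, p = {p:.3f}")
```
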
Article
Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures
by Ionut Bebu, George Luta, Thomas Mathew and Brian K. Agan
Int. J. Environ. Res. Public Health 2016, 13(6), 605; https://doi.org/10.3390/ijerph13060605 - 18 Jun 2016
Cited by 6 | Viewed by 4501
Abstract
For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR) from several studies, the number needed to treat (NNT), and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
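
The flavor of the fiducial approach is easy to convey for the simplest case, a prevalence ratio from two independent binomial samples without covariates. The sketch below uses Beta(x + 1/2, n - x + 1/2) fiducial quantities for the two proportions, a common approximate choice that is assumed here and is not necessarily the exact construction used in the paper; the counts are hypothetical.

```python
# Approximate fiducial-style interval for a prevalence ratio p1/p0 from a 2x2 table.
# Each binomial proportion is represented by a Beta(x + 1/2, n - x + 1/2) quantity
# (an assumed, approximate choice); the counts are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

x1, n1 = 30, 200   # events / total, exposed group (hypothetical)
x0, n0 = 15, 220   # events / total, unexposed group (hypothetical)
B = 100_000

p1 = rng.beta(x1 + 0.5, n1 - x1 + 0.5, size=B)
p0 = rng.beta(x0 + 0.5, n0 - x0 + 0.5, size=B)
pr = p1 / p0                                   # draws of the prevalence ratio

lo, hi = np.percentile(pr, [2.5, 97.5])
print(f"prevalence ratio ~ {np.median(pr):.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```
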
Article
Applications of a Novel Clustering Approach Using Non-Negative Matrix Factorization to Environmental Research in Public Health
by Paul Fogel, Yann Gaston-Mathé, Douglas Hawkins, Fajwel Fogel, George Luta and S. Stanley Young
Int. J. Environ. Res. Public Health 2016, 13(5), 509; https://doi.org/10.3390/ijerph13050509 - 18 May 2016
Cited by 4 | Viewed by 5044
Abstract
Often data can be represented as a matrix, e.g., observations as rows and variables as columns, or as a doubly classified contingency table. Researchers may be interested in clustering the observations, the variables, or both. If the data is non-negative, then Non-negative Matrix Factorization (NMF) can be used to perform the clustering. By its nature, NMF-based clustering is focused on the large values. If the data is normalized by subtracting the row/column means, it becomes of mixed signs and the original NMF cannot be used. Our idea is to split and then concatenate the positive and negative parts of the matrix, after taking the absolute value of the negative elements. NMF applied to the concatenated data, which we call PosNegNMF, offers the advantages of the original NMF approach, while giving equal weight to large and small values. We use two public health datasets to illustrate the new method and compare it with alternative clustering methods, such as K-means and clustering methods based on the Singular Value Decomposition (SVD) or Principal Component Analysis (PCA). With the exception of situations where a reasonably accurate factorization can be achieved using the first SVD component, we recommend that the epidemiologists and environmental scientists use the new method to obtain clusters with improved quality and interpretability. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
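
The split-and-concatenate step at the heart of PosNegNMF is easy to prototype. The sketch below centers a random non-negative matrix, stacks the positive part and the absolute value of the negative part side by side, and factors the result with scikit-learn's NMF; it is a minimal illustration of the idea, not the authors' implementation.

```python
# Prototype of the split-and-concatenate idea: center the data, split into
# positive and negative parts, concatenate, and factor the non-negative result.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(100, 12))   # observations x variables

Xc = X - X.mean(axis=0)                # column-centered data, now of mixed sign
pos = np.clip(Xc, 0, None)             # positive part
neg = np.clip(-Xc, 0, None)            # absolute value of the negative part
Xsplit = np.hstack([pos, neg])         # non-negative, 100 x 24

k = 3                                  # number of components/clusters
model = NMF(n_components=k, init="nndsvda", max_iter=1000, random_state=0)
W = model.fit_transform(Xsplit)        # observation loadings (100 x k)
H = model.components_                  # component profiles over the split variables

row_cluster = W.argmax(axis=1)         # assign each observation to its dominant component
print(np.bincount(row_cluster, minlength=k))
```
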
Article
A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials
by Pierre Chaussé, Jin Liu and George Luta
Int. J. Environ. Res. Public Health 2016, 13(4), 414; https://doi.org/10.3390/ijerph13040414 - 11 Apr 2016
Cited by 2 | Viewed by 4408
Abstract
Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
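
The recommended parametric baseline, ANCOVA with heteroscedasticity-robust standard errors, takes only a few lines with statsmodels. The sketch below uses simulated two-arm trial data and shows the form of the adjusted estimate and its confidence interval; the GEL-based competitors (EL, ET, CUE) are not reproduced here.

```python
# ANCOVA for a two-arm randomized trial: outcome regressed on the treatment
# indicator plus baseline covariates, with heteroscedasticity-robust (HC3) SEs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
treat = rng.integers(0, 2, size=n)                 # randomized arm
x1, x2 = rng.normal(size=n), rng.normal(size=n)    # baseline covariates
y = 1.0 + 0.5 * treat + 0.8 * x1 - 0.3 * x2 + rng.normal(scale=1 + 0.5 * treat, size=n)

X = sm.add_constant(np.column_stack([treat, x1, x2]))
fit = sm.OLS(y, X).fit(cov_type="HC3")             # robust SEs for heteroscedasticity

lo, hi = fit.conf_int()[1]                         # CI for the adjusted treatment effect
print(f"adjusted difference between means = {fit.params[1]:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```
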
Article
Pooling Bio-Specimens in the Presence of Measurement Error and Non-Linearity in Dose-Response: Simulation Study in the Context of a Birth Cohort Investigating Risk Factors for Autism Spectrum Disorders
by Karyn Heavner, Craig Newschaffer, Irva Hertz-Picciotto, Deborah Bennett and Igor Burstyn
Int. J. Environ. Res. Public Health 2015, 12(11), 14780-14799; https://doi.org/10.3390/ijerph121114780 - 19 Nov 2015
Viewed by 4872
Abstract
We sought to determine the potential effects of pooling on power, false positive rate (FPR), and bias of the estimated associations between hypothetical environmental exposures and dichotomous autism spectrum disorders (ASD) status. Simulated birth cohorts in which ASD outcome was assumed to have been ascertained with uncertainty were created. We investigated the impact on the power of the analysis (using logistic regression) to detect true associations with exposure (X1) and the FPR for a non-causal correlate of exposure (X2, r = 0.7) for a dichotomized ASD measure when the pool size, sample size, degree of measurement error variance in exposure, strength of the true association, and shape of the exposure-response curve varied. We found that there was minimal change (bias) in the measures of association for the main effect (X1). There is some loss of power but there is less chance of detecting a false positive result for pooled compared to individual level models. The number of pools had more effect on the power and FPR than the overall sample size. This study supports the use of pooling to reduce laboratory costs while maintaining statistical efficiency in scenarios similar to the simulated prospective risk-enriched ASD cohort. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
A Discriminant Function Approach to Adjust for Processing and Measurement Error When a Biomarker is Assayed in Pooled Samples
by Robert H. Lyles, Dane Van Domelen, Emily M. Mitchell and Enrique F. Schisterman
Int. J. Environ. Res. Public Health 2015, 12(11), 14723-14740; https://doi.org/10.3390/ijerph121114723 - 18 Nov 2015
Cited by 6 | Viewed by 4204
Abstract
Pooling biological specimens prior to performing expensive laboratory assays has been shown to be a cost effective approach for estimating parameters of interest. In addition to requiring specialized statistical techniques, however, the pooling of samples can introduce assay errors due to processing, possibly in addition to measurement error that may be present when the assay is applied to individual samples. Failure to account for these sources of error can result in biased parameter estimates and ultimately faulty inference. Prior research addressing biomarker mean and variance estimation advocates hybrid designs consisting of individual as well as pooled samples to account for measurement and processing (or pooling) error. We consider adapting this approach to the problem of estimating a covariate-adjusted odds ratio (OR) relating a binary outcome to a continuous exposure or biomarker level assessed in pools. In particular, we explore the applicability of a discriminant function-based analysis that assumes normal residual, processing, and measurement errors. A potential advantage of this method is that maximum likelihood estimation of the desired adjusted log OR is straightforward and computationally convenient. Moreover, in the absence of measurement and processing error, the method yields an efficient unbiased estimator for the parameter of interest assuming normal residual errors. We illustrate the approach using real data from an ancillary study of the Collaborative Perinatal Project, and we use simulations to demonstrate the ability of the proposed estimators to alleviate bias due to measurement and processing error. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
An Enhanced Variable Two-Step Floating Catchment Area Method for Measuring Spatial Accessibility to Residential Care Facilities in Nanjing
by Jianhua Ni, Jinyin Wang, Yikang Rui, Tianlu Qian and Jiechen Wang
Int. J. Environ. Res. Public Health 2015, 12(11), 14490-14504; https://doi.org/10.3390/ijerph121114490 - 13 Nov 2015
Cited by 51 | Viewed by 6924
Abstract
Civil administration departments require reliable measures of accessibility so that residential care facility shortage areas can be accurately identified. Building on previous research, this paper proposes an enhanced variable two-step floating catchment area (EV2SFCA) method that determines facility catchment sizes by dynamically summing the population around the facility until the facility-to-population ratio (FPR) is less than the FPR threshold (FPRT). To minimize the errors from the supply and demand catchments being mismatched, this paper proposes that the facility and population catchment areas must both contain the other location in calculating accessibility. A case study evaluating spatial accessibility to residential care facilities in Nanjing demonstrates that the proposed method is effective in accurately determining catchment sizes and identifying details in the variation of spatial accessibility. The proposed method can be easily applied to assess other public healthcare facilities, and can provide guidance to government departments on issues of spatial planning and identification of shortage and excess areas. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
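
The catchment-growing step described in the abstract can be sketched directly: sort population points by distance from a facility and keep adding demand until the facility-to-population ratio drops below the threshold. The example below uses made-up coordinates, straight-line distances, and none of the method's further refinements (population-side catchments, distance decay, the mutual-containment rule).

```python
# Sketch of the catchment-growing step: starting from the nearest population
# points, keep adding demand until the facility-to-population ratio (FPR)
# falls below the chosen threshold (FPRT). All inputs are made up.
import numpy as np

rng = np.random.default_rng(3)
pop_xy = rng.uniform(0, 10, size=(500, 2))       # population point locations
pop_size = rng.integers(50, 500, size=500)       # persons at each point
facility_xy = np.array([5.0, 5.0])
capacity = 800                                   # beds at the facility
FPRT = 0.01                                      # threshold (beds per person)

dist = np.linalg.norm(pop_xy - facility_xy, axis=1)
order = np.argsort(dist)                         # nearest population points first

served, radius = 0, 0.0
for i in order:
    served += pop_size[i]
    radius = dist[i]
    if capacity / served < FPRT:                 # FPR has dropped below FPRT: stop growing
        break

print(f"catchment radius ~ {radius:.2f}, population within it = {served}")
print(f"facility-to-population ratio = {capacity / served:.4f}")
```
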
Article
Assessment of Offspring DNA Methylation across the Lifecourse Associated with Prenatal Maternal Smoking Using Bayesian Mixture Modelling
by Frank De Vocht, Andrew J Simpkin, Rebecca C. Richmond, Caroline Relton and Kate Tilling
Int. J. Environ. Res. Public Health 2015, 12(11), 14461-14476; https://doi.org/10.3390/ijerph121114461 - 13 Nov 2015
Cited by 11 | Viewed by 4582
Abstract
A growing body of research has implicated DNA methylation as a potential mediator of the effects of maternal smoking in pregnancy on offspring ill-health. Data were available from a UK birth cohort of children with DNA methylation measured at birth, age 7 and 17. One issue when analysing genome-wide DNA methylation data is the correlation of methylation levels between CpG sites, though this can be crudely bypassed using a data reduction method. In this manuscript we investigate the effect of sustained maternal smoking in pregnancy on longitudinal DNA methylation in their offspring using a Bayesian hierarchical mixture model. This model avoids the data reduction used in previous analyses. Four of the 28 previously identified, smoking related CpG sites were shown to have offspring methylation related to maternal smoking using this method, replicating findings in well-known smoking related genes MYO1G and GFI1. Further weak associations were found at the AHRR and CYP1A1 loci. In conclusion, we have demonstrated the utility of the Bayesian mixture model method for investigation of longitudinal DNA methylation data and this method should be considered for use in whole genome applications. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
A Case Study Perspective on Working with ProUCL and a State Environmental Agency in Determining Background Threshold Values
by David L. Daniel
Int. J. Environ. Res. Public Health 2015, 12(10), 12905-12923; https://doi.org/10.3390/ijerph121012905 - 15 Oct 2015
Cited by 5 | Viewed by 4834
Abstract
ProUCL is a software package made available by the Environmental Protection Agency (EPA) to provide environmental scientists with better tools with which to conduct statistical analyses. ProUCL has been in production for over ten years and is in its fifth major version. In time, it has included more sophisticated and appropriate analysis tools. However, there is still substantial criticism of it among statisticians for its various omissions and even its philosophical approach. Due to limited resources, some state agencies have set ProUCL as a standard by which all state-mandated environmental analyses are compared, despite the EPA’s more open acceptance of other software products and methodologies. As such, it can be difficult for state-supervised sites to convince the state to allow the use of more appropriate methodologies or different software. In the current case study, several such instances arose and substantial resources were invested to demonstrate the appropriateness of alternative methodologies, sometimes without acquiring acceptance by the state despite sound statistical demonstration. In particular, efforts were made to address: inappropriate outlier detection, upper tolerance limit (UTL) calculations based on gamma distributions when non-detects were present, and inappropriate use of nonparametric UTL formulas. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
Quantifying and Adjusting for Disease Misclassification Due to Loss to Follow-Up in Historical Cohort Mortality Studies
by Laura L. F. Scott and George Maldonado
Int. J. Environ. Res. Public Health 2015, 12(10), 12834-12846; https://doi.org/10.3390/ijerph121012834 - 15 Oct 2015
Cited by 4 | Viewed by 4117
Abstract
The purpose of this analysis was to quantify and adjust for disease misclassification from loss to follow-up in a historical cohort mortality study of workers where exposure was categorized as a multi-level variable. Disease classification parameters were defined using 2008 mortality data for the New Zealand population and the proportions of known deaths observed for the cohort. The probability distributions for each classification parameter were constructed to account for potential differences in mortality due to exposure status, gender, and ethnicity. Probabilistic uncertainty analysis (bias analysis), which uses Monte Carlo techniques, was then used to sample each parameter distribution 50,000 times, calculating adjusted odds ratios (OR_DM-LTF) that compared the mortality of workers with the highest cumulative exposure to those that were considered never-exposed. The geometric mean OR_DM-LTF ranged between 1.65 (certainty interval (CI): 0.50–3.88) and 3.33 (CI: 1.21–10.48), and the geometric mean of the disease-misclassification error factor (e_DM-LTF), which is the ratio of the observed odds ratio to the adjusted odds ratio, had a range of 0.91 (CI: 0.29–2.52) to 1.85 (CI: 0.78–6.07). Only when workers in the highest exposure category were more likely than those never-exposed to be misclassified as non-cases did the OR_DM-LTF frequency distributions shift further away from the null. The application of uncertainty analysis to historical cohort mortality studies with multi-level exposures can provide valuable insight into the magnitude and direction of study error resulting from losses to follow-up. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
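
Probabilistic bias analysis of this general kind is straightforward to prototype for a single 2x2 table: draw classification parameters from probability distributions, back-correct the observed counts, and summarize the distribution of adjusted odds ratios. The sketch below does this for simple non-differential sensitivity and specificity on hypothetical counts; the published analysis is considerably richer (multi-level exposure, parameters varying by exposure status, gender, and ethnicity).

```python
# Monte Carlo (probabilistic) bias analysis for outcome misclassification in a
# 2x2 table: draw sensitivity/specificity, back-correct the observed counts,
# and summarize the adjusted odds ratio. Counts and priors are illustrative.
import numpy as np

rng = np.random.default_rng(11)

a_obs, b_obs = 40, 460        # exposed: observed cases, non-cases (hypothetical)
c_obs, d_obs = 25, 475        # unexposed: observed cases, non-cases (hypothetical)
n_iter = 50_000

se = rng.beta(80, 20, n_iter)  # sensitivity, centered near 0.80
sp = rng.beta(95, 5, n_iter)   # specificity, centered near 0.95

def true_cases(cases_obs, total, se, sp):
    # invert: observed = Se*true + (1 - Sp)*(total - true)
    return (cases_obs - (1 - sp) * total) / (se + sp - 1)

a = true_cases(a_obs, a_obs + b_obs, se, sp)
c = true_cases(c_obs, c_obs + d_obs, se, sp)
b = (a_obs + b_obs) - a
d = (c_obs + d_obs) - c

or_adj = (a * d) / (b * c)
ok = (a > 0) & (b > 0) & (c > 0) & (d > 0)        # drop impossible (negative) corrections
or_adj = or_adj[ok]

gm = np.exp(np.mean(np.log(or_adj)))
lo, hi = np.percentile(or_adj, [2.5, 97.5])
print(f"geometric mean adjusted OR = {gm:.2f}, 95% simulation interval ({lo:.2f}, {hi:.2f})")
```
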
Article
Assessment of Residential History Generation Using a Public-Record Database
by David C. Wheeler and Aobo Wang
Int. J. Environ. Res. Public Health 2015, 12(9), 11670-11682; https://doi.org/10.3390/ijerph120911670 - 17 Sep 2015
Cited by 32 | Viewed by 4706
Abstract
In studies of disease with potential environmental risk factors, residential location is often used as a surrogate for unknown environmental exposures or as a basis for assigning environmental exposures. These studies most typically use the residential location at the time of diagnosis due to ease of collection. However, previous residential locations may be more useful for risk analysis because of population mobility and disease latency. When residential histories have not been collected in a study, it may be possible to generate them through public-record databases. In this study, we evaluated the ability of a public-records database from LexisNexis to provide residential histories for subjects in a geographically diverse cohort study. We calculated 11 performance metrics comparing study-collected addresses and two address retrieval services from LexisNexis. We found 77% and 90% match rates for city and state and 72% and 87% detailed address match rates with the basic and enhanced services, respectively. The enhanced LexisNexis service covered 86% of the time at residential addresses recorded in the study. The mean match rate for detailed address matches varied spatially over states. The results suggest that public record databases can be useful for reconstructing residential histories for subjects in epidemiologic studies. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
A Bayesian Approach to Account for Misclassification and Overdispersion in Count Data
by Wenqi Wu, James Stamey and David Kahle
Int. J. Environ. Res. Public Health 2015, 12(9), 10648-10661; https://doi.org/10.3390/ijerph120910648 - 28 Aug 2015
Cited by 4 | Viewed by 4383
Abstract
Count data are subject to considerable sources of what is often referred to as non-sampling error. Errors such as misclassification, measurement error and unmeasured confounding can lead to substantially biased estimators. It is strongly recommended that epidemiologists not only acknowledge these sorts of errors in data, but incorporate sensitivity analyses into part of the total data analysis. We extend previous work on Poisson regression models that allow for misclassification by thoroughly discussing the basis for the models and allowing for extra-Poisson variability in the form of random effects. Via simulation we show the improvements in inference that are brought about by accounting for both the misclassification and the overdispersion. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
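
A stripped-down version of the general strategy, a Poisson regression that allows for imperfect counting together with a random effect for extra-Poisson variation, can be written in a probabilistic programming language. The sketch below uses PyMC, represents misclassification as under-reporting with an informative Beta prior on the reporting probability (an assumption made here for illustration), and is only a schematic stand-in for the models developed in the paper.

```python
# Schematic Bayesian Poisson regression with under-reporting (one simple form of
# count misclassification) and a group-level random effect for overdispersion.
# Data are simulated; priors are illustrative. The reporting probability and the
# intercept are only weakly identified, hence the informative Beta prior.
import numpy as np
import pymc as pm
import arviz as az

rng = np.random.default_rng(5)
n_sites, per_site = 30, 10
site = np.repeat(np.arange(n_sites), per_site)
x = rng.normal(size=n_sites * per_site)
lam = np.exp(0.5 + 0.3 * x + rng.normal(0, 0.4, n_sites)[site])
y = rng.binomial(rng.poisson(lam), 0.8)              # true counts thinned by 80% reporting

with pm.Model():
    beta0 = pm.Normal("beta0", 0.0, 2.0)
    beta1 = pm.Normal("beta1", 0.0, 2.0)
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_sites)  # extra-Poisson variability
    p_report = pm.Beta("p_report", 16.0, 4.0)        # informative prior on reporting
    mu = pm.math.exp(beta0 + beta1 * x + u[site]) * p_report
    pm.Poisson("y_obs", mu=mu, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9, random_seed=5)

print(az.summary(idata, var_names=["beta0", "beta1", "p_report", "sigma_u"]))
```
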
Article
A Simulation Study of Categorizing Continuous Exposure Variables Measured with Error in Autism Research: Small Changes with Large Effects
by Karyn Heavner and Igor Burstyn
Int. J. Environ. Res. Public Health 2015, 12(8), 10198-10234; https://doi.org/10.3390/ijerph120810198 - 24 Aug 2015
Cited by 4 | Viewed by 4926
Abstract
Variation in the odds ratio (OR) resulting from selection of cutoffs for categorizing continuous variables is rarely discussed. We present results for the effect of varying cutoffs used to categorize a mismeasured exposure in a simulated population in the context of autism spectrum disorders research. Simulated cohorts were created with three distinct exposure-outcome curves and three measurement error variances for the exposure. ORs were calculated using logistic regression for 61 cutoffs (mean ± 3 standard deviations) used to dichotomize the observed exposure. ORs were calculated for five categories with a wide range for the cutoffs. For each scenario and cutoff, the OR, sensitivity, and specificity were calculated. The three exposure-outcome relationships had distinctly shaped OR (versus cutoff) curves, but increasing measurement error obscured the shape. At extreme cutoffs, there was non-monotonic oscillation in the ORs that cannot be attributed to “small numbers.” Exposure misclassification following categorization of the mismeasured exposure was differential, as predicted by theory. Sensitivity was higher among cases and specificity among controls. Cutoffs chosen for categorizing continuous variables can have profound effects on study results. When measurement error is not too great, the shape of the OR curve may provide insight into the true shape of the exposure-disease relationship. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
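
The core of the simulation is easy to reproduce schematically: dichotomize a noisily measured exposure at a grid of cutoffs and watch the estimated odds ratio move. The sketch below uses a single assumed exposure-outcome curve, one measurement-error variance, and statsmodels for the per-cutoff logistic fits.

```python
# Dichotomize a mismeasured continuous exposure at a grid of cutoffs and track
# how the estimated odds ratio changes. A single illustrative scenario.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x_true = rng.normal(0.0, 1.0, n)                     # true exposure
x_obs = x_true + rng.normal(0.0, 0.5, n)             # classical measurement error
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.7 * x_true)))     # assumed exposure-outcome curve
y = rng.binomial(1, p)

m, s = x_obs.mean(), x_obs.std()
for c in np.linspace(m - 3 * s, m + 3 * s, 61)[::6]: # thinned grid of cutoffs
    x_cat = (x_obs > c).astype(float)
    n1, n0 = int(x_cat.sum()), int(n - x_cat.sum())
    s1, s0 = int(y[x_cat == 1].sum()), int(y[x_cat == 0].sum())
    if min(s1, n1 - s1, s0, n0 - s0) <= 0:           # skip degenerate ("small numbers") cutoffs
        continue
    fit = sm.Logit(y, sm.add_constant(x_cat)).fit(disp=0)
    print(f"cutoff {c:+.2f}: OR = {np.exp(fit.params[1]):.2f}")
```
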
Article
Can Public Health Risk Assessment Using Risk Matrices Be Misleading?
by Shabnam Vatanpour, Steve E. Hrudey and Irina Dinu
Int. J. Environ. Res. Public Health 2015, 12(8), 9575-9588; https://doi.org/10.3390/ijerph120809575 - 14 Aug 2015
Cited by 30 | Viewed by 21528
Abstract
The risk assessment matrix is a widely accepted, semi-quantitative tool for assessing risks and setting priorities in risk management. Although the method can be useful to promote discussion to distinguish high risks from low risks, a published critique described a problem when the frequency and severity of risks are negatively correlated. A theoretical analysis showed that risk predictions could be misleading. We evaluated a practical public health example because it provided experiential risk data that allowed us to assess the practical implications of the published concern that risk matrices would make predictions that are worse than random. We explored this predicted problem by constructing a risk assessment matrix using a public health risk scenario, tainted blood transfusion infection risk, that provides negative correlation between harm frequency and severity. We estimated the risk from the experiential data and compared these estimates with those provided by the risk assessment matrix. Although we validated the theoretical concern, for these authentic experiential data, the practical scope of the problem was limited. The risk matrix has been widely used in risk assessment. This method should not be abandoned wholesale, but users must address the source of the problem, apply the risk matrix with a full understanding of this problem, and use matrix predictions to inform, but not drive, decision-making. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
Model Averaging for Improving Inference from Causal Diagrams
by Ghassan B. Hamra, Jay S. Kaufman and Anjel Vahratian
Int. J. Environ. Res. Public Health 2015, 12(8), 9391-9407; https://doi.org/10.3390/ijerph120809391 - 11 Aug 2015
Cited by 5 | Viewed by 8053
Abstract
Model selection is an integral, yet contentious, component of epidemiologic research. Unfortunately, there remains no consensus on how to identify a single, best model among multiple candidate models. Researchers may be prone to selecting the model that best supports their a priori, preferred result; a phenomenon referred to as “wish bias”. Directed acyclic graphs (DAGs), based on background causal and substantive knowledge, are a useful tool for specifying a subset of adjustment variables to obtain a causal effect estimate. In many cases, however, a DAG will support multiple, sufficient or minimally-sufficient adjustment sets. Even though all of these may theoretically produce unbiased effect estimates they may, in practice, yield somewhat distinct values, and the need to select between these models once again makes the research enterprise vulnerable to wish bias. In this work, we suggest combining adjustment sets with model averaging techniques to obtain causal estimates based on multiple, theoretically-unbiased models. We use three techniques for averaging the results among multiple candidate models: information criteria weighting, inverse variance weighting, and bootstrapping. We illustrate these approaches with an example from the Pregnancy, Infection, and Nutrition (PIN) study. We show that each averaging technique returns similar, model averaged causal estimates. An a priori strategy of model averaging provides a means of integrating uncertainty in selection among candidate, causal models, while also avoiding the temptation to report the most attractive estimate from a suite of equally valid alternatives. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
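
The inverse-variance flavor of the averaging is simple to illustrate: fit one regression per sufficient adjustment set implied by the DAG, then combine the exposure coefficients with weights proportional to their inverse variances. The sketch below uses simulated data from a DAG in which {z1}, {z2}, and {z1, z2} are all sufficient sets; it is not the PIN analysis.

```python
# Average an exposure effect across several adjustment sets (all sufficient
# under the assumed DAG), weighting by inverse variance. Simulated data.
# DAG: z1 -> z2 -> x, z1 -> y, x -> y; the back-door path x <- z2 <- z1 -> y
# is blocked by adjusting for {z1}, {z2}, or {z1, z2}.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 1000
z1 = rng.normal(size=n)
z2 = 0.7 * z1 + rng.normal(size=n)
x = 0.8 * z2 + rng.normal(size=n)                    # exposure
y = 0.5 * x + 0.6 * z1 + rng.normal(size=n)          # outcome; true effect = 0.5

adjustment_sets = {"{z1}": [z1], "{z2}": [z2], "{z1, z2}": [z1, z2]}

betas, variances = [], []
for name, zs in adjustment_sets.items():
    X = sm.add_constant(np.column_stack([x] + zs))
    fit = sm.OLS(y, X).fit()
    betas.append(fit.params[1])                      # coefficient on the exposure
    variances.append(fit.bse[1] ** 2)
    print(f"adjustment set {name}: beta = {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")

w = 1.0 / np.array(variances)
print(f"inverse-variance-weighted average = {np.sum(w * np.array(betas)) / np.sum(w):.3f}")
```
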
Article
Gateway Effects: Why the Cited Evidence Does Not Support Their Existence for Low-Risk Tobacco Products (and What Evidence Would)
by Carl V. Phillips
Int. J. Environ. Res. Public Health 2015, 12(5), 5439-5464; https://doi.org/10.3390/ijerph120505439 - 21 May 2015
Cited by 21 | Viewed by 18764
Abstract
It is often claimed that low-risk drugs still create harm because of “gateway effects”, in which they cause the use of a high-risk alternative. Such claims are popular among opponents of tobacco harm reduction, claiming that low-risk tobacco products (e.g., e-cigarettes, smokeless tobacco) cause people to start smoking, sometimes backed by empirical studies that ostensibly support the claim. However, these studies consistently ignore the obvious alternative causal pathways, particularly that observed associations might represent causation in the opposite direction (smoking causes people to seek low-risk alternatives) or confounding (the same individual characteristics increase the chance of using any tobacco product). Due to these complications, any useful analysis must deal with simultaneity and confounding by common cause. In practice, existing analyses seem almost as if they were designed to provide teaching examples about drawing simplistic and unsupported causal conclusions from observed associations. The present analysis examines what evidence and research strategies would be needed to empirically detect such a gateway effect, if there were one, explaining key methodological concepts including causation and confounding, examining the logic of the claim, identifying potentially useful data, and debunking common fallacies on both sides of the argument, as well as presenting an extended example of proper empirical testing. The analysis demonstrates that none of the empirical studies to date that are purported to show a gateway effect from tobacco harm reduction products actually does so. The observations and approaches can be generalized to other cases where observed association of individual characteristics in cross-sectional data could result from any of several causal relationships. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
Article
Resampling Methods Improve the Predictive Power of Modeling in Class-Imbalanced Datasets
by Paul H. Lee
Int. J. Environ. Res. Public Health 2014, 11(9), 9776-9789; https://doi.org/10.3390/ijerph110909776 - 18 Sep 2014
Cited by 44 | Viewed by 7000
Abstract
In the medical field, many outcome variables are dichotomized, and the two possible values of a dichotomized variable are referred to as classes. A dichotomized dataset is class-imbalanced if it consists mostly of one class, and performance of common classification models on this type of dataset tends to be suboptimal. To tackle such a problem, resampling methods, including oversampling and undersampling can be used. This paper aims at illustrating the effect of resampling methods using the National Health and Nutrition Examination Survey (NHANES) wave 2009–2010 dataset. A total of 4677 participants aged ≥20 without self-reported diabetes and with valid blood test results were analyzed. The Classification and Regression Tree (CART) procedure was used to build a classification model on undiagnosed diabetes. A participant demonstrated evidence of diabetes according to WHO diabetes criteria. Exposure variables included demographics and socio-economic status. CART models were fitted using a randomly selected 70% of the data (training dataset), and area under the receiver operating characteristic curve (AUC) was computed using the remaining 30% of the sample for evaluation (testing dataset). CART models were fitted using the training dataset, the oversampled training dataset, the weighted training dataset, and the undersampled training dataset. In addition, resampling case-to-control ratio of 1:1, 1:2, and 1:4 were examined. Resampling methods on the performance of other extensions of CART (random forests and generalized boosted trees) were also examined. CARTs fitted on the oversampled (AUC = 0.70) and undersampled training data (AUC = 0.74) yielded a better classification power than that on the training data (AUC = 0.65). Resampling could also improve the classification power of random forests and generalized boosted trees. To conclude, applying resampling methods in a class-imbalanced dataset improved the classification power of CART, random forests, and generalized boosted trees. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
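
The oversampling step takes only a few lines with scikit-learn. The sketch below builds an imbalanced synthetic dataset, fits a classification tree on the original and on the minority-oversampled training data, and compares test-set AUC; it mirrors the structure of the analysis (70/30 split, CART, AUC) but uses simulated data rather than NHANES.

```python
# Compare a classification tree trained on imbalanced data with one trained on
# minority-oversampled data, evaluating AUC on a held-out 30% test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)                        # ~5% positive class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

def tree_auc(Xt, yt):
    tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(Xt, yt)
    return roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1])

# oversample the minority class in the training data to a 1:1 ratio
minority = y_tr == 1
idx = resample(np.where(minority)[0], replace=True,
               n_samples=int((~minority).sum()), random_state=0)
X_over = np.vstack([X_tr[~minority], X_tr[idx]])
y_over = np.concatenate([y_tr[~minority], y_tr[idx]])

print(f"AUC, original training data:    {tree_auc(X_tr, y_tr):.3f}")
print(f"AUC, oversampled training data: {tree_auc(X_over, y_over):.3f}")
```
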
Review

Review
Spatial and Spatio-Temporal Models for Modeling Epidemiological Data with Excess Zeros
by Ali Arab
Int. J. Environ. Res. Public Health 2015, 12(9), 10536-10548; https://doi.org/10.3390/ijerph120910536 - 28 Aug 2015
Cited by 49 | Viewed by 8071
Abstract
Epidemiological data often include excess zeros. This is particularly the case for data on rare conditions, diseases that are not common in specific areas or specific time periods, and conditions and diseases that are hard to detect or on the rise. In this paper, we provide a review of methods for modeling data with excess zeros with focus on count data, namely hurdle and zero-inflated models, and discuss extensions of these models to data with spatial and spatio-temporal dependence structures. We consider a Bayesian hierarchical framework to implement spatial and spatio-temporal models for data with excess zeros. We further review current implementation methods and computational tools. Finally, we provide a case study on five-year counts of confirmed cases of Lyme disease in Illinois at the county level. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
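
For the non-spatial starting point of this review, statsmodels fits zero-inflated count models directly. The sketch below fits a zero-inflated Poisson to simulated counts with excess structural zeros; the spatial and spatio-temporal extensions discussed in the paper require a hierarchical Bayesian formulation and are not shown.

```python
# Fit a zero-inflated Poisson model to simulated counts with excess zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
lam = np.exp(0.4 + 0.6 * x)                    # Poisson mean for the count process
structural_zero = rng.binomial(1, 0.3, n)      # 30% excess (structural) zeros
y = np.where(structural_zero == 1, 0, rng.poisson(lam))

exog = sm.add_constant(x)                      # count-model covariates
exog_infl = np.ones((n, 1))                    # intercept-only inflation part
fit = ZeroInflatedPoisson(y, exog, exog_infl=exog_infl, inflation="logit").fit(
    maxiter=200, disp=0)
print(fit.summary())
```
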
Other

Concept Paper
Effects of Non-Differential Exposure Misclassification on False Conclusions in Hypothesis-Generating Studies
by Igor Burstyn, Yunwen Yang and A. Robert Schnatter
Int. J. Environ. Res. Public Health 2014, 11(10), 10951-10966; https://doi.org/10.3390/ijerph111010951 - 21 Oct 2014
Cited by 20 | Viewed by 6239
Abstract
Despite the theoretical success of obviating the need for hypothesis-generating studies, they live on in epidemiological practice. Cole asserted that “… there is boundless number of hypotheses that could be generated, nearly all of them wrong” and urged us to focus on evaluating “credibility of hypothesis”. Adopting a Bayesian approach, we put this elegant logic into quantitative terms at the study planning stage for studies where the prior belief in the null hypothesis is high (i.e., “hypothesis-generating” studies). We consider not only type I and II errors (as is customary) but also the probabilities of false positive and negative results, taking into account typical imperfections in the data. We concentrate on a common source of imperfection in the data: non-differential misclassification of binary exposure classifier. In context of an unmatched case-control study, we demonstrate—both theoretically and via simulations—that although non-differential exposure misclassification is expected to attenuate real effect estimates, leading to the loss of ability to detect true effects, there is also a concurrent increase in false positives. Unfortunately, most investigators interpret their findings from such work as being biased towards the null rather than considering that they are no less likely to be false signals. The likelihood of false positives dwarfed the false negative rate under a wide range of studied settings. We suggest that instead of investing energy into understanding credibility of dubious hypotheses, applied disciplines such as epidemiology, should instead focus attention on understanding consequences of pursuing specific hypotheses, while accounting for the probability that the observed “statistically significant” association may be qualitatively spurious. Full article
(This article belongs to the Special Issue Methodological Innovations and Reflections-1)
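
The arithmetic behind this argument can be illustrated with a small simulation: when most tested hypotheses are null, non-differential exposure misclassification erodes power for the few real effects while leaving the type I error rate roughly intact, so a larger share of "significant" findings are false signals. The sketch below does this for an unmatched case-control design with entirely made-up parameters.

```python
# Among "significant" results from many hypothesis-generating case-control
# studies, what fraction are false signals, with and without non-differential
# exposure misclassification? All parameters are illustrative.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(8)
n_studies, n_per_group = 2000, 150
prior_true = 0.05                     # most candidate hypotheses are null
p_exp, true_or = 0.2, 2.0             # control exposure prevalence; real effect size
se, sp = 0.8, 0.9                     # non-differential sensitivity / specificity

def run(misclassify):
    sig, sig_false = 0, 0
    for _ in range(n_studies):
        is_real = rng.random() < prior_true
        odds = (p_exp / (1 - p_exp)) * (true_or if is_real else 1.0)
        p_case = odds / (1 + odds)                      # exposure prevalence in cases
        e_case = rng.random(n_per_group) < p_case
        e_ctrl = rng.random(n_per_group) < p_exp
        if misclassify:                                 # same error model in both groups
            e_case = np.where(e_case, rng.random(n_per_group) < se,
                              rng.random(n_per_group) < 1 - sp)
            e_ctrl = np.where(e_ctrl, rng.random(n_per_group) < se,
                              rng.random(n_per_group) < 1 - sp)
        table = [[e_case.sum(), n_per_group - e_case.sum()],
                 [e_ctrl.sum(), n_per_group - e_ctrl.sum()]]
        if chi2_contingency(table)[1] < 0.05:
            sig += 1
            sig_false += not is_real
    return sig, sig_false

for label, flag in [("perfect classification ", False), ("misclassified exposure", True)]:
    sig, false_pos = run(flag)
    print(f"{label}: {sig} significant results, "
          f"{false_pos / max(sig, 1):.0%} of them from null hypotheses")
```
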