Similar Articles
 20 similar articles found
1.
Pooling the relative risk (RR) across studies investigating rare events, for example, adverse events, via meta-analytical methods still presents a challenge to researchers. The main reason for this is the high probability of observing no events in the treatment group, the control group, or both, resulting in an undefined log RR (the basis of standard meta-analysis). Other technical challenges ensue, for example, the violation of normality assumptions, or bias due to exclusion of studies and application of continuity corrections, leading to poor performance of standard approaches. In the present simulation study, we compared three recently proposed alternative models (random-effects [RE] Poisson regression, RE zero-inflated Poisson [ZIP] regression, binomial regression) to the standard methods in conjunction with different continuity corrections and to different versions of beta-binomial regression. Based on our investigation of the models' performance in 162 different simulation settings informed by meta-analyses from the Cochrane database and distinguished by different underlying true effects, degrees of between-study heterogeneity, numbers of primary studies, group size ratios, and baseline risks, we recommend the use of the RE Poisson regression model. The beta-binomial model recommended by Kuss (2015) also performed well. The ZIP models also performed decently but had considerable convergence issues. We stress that these recommendations are only valid for meta-analyses with larger numbers of primary studies. All models are applied to data from two Cochrane reviews to illustrate differences between and issues of the models. Limitations as well as practical implications and recommendations are discussed; a flowchart summarizing recommendations is provided.
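The zero-cell problem described above can be made concrete with a short sketch of the standard approach being criticized: inverse-variance pooling of the log RR with a 0.5 continuity correction. This is a minimal illustration on invented data, not the recommended RE Poisson model (which needs a mixed-model fitter).

```python
import numpy as np

def pooled_log_rr(tables, cc=0.5):
    """Inverse-variance pooled log relative risk; each table is
    (events_t, n_t, events_c, n_c). A continuity correction cc is added
    to every cell of a study's 2x2 table when any event count is zero,
    the practice whose bias motivates the alternative models above."""
    thetas, weights = [], []
    for a, nt, b, nc in tables:
        if a == 0 or b == 0:          # log RR undefined without correction
            # adding cc to all four cells raises each group size by 2*cc
            a, b, nt, nc = a + cc, b + cc, nt + 2 * cc, nc + 2 * cc
        theta = np.log(a / nt) - np.log(b / nc)
        var = 1 / a - 1 / nt + 1 / b - 1 / nc
        thetas.append(theta)
        weights.append(1 / var)
    w = np.array(weights)
    return float(np.sum(w * thetas) / w.sum())
```

With a double-zero study the corrected table still yields a finite (if heavily biased) estimate, which is exactly why standard methods can be applied, and why they perform poorly.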

2.
Quantitative literature reviews such as meta-analysis are becoming common in evolutionary biology but may be strongly affected by publication biases. Using fail-safe numbers is a quick way to estimate whether publication bias is likely to be a problem for a specific study. However, previously suggested fail-safe calculations are unweighted and are not based on the framework in which most meta-analyses are performed. A general, weighted fail-safe calculation, grounded in the meta-analysis framework, applicable to both fixed- and random-effects models, is proposed. Recent meta-analyses published in Evolution are used for illustration.
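A fixed-effect version of such a weighted fail-safe number fits in a few lines. The sketch below asks how many unpublished null studies of average weight would render the pooled Z nonsignificant; it is an illustration in the spirit of the proposal, not the paper's exact estimator.

```python
import numpy as np
from scipy.stats import norm

def weighted_failsafe_n(effects, variances, alpha=0.05):
    """Weighted fail-safe number under a fixed-effect model.
    With w_i = 1/v_i, the pooled test statistic is
    Z = sum(w*e) / sqrt(sum(w)); adding n null studies of average
    weight changes the denominator to sqrt(sum(w) + n*mean(w)).
    Solve |Z| = z_crit for n."""
    w = 1.0 / np.asarray(variances, float)
    s = np.sum(w * np.asarray(effects, float))
    z = norm.ppf(1 - alpha / 2)
    n_extra = (s**2 / z**2 - w.sum()) / w.mean()
    return max(0.0, np.ceil(n_extra))
```

A large fail-safe number relative to the number of observed studies suggests the pooled result is robust to plausible publication bias; zero means the result is already nonsignificant.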

3.
Likelihood-based methods of inference of population parameters from genetic data in structured populations have been implemented but remain little tested in large networks of populations. In this work, a previous software implementation of inference in linear habitats is extended to two-dimensional habitats, and the coverage properties of confidence intervals are analyzed in both cases. Both standard likelihood and an efficient approximation are considered. The effects of misspecification of mutation model and dispersal distribution, and of spatial binning of samples, are considered. In the absence of model misspecification, the estimators have low bias, low mean square error, and the coverage properties of confidence intervals are consistent with theoretical expectations. Inferences of dispersal parameters and of the mutation rate are sensitive to misspecification or to approximations inherent to the coalescent algorithms used. In particular, coalescent approximations are not appropriate to infer the shape of the dispersal distribution. However, inferences of the neighborhood parameter (or of the product of population density and mean square dispersal rate) are generally robust with respect to complicating factors, such as misspecification of the mutation process and of the shape of the dispersal distribution, and with respect to spatial binning of samples. Likelihood inferences appear feasible in moderately sized networks of populations (up to 400 populations in this work), and they are more efficient than a previous moment-based spatial regression method in realistic conditions.

4.
This paper focuses on inferences about the overall treatment effect in meta-analysis with normally distributed responses based on the concepts of generalized inference. A refined generalized pivotal quantity based on the t distribution is presented, and a simulation study shows that it can provide confidence intervals with satisfactory coverage probabilities and perform hypothesis testing with satisfactory type-I error control at very small sample sizes.

5.
We present methods for causally interpretable meta-analyses that combine information from multiple randomized trials to draw causal inferences for a target population of substantive interest. We consider identifiability conditions, derive implications of the conditions for the law of the observed data, and obtain identification results for transporting causal inferences from a collection of independent randomized trials to a new target population in which experimental data may not be available. We propose an estimator for the potential outcome mean in the target population under each treatment studied in the trials. The estimator uses covariate, treatment, and outcome data from the collection of trials, but only covariate data from the target population sample. We show that it is doubly robust in the sense that it is consistent and asymptotically normal when at least one of the models it relies on is correctly specified. We study the finite sample properties of the estimator in simulation studies and demonstrate its implementation using data from a multicenter randomized trial.
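A stripped-down, outcome-model-only version of transporting trial results to a target sample can be sketched as follows. The paper's estimator is doubly robust and also models trial participation; the data-generating values, sample sizes, and use of scikit-learn here are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Simulated trial: covariate x, randomized arm a, outcome y
n = 2000
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)
y = 1.0 + 0.5 * x + 1.5 * a + 0.3 * a * x + rng.normal(size=n)

# Target sample: a different covariate distribution, covariates only
x_target = rng.normal(loc=1.0, size=1000)

# Outcome-model ("g-formula") transport: fit E[Y | X, A=a] within each
# trial arm, then average the predictions over the target covariates.
means = {}
for arm in (0, 1):
    model = LinearRegression().fit(x[a == arm].reshape(-1, 1), y[a == arm])
    means[arm] = model.predict(x_target.reshape(-1, 1)).mean()

transported_ate = means[1] - means[0]   # effect in the target population
```

Because the treatment-covariate interaction is positive and the target population has larger covariate values, the transported effect exceeds the in-trial average effect, which is the whole point of transporting rather than naively pooling.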

6.
Hierarchical models are recommended for meta-analyzing diagnostic test accuracy (DTA) studies. The bivariate random-effects model is currently widely used to synthesize a pair of test sensitivity and specificity using logit transformation across studies. This model assumes a bivariate normal distribution for the random effects. However, this assumption is restrictive and can be violated. When the assumption fails, inferences could be misleading. In this paper, we extended the current bivariate random-effects model by assuming a flexible bivariate skew-normal distribution for the random effects in order to robustly model logit sensitivities and logit specificities. The marginal distribution of the proposed model is analytically derived so that parameter estimation can be performed using standard likelihood methods. A weighted-average method is adopted to estimate the overall logit-transformed sensitivity and specificity. An extensive simulation study is carried out to investigate the performance of the proposed model compared to other standard models. Overall, the proposed model performs better in terms of confidence interval width of the average logit-transformed sensitivity and specificity compared to the standard bivariate linear mixed model and bivariate generalized linear mixed model. Simulations have also shown that the proposed model performed better than the well-established bivariate linear mixed model in terms of bias and was comparable with regard to the root mean squared error (RMSE) of the between-study (co)variances. The proposed method is also illustrated using data from a published meta-analysis.

7.
Meta-analysis of binary data is challenging when the event under investigation is rare, and standard models for random-effects meta-analysis perform poorly in such settings. In this simulation study, we investigate the performance of different random-effects meta-analysis models in terms of point and interval estimation of the pooled log odds ratio in rare events meta-analysis. First and foremost, we evaluate the performance of a hypergeometric-normal model from the family of generalized linear mixed models (GLMMs), which has been recommended, but has not yet been thoroughly investigated for rare events meta-analysis. Performance of this model is compared to performance of the beta-binomial model, which yielded favorable results in previous simulation studies, and to the performance of models that are frequently used in rare events meta-analysis, such as the inverse variance model and the Mantel–Haenszel method. In addition to considering a large number of simulation parameters inspired by real-world data settings, we study the comparative performance of the meta-analytic models under two different data-generating models (DGMs) that have been used in past simulation studies. The results of this study show that the hypergeometric-normal GLMM is useful for meta-analysis of rare events when moderate to large heterogeneity is present. In addition, our study reveals important insights with regard to the performance of the beta-binomial model under different DGMs from the binomial-normal family. In particular, we demonstrate that although misalignment of the beta-binomial model with the DGM affects its performance, it shows more robustness to the DGM than its competitors.
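Of the frequently used methods named above, the Mantel–Haenszel pooled odds ratio is simple enough to sketch directly (uncorrected, as it is typically applied to sparse data; the tables are invented):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio for a list of 2x2 tables
    [[a, b], [c, d]] (rows: treatment/control; columns: event/no event).
    OR_MH = sum(a*d/n) / sum(b*c/n); individual zero cells are tolerated
    without a continuity correction, one reason the method is popular
    for rare events."""
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```

A study with a zero event count simply contributes nothing to one of the sums, rather than producing an undefined study-level log odds ratio.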

8.
Though stochastic models are widely used to describe single ion channel behaviour, statistical inference based on them has received little consideration. This paper describes techniques of statistical inference, in particular likelihood methods, suitable for Markov models incorporating limited time resolution by means of a discrete detection limit. To simplify the analysis, attention is restricted to two-state models, although the methods have more general applicability. Non-uniqueness of the mean open-time and mean closed-time estimators obtained by moment methods based on single exponential approximations to the apparent open-time and apparent closed-time distributions has been reported. The present study clarifies and extends this previous work by proving that, for such approximations, the likelihood equations as well as the moment equations (usually) have multiple solutions. Such non-uniqueness corresponds to non-identifiability of the statistical model for the apparent quantities. By contrast, higher-order approximations yield theoretically identifiable models. Likelihood-based estimation procedures are developed for both single exponential and bi-exponential approximations. The methods and results are illustrated by numerical examples based on literature and simulated data, with consideration given to empirical distributions and model control, likelihood plots, and point estimation and confidence regions.
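For a single-exponential approximation, the effect of a detection limit on dwell-time estimation can be illustrated with a toy simulation: by memorylessness of the exponential, the left-truncated maximum-likelihood estimate has a closed form. This is a deliberately simplified one-distribution sketch, not the paper's full two-state likelihood machinery.

```python
import numpy as np

rng = np.random.default_rng(1)
tau_true, d = 2.0, 0.5            # true mean open time and detection limit
x = rng.exponential(tau_true, size=200_000)
x_obs = x[x > d]                  # sojourns shorter than d go undetected

# Naive estimate ignores the truncation and is biased upward;
# by memorylessness, (X - d | X > d) is Exp(tau), so the truncated
# MLE is simply mean(observed) - d.
tau_naive = x_obs.mean()
tau_hat = x_obs.mean() - d
```

For an exponential the bias of the naive estimator is exactly the detection limit d, so the correction is trivial; the identifiability problems the abstract describes arise only once apparent open and closed times are coupled through missed events.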

9.
We present methods for solving for, and making statistical inferences about, marginal attack rates based on observed death rates for contemporaneous mortality factors. The general method of solution involves solving a system of nonlinear equations which depend in part on competition coefficients that express the outcome when more than one agent attacks the same host individual. For two factors, we present a detailed analysis of the effect of varying this competition coefficient. Statistical inferences are illustrated using standard large sample approximations (the delta method) and the bootstrap, which is a resampling technique. We also extend the results to allow inferences for k-values.
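For a single mortality factor, the delta-method step can be sketched for the k-value, k = -log10(1 - d), where d is the mortality proportion. The numbers below are invented, and a parametric bootstrap plays the role of the resampling check described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Observed deaths out of n hosts: binomial mortality proportion d_hat
n, deaths = 500, 180
d_hat = deaths / n
var_d = d_hat * (1 - d_hat) / n

# k-value (killing power) and its delta-method variance:
# k = -log10(1 - d), dk/dd = 1 / ((1 - d) * ln 10),
# so var(k) ~= (dk/dd)^2 * var(d)
k_hat = -np.log10(1 - d_hat)
var_k_delta = var_d / ((1 - d_hat) * np.log(10)) ** 2

# Parametric bootstrap check of the same variance
boot_d = rng.binomial(n, d_hat, size=20_000) / n
var_k_boot = np.var(-np.log10(1 - boot_d))
```

With n in the hundreds the two variance estimates agree closely, which is the usual justification for preferring the cheap delta-method interval in routine use.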

10.
We consider the problem of meta-analyzing two-group studies that report the median of the outcome. Often, these studies are excluded from meta-analysis because there are no well-established statistical methods to pool the difference of medians. To include these studies in meta-analysis, several authors have recently proposed methods to estimate the sample mean and standard deviation from the median, sample size, and several commonly reported measures of spread. Researchers frequently apply these methods to estimate the difference of means and its variance for each primary study and pool the difference of means using inverse variance weighting. In this work, we develop several methods to directly meta-analyze the difference of medians. We conduct a simulation study evaluating the performance of the proposed median-based methods and the competing transformation-based methods. The simulation results show that the median-based methods outperform the transformation-based methods when meta-analyzing studies that report the median of the outcome, especially when the outcome is skewed. Moreover, we illustrate the various methods on a real-life data set.
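One widely used transformation-based route, against which the median-based methods are benchmarked, estimates the mean and SD from the median and IQR (in the style of Wan et al., 2014); the constants below belong to that approximation, not to this paper:

```python
from scipy.stats import norm

def mean_sd_from_median_iqr(q1, median, q3, n):
    """Approximate the sample mean and SD from the first quartile,
    median, third quartile, and sample size. The SD denominator is
    roughly 1.35 (the normal IQR) for large n, with a small-sample
    adjustment; both formulas implicitly assume near-normal data,
    which is why they degrade for skewed outcomes."""
    mean = (q1 + median + q3) / 3.0
    eta = 2.0 * norm.ppf((0.75 * n - 0.125) / (n + 0.25))
    sd = (q3 - q1) / eta
    return mean, sd
```

Under skew, the recovered mean and SD are systematically off, which propagates into the pooled difference of means; pooling the medians directly avoids the transformation entirely.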

11.
Yuan Y, Little RJ. Biometrics 2009;65(2):487-496
Consider a meta-analysis of studies with varying proportions of patient-level missing data, and assume that each primary study has made certain missing data adjustments so that the reported estimates of treatment effect size and variance are valid. These estimates of treatment effects can be combined across studies by standard meta-analytic methods, employing a random-effects model to account for heterogeneity across studies. However, we note that a meta-analysis based on the standard random-effects model will lead to biased estimates when the attrition rates of primary studies depend on the size of the underlying study-level treatment effect. Perhaps ignorable within each study, these types of missing data are in fact not ignorable in a meta-analysis. We propose three methods to correct the bias resulting from such missing data in a meta-analysis: reweighting the DerSimonian–Laird estimate by the completion rate; incorporating the completion rate into a Bayesian random-effects model; and inference based on a Bayesian shared-parameter model that includes the completion rate. We illustrate these methods through a meta-analysis of 16 published randomized trials that examined combined pharmacotherapy and psychological treatment for depression.
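The first of the three corrections, reweighting the DerSimonian–Laird estimate by the completion rate, can be sketched compactly on top of the standard DL estimator. The exact weighting scheme below is a plausible reading rather than the paper's formula, and the two Bayesian variants require full posterior computation and are omitted.

```python
import numpy as np

def dersimonian_laird(y, v, completion=None):
    """DerSimonian-Laird random-effects pooling of effects y with
    within-study variances v. If per-study completion rates are
    supplied, the final random-effects weights are multiplied by them,
    down-weighting high-attrition studies (a simple stand-in for the
    reweighting correction discussed above)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / w.sum()                 # fixed-effect mean
    q = np.sum(w * (y - mu_fe) ** 2)                # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - np.sum(w**2) / w.sum()))  # method-of-moments tau^2
    w_re = 1.0 / (v + tau2)
    if completion is not None:
        w_re = w_re * np.asarray(completion, float)
    return np.sum(w_re * y) / w_re.sum(), tau2
```

Down-weighting a study pulls the pooled estimate toward the better-completed studies, which is the direction of the bias correction when attrition tracks effect size.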

12.
Recent work on Bayesian inference of disease mapping models discusses the advantages of the fully Bayesian (FB) approach over its empirical Bayes (EB) counterpart, suggesting that FB posterior standard deviations of small-area relative risks are more reflective of the uncertainty associated with the relative risk estimation than counterparts based on EB inference, since the latter fail to account for the variability in the estimation of the hyperparameters. In this article, an EB bootstrap methodology for relative risk inference with accurate parametric EB confidence intervals is developed, illustrated, and contrasted with the hyperprior Bayes. We elucidate the close connection between the EB bootstrap methodology and hyperprior Bayes, present a comparison between FB inference via hybrid Markov chain Monte Carlo and EB inference via penalized quasi-likelihood, and illustrate the ability of parametric bootstrap procedures to adjust for the undercoverage in the "naive" EB interval estimates. We discuss the important roles that FB and EB methods play in risk inference, map interpretation, and real-life applications. The work is motivated by a recent analysis of small-area infant mortality rates in the province of British Columbia in Canada.
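The parametric backbone of such EB analyses is Poisson-gamma smoothing of small-area relative risks. The sketch below uses crude moment matching for the hyperparameters (a stand-in for the penalized quasi-likelihood fit discussed above) on simulated data; the parametric bootstrap correction for hyperparameter uncertainty is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated small areas: expected counts E, true relative risks, observed y
E = rng.uniform(5, 50, size=100)
theta = rng.gamma(shape=10, scale=0.1, size=100)   # true RRs, mean 1
y = rng.poisson(E * theta)

# Poisson-gamma EB: theta_i ~ Gamma(a, b) gives posterior mean
# (y_i + a) / (E_i + b). Hyperparameters by moment matching on the
# raw SMRs (subtracting the Poisson noise share from their variance).
smr = y / E
m = np.average(smr, weights=E)                     # = sum(y) / sum(E)
v = max(np.average((smr - m) ** 2, weights=E) - m / E.mean(), 1e-6)
b = m / v
a = m * b
rr_eb = (y + a) / (E + b)                          # shrunk toward m
```

The naive EB intervals built from these point estimates are the ones that undercover, since a and b are treated as known; the bootstrap methodology in the abstract exists precisely to repair that.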

13.
Lu Xia, Bin Nan, Yi Li. Biometrics 2023;79(1):344-357
Modeling and drawing inference on the joint associations between single-nucleotide polymorphisms and a disease has sparked interest in genome-wide association studies. In the motivating Boston Lung Cancer Survival Cohort (BLCSC) data, the presence of a large number of single nucleotide polymorphisms of interest, though smaller than the sample size, challenges inference on their joint associations with the disease outcome. In similar settings, we find that neither the debiased lasso approach (van de Geer et al., 2014), which assumes sparsity on the inverse information matrix, nor the standard maximum likelihood method can yield confidence intervals with satisfactory coverage probabilities for generalized linear models. Under this “large n, diverging p” scenario, we propose an alternative debiased lasso approach by directly inverting the Hessian matrix without imposing the matrix sparsity assumption, which further reduces bias compared to the original debiased lasso and ensures valid confidence intervals with nominal coverage probabilities. We establish the asymptotic distributions of any linear combinations of the parameter estimates, which lays the theoretical ground for drawing inference. Simulations show that the proposed refined debiased estimating method performs well in removing bias and yields honest confidence interval coverage. We use the proposed method to analyze the aforementioned BLCSC data, a large-scale hospital-based epidemiology cohort study investigating the joint effects of genetic variants on lung cancer risks.
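The debiasing step, a one-step Newton-type correction using the directly inverted Hessian rather than a sparsity-constrained estimate of it, can be sketched for logistic regression. The data, penalty level, and use of scikit-learn's l1-penalized fit are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, p = 2000, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [1.0, -0.8, 0.5]
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta)))

# L1-penalized fit (no intercept, to keep the sketch minimal)
fit = LogisticRegression(penalty="l1", C=0.05, solver="liblinear",
                         fit_intercept=False).fit(X, y)
b_lasso = fit.coef_.ravel()

# One-step debiasing: b_de = b_lasso - H^{-1} grad, with the gradient
# and Hessian of the average negative log-likelihood at b_lasso.
# The Hessian is inverted directly (feasible since p < n), with no
# sparsity assumption on the inverse information matrix.
mu = 1 / (1 + np.exp(-X @ b_lasso))
grad = X.T @ (mu - y) / n
H = (X * (mu * (1 - mu))[:, None]).T @ X / n
b_debiased = b_lasso - np.linalg.solve(H, grad)
```

The correction systematically undoes the lasso's shrinkage toward zero, which is what restores (asymptotically) nominal coverage for Wald-type intervals built on the debiased estimates.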

14.
State space methods have proven indispensable in neural data analysis. However, common methods for performing inference in state-space models with non-Gaussian observations rely on certain approximations which are not always accurate. Here we review direct optimization methods that avoid these approximations, but that nonetheless retain the computational efficiency of the approximate methods. We discuss a variety of examples, applying these direct optimization techniques to problems in spike train smoothing, stimulus decoding, parameter estimation, and inference of synaptic properties. Along the way, we point out connections to some related standard statistical methods, including spline smoothing and isotonic regression. Finally, we note that the computational methods reviewed here do not in fact depend on the state-space setting at all; instead, the key property we are exploiting involves the bandedness of certain matrices. We close by discussing some applications of this more general point of view, including Markov chain Monte Carlo methods for neural decoding and efficient estimation of spatially varying firing rates.
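The bandedness point is easy to demonstrate: the Gaussian smoothing problem min ||y - x||^2 + lam * ||Dx||^2, with D the first-difference matrix, has a tridiagonal normal-equations matrix, so it can be solved in O(T) with a banded Cholesky routine. The dense solve below is only a correctness check; the data are simulated.

```python
import numpy as np
from scipy.linalg import solveh_banded

rng = np.random.default_rng(5)
T, lam = 500, 10.0
y = np.cumsum(rng.normal(size=T)) + rng.normal(size=T)  # noisy random walk

# Normal equations: (I + lam * D'D) x = y, with D'D tridiagonal
D = np.diff(np.eye(T), axis=0)
A = np.eye(T) + lam * D.T @ D

# Upper-banded storage for solveh_banded:
# row 0 holds the superdiagonal, row 1 the main diagonal
ab = np.zeros((2, T))
ab[0, 1:] = np.diag(A, 1)
ab[1, :] = np.diag(A)
x_banded = solveh_banded(ab, y)     # O(T) banded Cholesky solve
x_dense = np.linalg.solve(A, y)     # O(T^3) reference solution
```

In a real state-space smoother one would build the tridiagonal system directly rather than materializing A, which is what makes long recordings tractable.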

15.
Drop-the-losers designs are statistical designs in which the two stages of a trial are separated by a data-based decision. In the first stage, k experimental treatments and a control are administered. During a transition period, the empirically best experimental treatment is selected for continuation into the second phase, along with the control. At the study's end, inference focuses on the comparison of the selected treatment with the control using both stages' data. Traditional methods used to make inferences based on both stages' data can yield tests with higher than advertised levels of significance and confidence intervals with lower than advertised confidence. For normally distributed data, methods are provided to correct these deficiencies, providing confidence intervals with accurate levels of confidence. Drop-the-losers designs are particularly applicable to biopharmaceutical clinical trials where they can allow Phase II and Phase III clinical trials to be conducted under a single protocol with the use of all available data.
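The inflation of naive two-stage estimates by stage-1 selection is easy to exhibit by simulation under the global null; the number of arms and the sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)
k, n1, n2, sims = 4, 100, 100, 5000

# Stage 1: k arms, all with true mean 0; pick the empirically best arm
stage1 = rng.normal(0.0, 1.0, size=(sims, k, n1)).mean(axis=2)
best = stage1.argmax(axis=1)
x1 = stage1[np.arange(sims), best]           # selected arm's stage-1 mean

# Stage 2: fresh data on the selected arm only (unbiased on its own)
x2 = rng.normal(0.0, 1.0, size=sims) / np.sqrt(n2)

# Naive pooled estimate treats both stages as if no selection occurred
naive = (n1 * x1 + n2 * x2) / (n1 + n2)
bias = naive.mean()                          # > 0 despite true mean 0
```

The positive bias comes entirely from the stage-1 data of the selected arm (the expected maximum of k null sample means is positive), which is precisely why naive intervals undercover and corrected methods are needed.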

16.
A unification of models for meta-analysis of diagnostic accuracy studies
Studies of diagnostic accuracy require more sophisticated methods for their meta-analysis than studies of therapeutic interventions. A number of different, and apparently divergent, methods for meta-analysis of diagnostic studies have been proposed, including two alternative approaches that are statistically rigorous and allow for between-study variability: the hierarchical summary receiver operating characteristic (ROC) model (Rutter and Gatsonis, 2001) and bivariate random-effects meta-analysis (van Houwelingen and others, 1993), (van Houwelingen and others, 2002), (Reitsma and others, 2005). We show that these two models are very closely related, and define the circumstances in which they are identical. We discuss the different forms of summary model output suggested by the two approaches, including summary ROC curves, summary points, confidence regions, and prediction regions.

17.
Duval S, Tweedie R. Biometrics 2000;56(2):455-463
We study recently developed nonparametric methods for estimating the number of missing studies that might exist in a meta-analysis and the effect that these studies might have had on its outcome. These are simple rank-based data augmentation techniques, which formalize the use of funnel plots. We show that they provide effective and relatively powerful tests for evaluating the existence of such publication bias. After adjusting for missing studies, we find that the point estimate of the overall effect size is approximately correct and coverage of the effect size confidence intervals is substantially improved, in many cases recovering the nominal confidence levels entirely. We illustrate the trim and fill method on existing meta-analyses of studies in clinical trials and psychometrics.
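For the number of suppressed studies, one of the paper's rank-based estimators (L0) reduces to a signed-rank computation. The sketch below does a single pass with a user-supplied center and omits the iterative trimming and re-centering of the full trim-and-fill algorithm, so treat it as an illustration of the rank machinery rather than the complete method.

```python
import numpy as np

def trim_fill_l0(effects, center=None):
    """L0-style estimate of the number of left-suppressed studies.
    With Tn the sum of ranks of |y_i - center| over studies with
    y_i > center (a signed-rank statistic), the estimator is
    L0 = (4*Tn - n(n+1)) / (2n - 1), zero for a symmetric funnel."""
    y = np.asarray(effects, float)
    mu = y.mean() if center is None else center
    dev = y - mu
    ranks = np.abs(dev).argsort().argsort() + 1   # ranks of |deviation|
    t_n = ranks[dev > 0].sum()
    n = len(y)
    return max(0, int(round((4 * t_n - n * (n + 1)) / (2 * n - 1))))
```

When the extreme absolute deviations are all on the right of the center, Tn exceeds its symmetric expectation n(n+1)/4 and L0 becomes positive; the full algorithm then trims that many studies, re-centers, and repeats before "filling" in mirrored counterparts.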

18.
This paper presents a new approach for confidence interval estimation of the between-study variance in meta-analysis with normally distributed responses based on the concepts of generalized variables. A simulation study shows that the coverage probabilities of the proposed confidence intervals are generally satisfactory. Moreover, the proposed approach can easily provide P-values for hypothesis testing. For meta-analysis of controlled clinical trials or epidemiological studies, within which the responses are normally distributed, the proposed approach is an ideal candidate for making inference about the between-study variance.

19.

Background

Overlapping meta-analyses on the same topic are now very common, and discordant results often occur. To explore why discordant results arise, we examined a common topic for overlapping meta-analyses: vitamin D supplements and fracture.

Methods and Findings

We identified 24 meta-analyses of vitamin D (with or without calcium) and fracture in a PubMed search in October 2013, and analysed a sample of 7 meta-analyses in the highest-ranking general medicine journals. We used the AMSTAR tool to assess the quality of the meta-analyses, and compared their methodologies, analytic techniques and results. Applying the AMSTAR tool suggested the meta-analyses were generally of high quality. Despite this, there were important differences in trial selection, data extraction, and analytical methods that were only apparent after detailed assessment. Twenty-five trials were included in at least one meta-analysis. Four meta-analyses included all eligible trials according to the stated inclusion and exclusion criteria, but the other 3 meta-analyses “missed” between 3 and 8 trials, and 2 meta-analyses included apparently ineligible trials. The relative risks used for individual trials differed between meta-analyses for total fracture in 10 of 15 trials, and for hip fracture in 6 of 12 trials, because of different outcome definitions and analytic approaches. The majority of differences (11/16) led to more favourable estimates of vitamin D efficacy compared to estimates derived from unadjusted intention-to-treat analyses using all randomised participants. The conclusions of the meta-analyses were discordant, ranging from strong statements that vitamin D prevents fractures to equally strong statements that vitamin D without calcium does not prevent fractures.

Conclusions

Substantial differences in trial selection, outcome definition and analytic methods between overlapping meta-analyses led to discordant estimates of the efficacy of vitamin D for fracture prevention. Strategies for conducting and reporting overlapping meta-analyses are required to improve their accuracy and transparency.

20.
S Magnussen. Génome 1992;35(6):931-938
A regression model to predict quantiles of narrow-sense individual and family-mean heritabilities is developed and used to predict confidence intervals either directly or via a generalized beta distribution model. Extensive simulations of balanced sib analysis trials in randomized complete block designs with normally distributed environmental and additive genetic effects confirmed that heritabilities follow a beta distribution even in cases with up to 10% of the data missing at random. The new model is both more accurate and more precise than commonly used alternatives based on "exact" chi-squared distributions and Satterthwaite's approximation to the degrees of freedom. Estimates of the expected heritability and a Taylor approximation of the standard error of the heritability are needed as input to the quantile model. Applications of the presented models for estimating confidence intervals and as an aid in the design of experiments are provided.
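Matching the first two moments of a beta distribution to a heritability estimate and its standard error yields quantile-based intervals directly, echoing the finding above that heritabilities are beta-distributed. The inputs are invented, and plain moment matching is a generic stand-in for the paper's regression-based quantile model.

```python
from scipy.stats import beta

def beta_ci(h2_hat, se, level=0.95):
    """Confidence interval for a heritability estimate on (0, 1) by
    moment-matching a beta distribution: with mean m and variance v,
    a = m*nu and b = (1-m)*nu where nu = m(1-m)/v - 1 (requires the
    SE to be small enough that nu > 0)."""
    m, v = h2_hat, se**2
    nu = m * (1 - m) / v - 1
    a, b = m * nu, (1 - m) * nu
    p = (1 - level) / 2
    return beta.ppf(p, a, b), beta.ppf(1 - p, a, b)
```

Unlike a normal approximation, the beta-based interval respects the (0, 1) support: for a small estimate such as h2 = 0.1 with SE 0.08, the lower limit stays positive where the Wald interval would go negative.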


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号