Similar Documents
20 similar documents found (search time: 359 ms)
1.
Yuan Y, Little RJ. Biometrics 2009, 65(2):487-496
Consider a meta-analysis of studies with varying proportions of patient-level missing data, and assume that each primary study has made certain missing data adjustments so that the reported estimates of treatment effect size and variance are valid. These estimates of treatment effects can be combined across studies by standard meta-analytic methods, employing a random-effects model to account for heterogeneity across studies. However, we note that a meta-analysis based on the standard random-effects model will lead to biased estimates when the attrition rates of primary studies depend on the size of the underlying study-level treatment effect. Perhaps ignorable within each study, these types of missing data are in fact not ignorable in a meta-analysis. We propose three methods to correct the bias resulting from such missing data in a meta-analysis: reweighting the DerSimonian–Laird estimate by the completion rate; incorporating the completion rate into a Bayesian random-effects model; and inference based on a Bayesian shared-parameter model that includes the completion rate. We illustrate these methods through a meta-analysis of 16 published randomized trials that examined combined pharmacotherapy and psychological treatment for depression.
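A minimal sketch of the first correction, assuming the reweighting simply multiplies each study's random-effects weight by its completion rate; the paper's exact scheme may differ in detail, and all data below are illustrative:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Standard DerSimonian-Laird random-effects estimate.
    y: study effect estimates; v: their within-study variances."""
    w = 1.0 / v
    theta_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - theta_fixed) ** 2)
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    theta = np.sum(w_re * y) / np.sum(w_re)
    return theta, 1.0 / np.sum(w_re), tau2

def dl_reweighted_by_completion(y, v, completion):
    """Hypothetical completion-rate reweighting: each random-effects
    weight is multiplied by the study's completion rate, so trials with
    heavy attrition contribute less to the pooled estimate."""
    _, _, tau2 = dersimonian_laird(y, v)
    w = completion / (v + tau2)
    return np.sum(w * y) / np.sum(w)

# toy data: the small-effect trial also has the worst attrition
y = np.array([0.40, 0.35, 0.10])
v = np.array([0.02, 0.03, 0.02])
completion = np.array([0.90, 0.85, 0.50])
print(dersimonian_laird(y, v)[0], dl_reweighted_by_completion(y, v, completion))
```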

2.
The understanding of individual differences in response to threat (e.g., attentional bias) is important for a better understanding of the development of anxiety disorders. Previous studies revealed only a small attentional bias in high-anxious (HA) subjects. One explanation for this finding may be the assumption that all HA subjects show a constant attentional bias. Current models distinguish HA subjects depending on their level of tolerance for uncertainty and for arousal. These models assume that only HA subjects with intolerance for uncertainty but tolerance for arousal ("sensitizers") show an attentional bias, compared to HA subjects with intolerance for uncertainty and intolerance for arousal ("fluctuating subjects"). Further, it is assumed that repressors (defined by intolerance for arousal but tolerance for uncertainty) react with avoidance behavior when confronted with threatening stimuli. The present study investigated the influence of coping styles on attentional bias. After an extensive recruiting phase, 36 subjects were classified into three groups (sensitizers, fluctuating subjects, and repressors). All subjects were exposed to presentations of happy and threatening faces while gaze durations were recorded with an eye-tracker. The results showed that only sensitizers exhibited an attentional bias: they gazed longer at the threatening face than at the happy face during the first 500 ms. The results support previous findings on the relationship between anxiety and attention and extend them by showing variations according to coping style. Differentiating subjects according to a multifaceted coping style allows a better prediction of the attentional bias and contributes insight into the complex interplay of personality, coping, and behavior.

3.
Recently developed capture-mark-recapture methods allow us to account for capture heterogeneity among individuals in the form of discrete mixtures and continuous individual random effects. In this article, we used simulations and two case studies to evaluate the effectiveness of continuously distributed individual random effects at removing potential bias due to capture heterogeneity, and to evaluate in which situations the added complexity of these models is justified. Simulations and case studies showed that ignoring individual capture heterogeneity generally led to a small negative bias in survival estimates and that individual random effects effectively removed this bias. As expected, accounting for capture heterogeneity also led to slightly less precise survival estimates. Our case studies also showed that accounting for capture heterogeneity increased in importance towards the end of the study. Though ignoring capture heterogeneity led to only a small bias in survival estimates, such bias may greatly impact management decisions. We advocate reducing potential heterogeneity at the sampling design stage. Where this is insufficient, we recommend modelling individual capture heterogeneity in situations where a large proportion of the individuals has a low detection probability (e.g., in the presence of floaters) and where the most recent survival estimates are of great interest (e.g., in applied conservation).

4.
Henmi M, Copas JB, Eguchi S. Biometrics 2007, 63(2):475-482
We study publication bias in meta-analysis by supposing there is a population (y, σ) of studies which give treatment effect estimates y ∼ N(θ, σ²). A selection function describes the probability that each study is selected for review. The overall estimate of θ depends on the studies selected, and hence on the (unknown) selection function. Our previous paper, Copas and Jackson (2004, Biometrics 60, 146-153), studied the maximum bias over all possible selection functions which satisfy the weak condition that large studies (small σ) are as likely, or more likely, to be selected than small studies (large σ). This led to a worst-case sensitivity analysis, controlling for the overall fraction of studies selected. However, no account was taken of the effect of selection on the uncertainty in estimation. This article extends the previous work by finding corresponding confidence intervals and P-values, and hence a new sensitivity analysis for publication bias. Two examples are discussed.

5.
Huang Y, Pepe MS. Biometrika 2009, 96(4):991-997
The performance of a well-calibrated risk model for a binary disease outcome can be characterized by the population distribution of risk and displayed with the predictiveness curve. Better performance is characterized by a wider distribution of risk, since this corresponds to better risk stratification in the sense that more subjects are identified at low and high risk for the disease outcome. Although methods have been developed to estimate predictiveness curves from cohort studies, most studies to evaluate novel risk prediction markers employ case-control designs. Here we develop semiparametric methods that accommodate case-control data. The semiparametric methods are flexible, and naturally generalize methods previously developed for cohort data. Applications to prostate cancer risk prediction markers illustrate the methods.
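A minimal sketch of the predictiveness curve itself for simple cohort data, where R(v) is the v-th quantile of the population risk distribution; the paper's contribution, the semiparametric case-control estimator, requires more machinery, and the data below are illustrative:

```python
import numpy as np

def predictiveness_curve(risks, grid=None):
    """Empirical predictiveness curve for cohort data: R(v) is the
    v-th quantile of the fitted risks. A wider (steeper) curve means
    better risk stratification."""
    risks = np.sort(np.asarray(risks))
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    return grid, np.quantile(risks, grid)

# toy cohort risks from a hypothetical risk model
rng = np.random.default_rng(1)
risks = rng.beta(0.5, 4.0, size=2000)
v, r = predictiveness_curve(risks)
print(r[10], r[90])  # risk at the 10th and 90th population percentile
```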

6.
We develop sample size formulas for studies aiming to test mean differences between a treatment and control group when all-or-none nonadherence (noncompliance) and selection bias are expected. Recent work by Fay, Halloran, and Follmann (2007, Biometrics 63, 465–474) addressed the increased variances within groups defined by treatment assignment when nonadherence occurs, compared to the scenario of full adherence, under the assumption of no selection bias. In this article, we extend the authors' approach to allow selection bias in the form of systematic differences in means and variances among latent adherence subgroups. We illustrate the approach by performing sample size calculations to plan clinical trials with and without pilot adherence data. A Web Appendix also develops sample size formulas and tests for normally distributed outcomes that account for the uncertainty of estimates from external or internal pilot data.
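A rough sketch of the underlying logic, assuming the ITT effect is diluted by the adherent fraction and the group variance is inflated by the mixture of latent adherence subgroups; this mirrors the reasoning described above but is not the paper's exact formula, and all parameter names are illustrative:

```python
from scipy.stats import norm

def n_per_arm_itt(delta, sigma, p_adh, mu_gap, alpha=0.05, power=0.8):
    """Approximate per-arm sample size under all-or-none nonadherence.
    delta:  treatment effect among adherers
    sigma:  within-subgroup SD (assumed common - an assumption)
    p_adh:  adherent fraction in the treatment arm
    mu_gap: mean difference between latent adherers and non-adherers
            (the selection-bias component)"""
    delta_itt = p_adh * delta                       # diluted ITT effect
    var_mix = sigma**2 + p_adh * (1 - p_adh) * mu_gap**2  # mixture variance
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * var_mix * z**2 / delta_itt**2

print(round(n_per_arm_itt(delta=0.5, sigma=1.0, p_adh=0.7, mu_gap=0.3)))
```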

7.
Helicobacter pylori infection and colorectal cancer risk: a meta-analysis
BACKGROUND: Several studies suggested an association between Helicobacter pylori infection and colorectal carcinoma or adenoma risk. However, different authors reported quite varying estimates. We carried out a systematic review and meta-analysis of published studies investigating this association and paid special attention to the possibility of publication bias and to sources of heterogeneity between studies. MATERIALS AND METHODS: An extensive literature search and cross-referencing were performed to identify all published studies. Summary estimates were obtained using random-effects models. The presence of possible publication bias was assessed using different statistical approaches. RESULTS: In a meta-analysis of the 11 identified human studies, published between 1991 and 2002, a summary odds ratio of 1.4 (95% CI, 1.1-1.8) was estimated for the association between H. pylori infection and colorectal cancer risk. The funnel plot appeared asymmetrical, but the formal statistical evaluations did not provide strong evidence of publication bias. The proportion of variation in study results attributable to heterogeneity was small (36.5%). CONCLUSIONS: The results of our meta-analysis are consistent with a possible small increase in colorectal cancer risk due to H. pylori infection. However, the possibility of some publication bias cannot be ruled out, even though it could not be statistically confirmed. Larger, better designed, and better controlled studies are needed to clarify the situation.
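One standard statistical approach for assessing funnel-plot asymmetry is Egger's regression test; the abstract does not say which approaches were used, so the following is purely illustrative, with toy log-odds-ratios:

```python
import numpy as np
from scipy import stats

def egger_test(y, se):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect y/se on precision 1/se; an intercept far from
    zero suggests small-study effects such as publication bias."""
    res = stats.linregress(1.0 / se, y / se)
    return res.intercept, res.intercept_stderr

# hypothetical log-odds-ratios and standard errors
y = np.array([0.50, 0.41, 0.30, 0.62, 0.15, 0.36])
se = np.array([0.30, 0.25, 0.15, 0.35, 0.10, 0.20])
intercept, se_int = egger_test(y, se)
print(intercept, intercept / se_int)  # t-statistic for asymmetry
```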

8.
The selection of fossil data to use as calibration age priors in molecular divergence time estimates inherently links neontological methods with paleontological theory. However, few neontological studies have taken into account the possibility of a taphonomic bias in the fossil record when developing approaches to fossil calibration selection. The Sppil-Rongis effect may bias the first appearance of a lineage toward the recent, causing most objective calibration selection approaches to erroneously exclude appropriate calibrations or to incorporate multiple calibrations that are too young to accurately represent the divergence times of target lineages. Using turtles as a case study, we develop a Bayesian extension to the fossil selection approach developed by Marshall (2008. A simple method for bracketing absolute divergence times on molecular phylogenies using multiple fossil calibration points. Am. Nat. 171:726-742) that takes this taphonomic bias into account. Our method has the advantage of identifying calibrations that may bias age estimates to be too recent, while incorporating uncertainty in phylogenetic parameter estimates such as tree topology and branch lengths. Additionally, this method is easily adapted to assess the consistency of the potential calibrations in the candidate pool with any one calibration.

9.

Background

Data collected to describe temporal variation in natural population size are tainted by sampling error. Ignoring sampling error in population dynamics models induces bias in parameter estimators, e.g., of density dependence. In particular, when sampling errors are independent among populations, the classical estimator of the strength of synchrony (zero-lag correlation) is biased downward. However, this bias is rarely taken into account in synchrony studies, although it may lead to overemphasizing the role of intrinsic factors (e.g., dispersal) relative to extrinsic factors (the Moran effect) in generating population synchrony, as well as to underestimating the extinction risk of a metapopulation.

Methodology/Principal Findings

The aim of this paper was first to illustrate the extent of the bias that can be encountered in empirical studies when sampling error is neglected. Second, we presented a state-space modelling approach that explicitly accounts for sampling error when quantifying population synchrony. Third, we exemplified our approach with datasets for which sampling variance (i) has been previously estimated, and (ii) has to be jointly estimated with population synchrony. Finally, we compared our results to those of a standard approach neglecting sampling variance. We showed that ignoring sampling variance can mask a synchrony pattern whatever its true value, and that the common practice of averaging a few replicates of population size estimates performs poorly at reducing the bias of the classical estimator of synchrony strength.

Conclusion/Significance

The state-space model used in this study provides a flexible way of accurately quantifying the strength of synchrony patterns from most population size data encountered in field studies, including over-dispersed count data. We provide a user-friendly R program and a tutorial example to encourage further studies that aim to quantify the strength of population synchrony while accounting for uncertainty in population size estimates.
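A minimal simulation of the downward bias itself, not of the paper's state-space correction (and in Python rather than the authors' R code): independent sampling error attenuates the observed zero-lag correlation between two population series; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_attenuation(rho_true=0.6, n_years=40, obs_sd=0.5, n_rep=2000):
    """Simulate two latent log-abundance series with true correlation
    rho_true, add independent sampling error, and return the average
    observed correlation -- illustrating the bias the state-space
    model is designed to remove."""
    cov = np.array([[1.0, rho_true], [rho_true, 1.0]])
    corrs = []
    for _ in range(n_rep):
        latent = rng.multivariate_normal([0.0, 0.0], cov, size=n_years)
        observed = latent + rng.normal(0.0, obs_sd, size=latent.shape)
        corrs.append(np.corrcoef(observed.T)[0, 1])
    return np.mean(corrs)

print(simulated_attenuation())  # noticeably below the true value of 0.6
```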

10.
The interest in individualized medicine and upcoming or renewed regulatory requests to assess treatment effects in subgroups of confirmatory trials require statistical methods that account for selection uncertainty and selection bias after a search for meaningful subgroups has been performed. The challenge is to judge the strength of the apparent findings after mining the same data to discover them. In this paper, we describe a resampling approach that allows the subgroup-finding process to be replicated many times. The replicates are used to adjust the effect estimates for selection bias and to provide variance estimators that account for selection uncertainty. A simulation study provides some evidence of the performance of the method, and an example from oncology illustrates its use.
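A hedged sketch of the resampling idea: in each bootstrap replicate the subgroup search is redone, and the average excess of the winning subgroup's in-replicate estimate over its full-data estimate approximates the selection bias. The paper's procedure is more elaborate; all names and data here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def subgroup_effects(treat, y, groups):
    """Estimated treatment effect (mean difference) in each subgroup."""
    return {g: y[(groups == g) & treat].mean() - y[(groups == g) & ~treat].mean()
            for g in np.unique(groups)}

def selection_bias_adjusted(treat, y, groups, n_boot=500):
    """Redo the subgroup search on each resample and subtract the mean
    excess of the winning estimate over that subgroup's full-data
    estimate. Assumes every subgroup keeps both arms in each resample."""
    full = subgroup_effects(treat, y, groups)
    best = max(full, key=full.get)
    n = len(y)
    excess = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        eff = subgroup_effects(treat[idx], y[idx], groups[idx])
        b = max(eff, key=eff.get)
        excess.append(eff[b] - full[b])
    return best, full[best] - np.mean(excess)

# toy trial: homogeneous effect, yet the selected subgroup estimate is inflated
n = 400
treat = rng.random(n) < 0.5
groups = rng.integers(0, 4, n)
y = 0.2 * treat + rng.normal(0.0, 1.0, n)
print(selection_bias_adjusted(treat, y, groups))
```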

11.

Background

Accumulating evidence indicates aberrant DNA methylation is involved in gastric tumourigenesis, suggesting it may be a useful clinical biomarker for the disease. The aim of this study was to consolidate and summarize published data on the potential of methylation in gastric cancer (GC) risk prediction, prognostication and prediction of treatment response.

Methods

Relevant studies were identified from PubMed using a systematic search approach. Results were summarized by meta-analysis. Mantel-Haenszel odds ratios were computed for each methylation event assuming the random-effects model.

Results

A review of 589 retrieved publications identified 415 relevant articles, including 143 case-control studies on methylation of 142 individual genes in GC clinical samples. A total of 77 genes were significantly differentially methylated between tumour and normal gastric tissue from GC subjects, of which data on 62 were derived from single studies. Methylation of 15, 4 and 7 genes in normal gastric tissue, plasma and serum, respectively, differed significantly in frequency between GC and non-cancer subjects. A prognostic significance was reported for 18 genes, and a predictive significance was reported for p16 methylation, although many inconsistent findings were also observed. No bias due to assay, use of fixed tissue or the CpG sites analysed was detected; however, a slight bias towards publication of positive findings was observed.

Conclusions

DNA methylation is a promising biomarker for GC risk prediction and prognostication. Further focused validation of candidate methylation markers in independent cohorts is required to develop their clinical potential.
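A minimal sketch of the Mantel-Haenszel summary odds ratio named in the Methods, shown here in its basic fixed-effect form (the review combined it with a random-effects model); all counts are illustrative:

```python
import numpy as np

def mantel_haenszel_or(tables):
    """Mantel-Haenszel summary odds ratio across 2x2 tables.
    Each table is (a, b, c, d): methylation-positive cases (a),
    methylation-negative cases (b), methylation-positive controls (c),
    methylation-negative controls (d)."""
    a, b, c, d = np.asarray(tables, dtype=float).T
    n = a + b + c + d
    return np.sum(a * d / n) / np.sum(b * c / n)

# toy counts for one methylation marker across three case-control studies
print(mantel_haenszel_or([(30, 20, 15, 35), (22, 18, 10, 30), (40, 25, 20, 45)]))
```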

12.
13.
Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
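The core of such an ascertainment correction can be sketched as a conditional likelihood: the effect estimate is modelled as normal but conditioned on having passed the significance threshold. A minimal single-parameter version (the paper's method jointly estimates allele frequency and penetrance parameters and supplies confidence regions):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

def winners_curse_mle(beta_hat, se, z_crit=norm.ppf(1 - 5e-8 / 2)):
    """Maximize the likelihood of beta_hat ~ N(beta, se^2) conditional
    on the estimate having been significant, |beta_hat|/se > z_crit.
    The default threshold is the usual genome-wide 5e-8 level."""
    def neg_log_cond_lik(beta):
        log_density = norm.logpdf(beta_hat, beta, se)
        # probability that an estimate drawn from N(beta, se^2) is significant
        p_sig = norm.sf(z_crit - beta / se) + norm.cdf(-z_crit - beta / se)
        return -(log_density - np.log(p_sig))
    res = minimize_scalar(neg_log_cond_lik, bounds=(0.0, 2 * abs(beta_hat)),
                          method="bounded")
    return res.x

# a genome-wide-significant log-odds-ratio shrinks noticeably after correction
print(winners_curse_mle(beta_hat=0.40, se=0.07))
```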

14.
Meta-analysis of genetic association studies
Meta-analysis, a statistical tool for combining results across studies, is becoming popular as a method for resolving discrepancies in genetic association studies. Persistent difficulties in obtaining robust, replicable results in genetic association studies are almost certainly because genetic effects are small, requiring studies with many thousands of subjects to be detected. In this article, we describe how meta-analysis works and consider whether it will solve the problem of underpowered studies or whether it is another affliction visited by statisticians on geneticists. We show that meta-analysis has been successful in revealing unexpected sources of heterogeneity, such as publication bias. If heterogeneity is adequately recognized and taken into account, meta-analysis can confirm the involvement of a genetic variant, but it is not a substitute for an adequately powered primary study.

15.
16.
In diagnostic medicine, the volume under the receiver operating characteristic (ROC) surface (VUS) is a commonly used index to quantify the ability of a continuous diagnostic test to discriminate between three disease states. In practice, verification of the true disease status may be performed only for a subset of subjects under study since the verification procedure is invasive, risky, or expensive. The selection for disease examination might depend on the results of the diagnostic test and other clinical characteristics of the patients, which in turn can cause bias in estimates of the VUS. This bias is referred to as verification bias. Existing verification bias correction in three‐way ROC analysis focuses on ordinal tests. We propose verification bias‐correction methods to construct ROC surface and estimate the VUS for a continuous diagnostic test, based on inverse probability weighting. By applying U‐statistics theory, we develop asymptotic properties for the estimator. A Jackknife estimator of variance is also derived. Extensive simulation studies are performed to evaluate the performance of the new estimators in terms of bias correction and variance. The proposed methods are used to assess the ability of a biomarker to accurately identify stages of Alzheimer's disease.
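A minimal sketch of an inverse-probability-weighted VUS, assuming the verification probabilities are known or already modelled; this conveys the weighting idea behind the paper's U-statistic estimator rather than its exact form, and all data are illustrative:

```python
import numpy as np
from itertools import product

def ipw_vus(x, d, verified, pi):
    """IPW estimate of P(X1 < X2 < X3) across three ordered disease
    stages. x: test values; d: disease stage (1, 2, 3), known only for
    verified subjects; verified: verification indicator; pi: probability
    of verification. Each verified subject gets weight 1/pi, mimicking
    the full-data U-statistic."""
    w = verified / pi
    idx = [np.where(verified & (d == s))[0] for s in (1, 2, 3)]
    num = den = 0.0
    for i, j, k in product(*idx):
        wt = w[i] * w[j] * w[k]
        num += wt * (x[i] < x[j] < x[k])
        den += wt
    return num / den

# toy data: verification is more likely for extreme test values
rng = np.random.default_rng(3)
n = 150
d = rng.integers(1, 4, n)
x = rng.normal(d, 1.0)                       # marker rises with stage
pi = np.clip(0.3 + 0.2 * np.abs(x - x.mean()), 0.1, 0.95)
verified = rng.random(n) < pi
print(ipw_vus(x, d, verified, pi))
```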

17.
As more investigators conduct extensive whole-genome linkage scans for complex traits, interest is growing in meta-analysis as a way of integrating the weak or conflicting evidence from multiple studies. However, there is a bias in the most commonly used meta-analysis linkage technique (i.e., Fisher's [1925] method of combining P values) when it is applied to many nonparametric (i.e., model-free) linkage results. The bias arises in those methods (e.g., variance components, affected sib pairs, extremely discordant sib pairs, etc.) that truncate all "negative evidence against linkage" into the single value of LOD = 0. If incorrectly handled, this bias can artificially inflate or deflate the combined meta-analysis linkage results for any given locus. This is an especially troublesome problem in the context of a genome scan, since LOD = 0 is expected to occur over half of the unlinked genome. The bias can be overcome (nearly) completely by simply interpreting LOD = 0 as a P value of 1/(2 ln 2) ≈ .72 in Fisher's formula.
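A minimal sketch of Fisher's method with this correction, using the standard one-sided conversion from LOD scores to pointwise P values; the LOD values are illustrative:

```python
import numpy as np
from scipy.stats import chi2

def lod_to_p(lod):
    """One-sided pointwise P value for a nonparametric LOD score, with
    a truncated LOD = 0 interpreted as P = 1/(2 ln 2) ~ .72 as the
    paper recommends, rather than the naive P = 0.5."""
    if lod == 0:
        return 1.0 / (2.0 * np.log(2.0))
    return 0.5 * chi2.sf(2.0 * np.log(10.0) * lod, df=1)

def fisher_combined(lods):
    """Fisher's method: -2 * sum(ln P_i) ~ chi-square with 2k df."""
    p = np.array([lod_to_p(l) for l in lods])
    return chi2.sf(-2.0 * np.sum(np.log(p)), df=2 * len(p))

# combining one suggestive scan with two null (LOD = 0) scans
print(fisher_combined([2.1, 0.0, 0.0]))
```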

18.
Jackson D. Biometrics 2007, 63(1):187-193
Perhaps the greatest threat to the validity of a meta-analysis is the possibility of publication bias, where studies with interesting or statistically significant results are more likely to be published. This obviously impacts on inference concerning the treatment effect but also has implications for estimates of between-study variance. Two popular and established estimation methods are considered and formulae for assessing the implications of the bias are provided in terms of a general process for selecting studies. Meta-analysts, concerned that publication bias may be present, can use these as part of a sensitivity analysis to assess the robustness of their estimates of between-study variance using any selection process that is likely to be used in practice. The procedure is illustrated using a meta-analysis of clinical trials concerning the effectiveness of endoscopic sclerotherapy for preventing death in patients with cirrhosis and oesophagogastric varices.

19.
Proteomic biomarker discovery has led to the identification of numerous potential candidates for disease diagnosis, prognosis, and prediction of response to therapy. However, very few of these candidate biomarkers reach clinical validation and go on to be routinely used in clinical practice. One particular issue with biomarker discovery is the identification of significantly changing proteins in the initial discovery experiment that do not validate when subsequently tested on separate patient sample cohorts. Here, we seek to highlight some of the statistical challenges surrounding the analysis of LC-MS proteomic data for biomarker candidate discovery. We show that common statistical algorithms run on data with low sample sizes can overfit and yield misleading misclassification rates and AUC values. A common solution to this problem is to prefilter variables (e.g., by ANOVA, with or without correction methods such as Bonferroni or false discovery rate) to give a smaller dataset and reduce the size of the apparent statistical challenge. However, we show that this exacerbates the problem, yielding even higher performance metrics while reducing the predictive accuracy of the biomarker panel. To illustrate some of these limitations, we have run simulation analyses with known biomarkers. For our chosen algorithm (random forests), we show that the above problems are substantially reduced if a sufficient number of samples are analyzed and the data are not prefiltered. Our view is that LC-MS proteomic biomarker discovery data should be analyzed without prefiltering, and that increasing the sample size in biomarker discovery experiments should be a very high priority.
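A minimal simulation of the prefiltering pitfall on pure-noise data, in the spirit of the abstract's argument (not the authors' actual simulation design): filtering on the full data before cross-validation inflates the AUC, while skipping the prefilter keeps it honest:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# pure-noise "proteomics" matrix: no feature is truly associated
n, p = 40, 2000
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)

# WRONG: prefilter on the full data, then cross-validate the classifier;
# the filter has already seen the test folds, so the AUC is inflated
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
X_filtered = X[:, pvals < 0.01]
rf = RandomForestClassifier(n_estimators=200, random_state=0)
auc_biased = cross_val_score(rf, X_filtered, y, cv=5, scoring="roc_auc").mean()

# honest alternative per the abstract: no prefiltering at all
auc_honest = cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean()

print(f"biased AUC ~ {auc_biased:.2f}, honest AUC ~ {auc_honest:.2f}")
# on null data the biased estimate lands well above 0.5, the honest one near 0.5
```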

20.
The bootstrap is a tool that allows for efficient evaluation of the prediction performance of statistical techniques without having to set aside data for validation. This is especially important for high-dimensional data, e.g., arising from microarrays, because there the number of observations is often limited. To avoid overoptimism, the statistical technique to be evaluated has to be applied to every bootstrap sample in the same manner it would be used on new data. This includes the selection of complexity, e.g., the number of boosting steps for gradient boosting algorithms. Using the latter, we demonstrate in a simulation study that complexity selection in conventional bootstrap samples, drawn with replacement, is severely biased in many scenarios. This translates into a considerable bias of prediction error estimates, often underestimating the amount of information that can be extracted from high-dimensional data. Potential remedies for this complexity selection bias, such as using a fixed level of complexity or sampling without replacement, are investigated, and it is shown that the latter works well in many settings. We focus on high-dimensional binary response data, with bootstrap .632+ estimates of the Brier score for performance evaluation, and on censored time-to-event data with .632+ prediction error curve estimates. The latter, with the modified bootstrap procedure, is then applied to an example with microarray data from patients with diffuse large B-cell lymphoma.
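For reference, the .632+ estimator named here combines the apparent and out-of-bootstrap errors with a weight driven by the relative overfitting rate (Efron and Tibshirani's formula); a minimal sketch, separate from the paper's with/without-replacement comparison, with illustrative error values:

```python
def err632plus(err_train, err_oob, gamma):
    """The .632+ bootstrap error estimate.
    err_train: apparent (resubstitution) error
    err_oob:   average error on out-of-bootstrap observations
    gamma:     no-information error rate (e.g., for a binary outcome,
               the error when predictions and labels are paired at random)"""
    err_oob = min(err_oob, gamma)                  # keep R in [0, 1]
    r = (max(0.0, (err_oob - err_train) / (gamma - err_train))
         if gamma > err_train else 0.0)            # relative overfitting rate
    w = 0.632 / (1.0 - 0.368 * r)
    return (1.0 - w) * err_train + w * err_oob

# heavy overfitting pulls the weight toward the out-of-bootstrap error
print(err632plus(err_train=0.05, err_oob=0.30, gamma=0.40))
```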

