Similar articles
 20 similar articles found (search time: 31 ms)
1.
Evidence synthesis, both qualitatively and quantitatively through meta-analysis, is central to the development of evidence-based medicine. Unfortunately, meta-analysis is often complicated by the suspicion that the available studies represent a biased subset of the evidence, possibly due to publication bias or other systematically different effects in small studies. A number of statistical methods have been proposed to address this, among which the trim-and-fill method and the Copas selection model are two of the most widely discussed. However, both methods have drawbacks: the trim-and-fill method is based on strong assumptions about the symmetry of the funnel plot; the Copas selection model is less accessible to systematic reviewers, and sometimes encounters estimation problems. In this article, we adopt a logistic selection model, and show how treatment effects can be rapidly estimated via multiple imputation. Specifically, we impute studies under a missing at random assumption, and then reweight to obtain estimates under nonrandom selection. Our proposal is computationally straightforward. It allows users to increase selection while monitoring the extent of remaining funnel plot asymmetry, and also visualize the results using the funnel plot. We illustrate our approach using a small meta-analysis of benign prostatic hyperplasia.
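As an illustration of the reweighting idea described above, the minimal Python sketch below computes an inverse-probability-weighted fixed-effect estimate under a logistic selection model. It is not the authors' multiple-imputation algorithm; the function names, the toy data, and the sensitivity parameters a and b are illustrative assumptions.

import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def selection_adjusted_estimate(y, se, a, b):
    """Inverse-probability-weighted fixed-effect estimate under a logistic
    selection model P(published) = expit(a + b * z), with z = y / se.
    A generic sensitivity analysis in the same spirit as the logistic
    selection model above, not the authors' algorithm; a and b are
    user-chosen sensitivity parameters."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    p_pub = expit(a + b * y / se)      # assumed publication probabilities
    w = 1.0 / (p_pub * se ** 2)        # precision weights, inflated for studies
                                       # that were unlikely to have appeared
    return float(np.sum(w * y) / np.sum(w))

# toy data: standardized mean differences favouring treatment, and their SEs
y  = [0.62, 0.48, 0.30, 0.12, -0.05]
se = [0.11, 0.15, 0.20, 0.28, 0.33]
for b in (0.0, 0.5, 1.0):              # b = 0 reproduces the usual estimate
    print(b, round(selection_adjusted_estimate(y, se, a=0.0, b=b), 3))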

2.
Systematic reviews that collate data about the relative effects of multiple interventions via network meta-analysis are highly informative for decision-making purposes. A network meta-analysis provides two types of findings for a specific outcome: the relative treatment effect for all pairwise comparisons, and a ranking of the treatments. It is important to consider the confidence with which these two types of results can enable clinicians, policy makers and patients to make informed decisions. We propose an approach to determining confidence in the output of a network meta-analysis. Our proposed approach is based on methodology developed by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Working Group for pairwise meta-analyses. The suggested framework for evaluating a network meta-analysis acknowledges (i) the key role of indirect comparisons; (ii) the contributions of each piece of direct evidence to the network meta-analysis estimates of effect size; (iii) the importance of the transitivity assumption to the validity of network meta-analysis; and (iv) the possibility of disagreement between direct evidence and indirect evidence. We apply our proposed strategy to a systematic review comparing topical antibiotics without steroids for chronically discharging ears with underlying eardrum perforations. The proposed framework can be used to determine confidence in the results from a network meta-analysis. Judgements about evidence from a network meta-analysis can be different from those made about evidence from pairwise meta-analyses.
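Points (i) and (iv) of the framework rest on the arithmetic of anchored indirect comparisons, which the Python sketch below illustrates with hypothetical log odds ratios; it shows the calculation only, not the GRADE rating process itself.

import math

def indirect_comparison(d_AC, se_AC, d_BC, se_BC):
    """Anchored (Bucher-type) indirect estimate of A vs B through a common
    comparator C, on any additive scale such as the log odds ratio."""
    d_AB = d_AC - d_BC
    se_AB = math.sqrt(se_AC ** 2 + se_BC ** 2)
    return d_AB, se_AB

def inconsistency_z(d_direct, se_direct, d_indirect, se_indirect):
    """z-statistic for disagreement between direct and indirect evidence."""
    diff = d_direct - d_indirect
    se_diff = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    return diff / se_diff

# hypothetical summary data (log odds ratios)
d_ind, se_ind = indirect_comparison(d_AC=-0.50, se_AC=0.15, d_BC=-0.20, se_BC=0.18)
z = inconsistency_z(d_direct=-0.10, se_direct=0.20, d_indirect=d_ind, se_indirect=se_ind)
print(round(d_ind, 3), round(se_ind, 3), round(z, 2))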

3.
4.
A general methodology is presented for the modeling, simulation, design, evaluation, and statistical analysis of 13C-labeling experiments for metabolic flux analysis. The universal software framework 13C-FLUX was implemented to support all steps of this process. Guided by the example of anaplerotic flux determination in Corynebacterium glutamicum, the technical details of the model setup, experimental design, and data evaluation are discussed. It is shown how the network structure, the input substrate composition, the assumptions about fluxes, and the measurement configuration are specified within 13C-FLUX. Based on the network model, different experimental designs are computed depending on the goal of the investigations. Finally, a specific experiment is evaluated and the various statistical methods used to analyze the results are briefly explained. The appendix gives some details about the software implementation and availability.

5.
The immunocompetence handicap hypothesis was formulated 12 years ago in an attempt to offer a proximate mechanism by which female choice of males could be explained by endocrine control of honest signalling. The hypothesis suggested that testosterone has a dual effect in males of controlling the development of sexual signals while causing immunosuppression. Our purpose in this review is to examine the empirical evidence to date that has attempted to test the hypothesis, and to conduct a meta-analysis on two of the assumptions of the hypothesis, that testosterone reduces immunocompetence and increases parasitism, to ascertain any statistical trend in the data. There is some evidence to suggest that testosterone is responsible for the magnitude of trait expression or development of sexual traits, but this is by no means conclusive. The results of many studies attempting to find evidence for the supposed immunosuppressive qualities of testosterone are difficult to interpret since they are observational rather than experimental. Of the experimental studies, the data obtained are ambiguous, and this is reflected in the result of the meta-analysis. Overall, the meta-analysis found a significant suppressive effect of testosterone on immunity, in support of the hypothesis, but this effect disappeared when we controlled for multiple studies on the same species. There was no effect of testosterone on direct measures of immunity, but it did increase ectoparasite abundance in several studies, in particular in reptiles. A funnel analysis indicated that the results were robust to a publication bias. Alternative substances that interact with testosterone, such as glucocorticoids, may be important. Ultimately, a greater understanding is required of the complex relationships that exist both within and between the endocrine and immune systems and their consequences for mate choice decision making.

6.
Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) Finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as well as (if not better than) a consensus network approach in terms of validation success (criterion 2). The article also reports a comparison of meta-analysis techniques applied to gene expression data and presents novel R functions for carrying out consensus network analysis, network-based screening, and meta-analysis.
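The contrast between the two approaches can be sketched in a few lines of Python: a crude consensus connectivity score (element-wise minimum of per-data-set adjacencies, a simplified stand-in for the WGCNA consensus-module machinery) versus a Stouffer-type meta-analysis Z. The toy data, the soft-thresholding power beta, and all function names are illustrative assumptions, not the article's R functions.

import numpy as np

def consensus_connectivity(expr_list, beta=6):
    """Per-gene connectivity from a crude consensus co-expression network:
    adjacency = |correlation| ** beta within each data set, consensus =
    element-wise minimum across data sets. A simplified stand-in for the
    WGCNA consensus-module approach, not its implementation."""
    adjacencies = []
    for X in expr_list:                    # X: samples x genes
        r = np.corrcoef(X, rowvar=False)   # gene x gene correlation matrix
        a = np.abs(r) ** beta
        np.fill_diagonal(a, 0.0)
        adjacencies.append(a)
    consensus = np.minimum.reduce(adjacencies)
    return consensus.sum(axis=0)           # hub-ness score per gene

def stouffer_meta_z(z_list):
    """Standard-analysis comparator: combine per-data-set gene-trait Z scores."""
    Z = np.vstack(z_list)
    return Z.sum(axis=0) / np.sqrt(Z.shape[0])

# toy example: two data sets, 20 samples x 30 genes each
rng = np.random.default_rng(0)
data = [rng.normal(size=(20, 30)) for _ in range(2)]
print(consensus_connectivity(data)[:5].round(2))
print(stouffer_meta_z([rng.normal(size=30), rng.normal(size=30)])[:5].round(2))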

7.
Preferential attachment is a stochastic process that has been proposed to explain certain topological features characteristic of complex networks from diverse domains. The systematic investigation of preferential attachment is an important area of research in network science, not only for the theoretical matter of verifying whether this hypothesized process is operative in real-world networks, but also for the practical insights that follow from knowledge of its functional form. Here we describe a maximum likelihood based estimation method for the measurement of preferential attachment in temporal complex networks. We call the method PAFit, and implement it in an R package of the same name. PAFit constitutes an advance over previous methods primarily because we based it on a nonparametric statistical framework that enables attachment kernel estimation free of any assumptions about its functional form. We show this results in PAFit outperforming the popular methods of Jeong and Newman in Monte Carlo simulations. What is more, we found that the application of PAFit to a publicly available Flickr social network dataset yielded clear evidence for a deviation of the attachment kernel from the popularly assumed log-linear form. Independent of our main work, we provide a correction to a consequential error in Newman’s original method which had evidently gone unnoticed since its publication over a decade ago.
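A crude nonparametric estimate of the attachment kernel, in the spirit of (but much simpler than) the PAFit likelihood, can be illustrated as follows; the edge list and the estimator itself are assumptions for illustration, not the package's method.

from collections import Counter, defaultdict

def estimate_attachment_kernel(edge_sequence):
    """Crude nonparametric estimate of the attachment kernel A_k from a
    time-ordered list of directed edges (source, target): new links received
    by nodes of in-degree k, normalised by how often degree-k nodes were
    available to receive one. A simplified illustration of kernel
    measurement, not the PAFit maximum-likelihood method."""
    indeg = defaultdict(int)
    attach = Counter()       # new edges received, indexed by the target's degree
    exposure = Counter()     # node-time-steps spent at each degree
    for src, dst in edge_sequence:
        for _node, k in indeg.items():
            exposure[k] += 1
        attach[indeg[dst]] += 1
        indeg[dst] += 1
        indeg[src] += 0      # register the source node (in-degree 0 if new)
    return {k: attach[k] / exposure[k] for k in sorted(attach) if exposure[k] > 0}

# toy growing network: each tuple is (new node, chosen existing node)
edges = [(1, 0), (2, 0), (3, 1), (4, 0), (5, 0), (6, 2), (7, 0), (8, 4)]
print(estimate_attachment_kernel(edges))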

8.
Controversy over claims of cultures in nonhuman primates and other animals has led to a call for quantitative methods that are able to infer social learning from freely interacting groups of animals. Network-based diffusion analysis (NBDA) is such a method that infers social transmission of a behavioral trait when the pattern of acquisition follows the social network. Because primates, relative to other animals, may be unusual in their heavy reliance on social learning, with learning frequently directed along pathways of association, in this study we draw attention to the significance of this method for primatologists. We provide a "user's guide" to NBDA methodology, discussing the choice of NBDA model and social network, and suggest model selection procedures. We also present the results of simulations that suggest that NBDA works well even when the assumptions of the underlying model are violated.
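The core NBDA idea, that a naive individual's acquisition rate scales with its network connections to informed individuals, can be illustrated with a minimal order-of-acquisition likelihood in Python; the association matrix, the acquisition order, and the simplified rate formula are illustrative assumptions, not the published NBDA code.

import math
import numpy as np

def oada_log_likelihood(s, order, A):
    """Log-likelihood of an observed acquisition order under a simplified
    order-of-acquisition NBDA model: a naive individual's relative rate is
    1 + s * (associations with informed individuals). `order` lists
    individuals in the order they acquired the trait; A is the association
    matrix. A toy illustration of the idea only."""
    n = A.shape[0]
    informed = np.zeros(n, dtype=bool)
    loglik = 0.0
    for learner in order:
        naive = ~informed
        rates = 1.0 + s * (A @ informed.astype(float))
        loglik += math.log(rates[learner] / rates[naive].sum())
        informed[learner] = True
    return loglik

# toy network of 4 individuals and an observed acquisition order
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
order = [0, 2, 1, 3]
for s in (0.0, 1.0, 5.0):       # s = 0 corresponds to purely asocial learning
    print(s, round(oada_log_likelihood(s, order, A), 3))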

9.
There is heightened interest in using next-generation sequencing technologies to identify rare variants that influence complex human diseases and traits. Meta-analysis is essential to this endeavor because large sample sizes are required for detecting associations with rare variants. In this article, we provide a comprehensive overview of statistical methods for meta-analysis of sequencing studies for discovering rare-variant associations. Specifically, we discuss the calculation of relevant summary statistics from participating studies, the construction of gene-level association tests, the choice of transformation for quantitative traits, the use of fixed-effects versus random-effects models, and the removal of shadow association signals through conditional analysis. We also show that meta-analysis based on properly calculated summary statistics is as powerful as joint analysis of individual-participant data. In addition, we demonstrate the performance of different meta-analysis methods by using both simulated and empirical data. We then compare four major software packages for meta-analysis of rare-variant associations—MASS, RAREMETAL, MetaSKAT, and seqMeta—in terms of the underlying statistical methodology, analysis pipeline, and software interface. Finally, we present PreMeta, a software interface that integrates the four meta-analysis packages and allows a consortium to combine otherwise incompatible summary statistics.
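The summary-statistic approach described above can be illustrated with a minimal Python sketch that pools per-study score vectors and their covariance matrices into a gene-level burden test; the toy numbers are assumptions, and the sketch is not the interface of MASS, RAREMETAL, MetaSKAT, seqMeta, or PreMeta.

import math
import numpy as np

def meta_burden_test(score_vectors, cov_matrices, weights):
    """Fixed-effects meta-analysis of a gene-level rare-variant burden test
    from per-study score vectors U_k and their covariance matrices V_k: pool
    U = sum(U_k) and V = sum(V_k), then form a 1-df chi-square statistic.
    A sketch of the general summary-statistic idea only."""
    U = np.sum(score_vectors, axis=0)            # pooled score vector
    V = np.sum(cov_matrices, axis=0)             # pooled covariance
    w = np.asarray(weights, dtype=float)
    T = float(w @ U) ** 2 / float(w @ V @ w)     # burden statistic
    p = math.erfc(math.sqrt(T / 2.0))            # survival function of chi2(1)
    return T, p

# toy data: two studies, three variants in one gene
U1, U2 = np.array([2.1, 0.8, 1.5]), np.array([1.2, 0.4, 0.9])
V1, V2 = np.diag([4.0, 2.0, 3.0]), np.diag([2.5, 1.5, 2.0])
print(meta_burden_test([U1, U2], [V1, V2], weights=[1.0, 1.0, 1.0]))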

10.
11.
Baker R, Jackson D. Biometrics 2006, 62(3):785-792
Publication bias of the results of medical studies can invalidate evidence-based medicine. The existing methodology for modeling this essentially relies upon the symmetry of the funnel plot. We present a new method of modeling publication bias that uses this information plus the impact factors of the publishing journals. A simple model of the publication process enables the estimation of bias-corrected intervention effects. The procedure is illustrated using a meta-analysis of the effectiveness of single-dose oral aspirin for acute pain, and results are also obtained for five other meta-analyses. The method enables the fitting of a wide range of models and is considered more flexible than other ways of compensating for publication bias. The model also provides the basis of a statistical test for the existence of publication bias. Use of the new methodology to supplement existing methods is recommended, in the context of a sensitivity analysis.

12.
We give an analysis of performance in an artificial neural network for which the claim had been made that it could learn abstract representations. Our argument is that this network is associative in nature, and cannot develop abstract representations. The network thus converges to a solution that is solely based on the statistical regularities of the training set. Inspired by human experiments that have shown that humans can engage in both associative (statistical) and abstract learning, we present a new, hybrid computational model that combines associative and more abstract, cognitive processes. To cross-validate the model we attempted to predict human behaviour in further experiments. One of these experiments reveals some evidence for the use of abstract representations, whereas the others provide evidence for associatively based performance. The predictions of the hybrid model are consistent with our empirical data.

13.
Ibrahim JG, Chen MH, Xia HA, Liu T. Biometrics 2012, 68(2):578-586
Recent guidance from the Food and Drug Administration for the evaluation of new therapies in the treatment of type 2 diabetes (T2DM) calls for a program-wide meta-analysis of cardiovascular (CV) outcomes. In this context, we develop a new Bayesian meta-analysis approach using survival regression models to assess whether the size of a clinical development program is adequate to evaluate a particular safety endpoint. We propose a Bayesian sample size determination methodology for meta-analysis clinical trial design with a focus on controlling the type I error and power. We also propose the partial borrowing power prior to incorporate the historical survival meta-data into the statistical design. Various properties of the proposed methodology are examined and an efficient Markov chain Monte Carlo sampling algorithm is developed to sample from the posterior distributions. In addition, we develop a simulation-based algorithm for computing various quantities, such as the power and the type I error in the Bayesian meta-analysis trial design. The proposed methodology is applied to the design of a phase 2/3 development program including a noninferiority clinical trial for CV risk assessment in T2DM studies.
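The partial borrowing power prior and the simulation-based operating characteristics can be illustrated with a conjugate normal-mean toy model in Python, where historical data enter the posterior raised to the power a0 (0 = no borrowing, 1 = full borrowing). This is a deliberately simplified stand-in for the survival-regression methodology of the paper, and every numeric input below is an assumption.

import numpy as np
from math import erf, sqrt

def simulated_success_rate(theta_true, n_new, sigma, hist_mean, hist_n, a0,
                           prior_var=100.0, threshold=0.975, n_sim=2000, seed=1):
    """Monte Carlo estimate of the probability that the planned study is
    declared successful (posterior probability of benefit > threshold) when
    the true effect is theta_true. Historical data are discounted by a0
    via a power prior in a conjugate normal-mean model."""
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(n_sim):
        y_bar = rng.normal(theta_true, sigma / sqrt(n_new))   # planned trial result
        post_prec = n_new / sigma**2 + a0 * hist_n / sigma**2 + 1.0 / prior_var
        post_mean = (n_new / sigma**2 * y_bar
                     + a0 * hist_n / sigma**2 * hist_mean) / post_prec
        post_sd = post_prec ** -0.5
        prob_benefit = 0.5 * (1.0 + erf(post_mean / (post_sd * sqrt(2.0))))
        successes += prob_benefit > threshold
    return successes / n_sim

# power under an assumed true effect, and type I error under no effect
print(simulated_success_rate(theta_true=0.3, n_new=100, sigma=1.0,
                             hist_mean=0.25, hist_n=150, a0=0.3))
print(simulated_success_rate(theta_true=0.0, n_new=100, sigma=1.0,
                             hist_mean=0.25, hist_n=150, a0=0.3))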

14.
Clinical trials are typically designed with an aim to reach sufficient power to test a hypothesis about relative effectiveness of two or more interventions. Their role in informing evidence-based decision-making demands, however, that they are considered in the context of the existing evidence. Consequently, their planning can be informed by characteristics of relevant systematic reviews and meta-analyses. In the presence of multiple competing interventions the evidence base has the form of a network of trials, which provides information not only about the required sample size but also about the interventions that should be compared in a future trial. In this paper we present a methodology to evaluate the impact of new studies, their information size, the comparisons involved, and the anticipated heterogeneity on the conditional power (CP) of the updated network meta-analysis. The methods presented are an extension of the idea of CP initially suggested for a pairwise meta-analysis and we show how to estimate the required sample size using various combinations of direct and indirect evidence in future trials. We apply the methods to two previously published networks and we show that CP for a treatment comparison is dependent on the magnitude of heterogeneity and the ratio of direct to indirect information in existing and future trials for that comparison. Our methodology can help investigators calculate the required sample size under different assumptions about heterogeneity and make decisions about the number and design of future studies (set of treatments compared).
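A simplified pairwise version of the conditional-power calculation can be sketched as follows: condition on the current pooled estimate (which may summarise direct, indirect, or mixed evidence), add the variance contribution of a planned two-arm trial, and compute the probability that the updated estimate is significant under an assumed true effect. The sketch is not the authors' network formulation, and the inputs are illustrative.

from statistics import NormalDist

def conditional_power(theta_current, var_current, delta, n_per_arm,
                      sigma=1.0, tau2=0.0, alpha=0.05):
    """Conditional power of the updated pooled estimate for one comparison
    after adding a planned two-arm trial with n_per_arm patients per arm,
    assuming the new trial's true effect is delta. tau2 is the assumed
    heterogeneity added to the new trial's variance."""
    nd = NormalDist()
    var_new = 2.0 * sigma ** 2 / n_per_arm + tau2
    var_updated = 1.0 / (1.0 / var_current + 1.0 / var_new)
    z_crit = nd.inv_cdf(1.0 - alpha / 2.0)
    mean_upd = var_updated * (theta_current / var_current + delta / var_new)
    sd_upd = var_updated / var_new ** 0.5   # randomness comes from the new trial only
    upper = 1.0 - nd.cdf((z_crit * var_updated ** 0.5 - mean_upd) / sd_upd)
    lower = nd.cdf((-z_crit * var_updated ** 0.5 - mean_upd) / sd_upd)
    return upper + lower

print(round(conditional_power(theta_current=-0.25, var_current=0.04,
                              delta=-0.30, n_per_arm=200, tau2=0.01), 3))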

15.
Todem D, Hsu WW, Kim K. Biometrics 2012, 68(3):975-982
In many applications of two-component mixture models for discrete data such as zero-inflated models, it is often of interest to conduct inferences for the mixing weights. Score tests derived from the marginal model that allows for negative mixing weights have been particularly useful for this purpose. But the existing testing procedures often rely on restrictive assumptions such as the constancy of the mixing weights and typically ignore the structural constraints of the marginal model. In this article, we develop a score test of homogeneity that overcomes the limitations of existing procedures. The technique is based on a decomposition of the mixing weights into terms that have an obvious statistical interpretation. We exploit this decomposition to lay the foundation of the test. Simulation results show that the proposed covariate-adjusted test statistic can greatly improve the efficiency over test statistics based on constant mixing weights. A real-life example in dental caries research is used to illustrate the methodology.
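For orientation, the constant-weight baseline that the covariate-adjusted test generalises resembles the classical score test for zero inflation in a Poisson sample (in the spirit of van den Broek, 1995); a minimal Python version is sketched below, with toy counts as an assumed example. It is not the test proposed in the article.

import math

def zip_score_test(counts):
    """Score test of homogeneity (no zero inflation) for an i.i.d. Poisson
    sample with constant mean, compared to a chi-square with 1 df.
    A baseline illustration only."""
    n = len(counts)
    ybar = sum(counts) / n
    p0 = math.exp(-ybar)                       # fitted P(Y = 0) under Poisson
    n0 = sum(1 for y in counts if y == 0)      # observed number of zeros
    numerator = (n0 / p0 - n) ** 2
    denominator = n * (1 - p0) / p0 - n * ybar
    stat = numerator / denominator
    p_value = math.erfc(math.sqrt(stat / 2.0))
    return stat, p_value

counts = [0, 0, 0, 0, 1, 0, 2, 0, 0, 3, 0, 1, 0, 0, 4]   # toy zero-heavy data
print(zip_score_test(counts))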

16.
Methods for allowing for imprecision in radiation dose estimates for A-bomb survivors followed up by the Radiation Effects Research Foundation can be improved through recent statistical methodology. Since the entire RERF dosimetry system has recently been revised, it is timely to reconsider this. We have found that the dosimetry revision itself does not warrant changes in these methods but that the new methodology does. In addition to assumptions regarding the form and magnitude of dose estimation errors, previous and current methods involve the apparent distribution of true doses in the cohort. New formulas give results conveniently and explicitly in terms of these inputs. Further, it is now possible to use assumptions about two components of the dose errors, referred to in the statistical literature as "classical" and "Berkson-type". There are indirect statistical indications, involving non-cancer biological effects, that errors may be somewhat larger than assumed before, in line with recommendations made here. Inevitably, methods must rely on uncertain assumptions about the magnitude of dose errors, and it is comforting to find that, within the range of plausibility, eventual cancer risk estimates are not very sensitive to these.
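The practical difference between the two error components can be seen in a short simulation: classical error attenuates a linear dose-response slope, while Berkson-type error leaves it roughly unbiased. All parameter values below are arbitrary illustrations, not RERF dosimetry assumptions.

import numpy as np

def simulate_dose_error(n=20000, beta=0.5, sd_true=1.0, sd_err=0.5, seed=0):
    """Toy contrast of classical versus Berkson dose error in a linear
    dose-response model with true slope beta."""
    rng = np.random.default_rng(seed)
    # classical: we observe x = true dose + noise and regress the outcome on x
    d = rng.normal(0.0, sd_true, n)
    x_classical = d + rng.normal(0.0, sd_err, n)
    y = beta * d + rng.normal(0.0, 1.0, n)
    slope_classical = np.polyfit(x_classical, y, 1)[0]
    # Berkson: an assigned dose x is recorded, the true dose scatters around it
    x_assigned = rng.normal(0.0, sd_true, n)
    d_berkson = x_assigned + rng.normal(0.0, sd_err, n)
    y_b = beta * d_berkson + rng.normal(0.0, 1.0, n)
    slope_berkson = np.polyfit(x_assigned, y_b, 1)[0]
    return slope_classical, slope_berkson

print(simulate_dose_error())   # classical slope < 0.5, Berkson slope ~ 0.5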

17.
Experimental design and statistical analysis of data for predator preferences towards different types of prey have been problematic for several reasons. In addition to fundamental issues concerning the definition of preference, traditional statistical issues such as the appropriateness of statistical distributions such as the Binomial distribution, pseudo-replication, and the appropriate conditioning of probabilities have hindered progress on this important topic in ecology. This paper discusses these issues in the context of the methodology proposed by Underwood and Clarke [Underwood, A.J., Clarke, K.R., 2005. Solving some statistical problems in analyses of experiments on choices of food and on associations with habitat. J. Exp. Mar. Biol. Ecol. 318, 227-237.] in order to provide further clarity concerning the assumptions of this approach and therefore its applicability. In light of the difficulty justifying the validity of these assumptions in practice, an alternative approach is presented which has simpler statistical assumptions.

18.

Background

The indirect comparison of two interventions can be valuable in many situations. However, the quality of an indirect comparison will depend on several factors including the chosen methodology and validity of underlying assumptions. Published indirect comparisons are increasingly common in the medical literature, but as yet, there are no published recommendations of how they should be reported. Our aim is to systematically review the quality of published indirect comparisons to add to existing empirical data suggesting that improvements can be made when reporting and applying indirect comparisons.

Methodology/Findings

Reviews applying statistical methods to indirectly compare the clinical effectiveness of two interventions using randomised controlled trials were eligible. We searched the Database of Abstracts and Reviews of Effects, The Cochrane Library, and Medline (1966–2008). Full review publications were assessed for eligibility. Specific criteria to assess quality were developed and applied. Forty-three reviews were included. Adequate methodology was used to calculate the indirect comparison in 41 reviews. Nineteen reviews assessed the similarity assumption using sensitivity analysis, subgroup analysis, or meta-regression. Eleven reviews compared trial-level characteristics. Twenty-four reviews assessed statistical homogeneity. Twelve reviews investigated causes of heterogeneity. Seventeen reviews included direct and indirect evidence for the same comparison; six reviews assessed consistency. One review combined both evidence types. Twenty-five reviews urged caution in interpretation of results, and 24 reviews indicated when results were from indirect evidence by stating this term with the result.

Conclusions

This review shows that the underlying assumptions are not routinely explored or reported when undertaking indirect comparisons. We recommend, therefore, that the quality of indirect comparisons should be improved, in particular, by assessing assumptions and reporting the assessment methods applied. We propose that the quality criteria applied in this article may provide a basis to help review authors carry out indirect comparisons and to aid appropriate interpretation.

19.
20.
Using 56 adult dental diameters as a subsystem model for craniofacial development, we show that monozygotic (MZ), dizygotic (DZ), and singleton groups differ significantly in developmental relationships assessed by multivariate statistical methods under commonly accepted assumptions. Given the differences observed, we suggest that any assumption of developmental equivalence between MZ and DZ twins, or between twins of either group and singletons, for variables of craniofacial or behavioral development, may be subject to serious doubt. Implications for twin study theory and methodology, and for study of early human development, are discussed.
