Similar References
20 similar references found (search time: 15 ms)
1.
    
Hierarchical models are recommended for meta-analyzing diagnostic test accuracy (DTA) studies. The bivariate random-effects model is currently widely used to synthesize a pair of test sensitivity and specificity across studies using a logit transformation. This model assumes a bivariate normal distribution for the random effects. However, this assumption is restrictive and can be violated. When the assumption fails, inferences could be misleading. In this paper, we extend the current bivariate random-effects model by assuming a flexible bivariate skew-normal distribution for the random effects in order to robustly model logit sensitivities and logit specificities. The marginal distribution of the proposed model is analytically derived so that parameter estimation can be performed using standard likelihood methods. A weighted-average method is adopted to estimate the overall logit-transformed sensitivity and specificity. An extensive simulation study is carried out to investigate the performance of the proposed model compared to other standard models. Overall, the proposed model performs better in terms of the confidence interval width of the average logit-transformed sensitivity and specificity than the standard bivariate linear mixed model and the bivariate generalized linear mixed model. Simulations have also shown that the proposed model performs better than the well-established bivariate linear mixed model in terms of bias and is comparable with regard to the root mean squared error (RMSE) of the between-study (co)variances. The proposed method is also illustrated using data from a published meta-analysis.
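As a concrete illustration of the distributional assumption being relaxed here, the sketch below evaluates an Azzalini-type bivariate skew-normal density for the study-specific (logit sensitivity, logit specificity) pair. All parameter values (location, scale matrix, skewness vector) are illustrative assumptions, not estimates from the paper, and the code is not the authors' implementation.

```python
# Minimal sketch (not the authors' code): evaluating an Azzalini-type bivariate
# skew-normal density for the random effects (logit sensitivity, logit specificity).
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import multivariate_normal, norm

def bivariate_skew_normal_pdf(x, xi, omega, alpha):
    """Density 2 * phi_2(x; xi, omega) * Phi(alpha' z), with z the componentwise standardized deviation."""
    x, xi = np.asarray(x, dtype=float), np.asarray(xi, dtype=float)
    omega, alpha = np.asarray(omega, dtype=float), np.asarray(alpha, dtype=float)
    scale = np.sqrt(np.diag(omega))            # marginal scale parameters
    z = (x - xi) / scale                       # standardized deviation
    return 2.0 * multivariate_normal.pdf(x, mean=xi, cov=omega) * norm.cdf(alpha @ z)

# Illustrative values: mean logit-sensitivity 1.5, mean logit-specificity 2.0,
# between-study SDs 0.6 / 0.5, correlation -0.4, skewness vector (2, -1).
xi = np.array([1.5, 2.0])
omega = np.array([[0.36, -0.12], [-0.12, 0.25]])
alpha = np.array([2.0, -1.0])
print(bivariate_skew_normal_pdf([1.2, 2.3], xi, omega, alpha))
```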

2.
    
The development of methods for the meta-analysis of diagnostic test accuracy (DTA) studies is still an active area of research. While methods for the standard case where each study reports a single pair of sensitivity and specificity are nearly routinely applied nowadays, methods to meta-analyze receiver operating characteristic (ROC) curves are not widely used. This situation is more complex, as each primary DTA study may report on several pairs of sensitivity and specificity, each corresponding to a different threshold. In a case study published earlier, we applied a number of methods for meta-analyzing DTA studies with multiple thresholds to a real-world data example (Zapf et al., Biometrical Journal. 2021; 63(4): 699–711). To date, no simulation study exists that systematically compares different approaches with respect to their performance in various scenarios when the truth is known. In this article, we aim to fill this gap and present the results of a simulation study that compares three frequentist approaches for the meta-analysis of ROC curves. We performed a systematic simulation study, motivated by an example from medical research. In the simulations, all three approaches worked partially well. The approach by Hoyer and colleagues was slightly superior in most scenarios and is recommended in practice.

3.
    
Stepped wedge trials are a type of cluster-randomized study in which the intervention is introduced to each cluster in a random order over time. This design is often used to assess the effect of a new intervention as it is rolled out across a series of clinics or communities. Based on a permutation argument, we derive a closed-form expression for an estimate of the intervention effect, along with its standard error, for a stepped wedge design trial. We show that these estimates are robust to misspecification of both the mean and covariance structure of the underlying data-generating mechanism, thereby providing a robust approach to inference for the intervention effect in stepped wedge designs. We use simulations to evaluate the type I error and power of the proposed estimate and to compare its performance to that of the optimal estimate when the correct model specification is known. The limitations, possible extensions, and open problems regarding the method are discussed.
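The closed-form estimator itself is not reproduced in the abstract, so the sketch below only illustrates the general randomization idea on a stepped wedge layout: a simple within-period (treated minus control) contrast averaged over periods serves as the test statistic, and its null distribution is obtained by re-permuting the clusters' crossover order. The design dimensions, the statistic, and the data are illustrative assumptions.

```python
# Illustrative sketch of a permutation test for an intervention effect in a
# stepped wedge layout (not the authors' closed-form estimator).
import numpy as np

rng = np.random.default_rng(1)
n_clusters, n_periods = 8, 5

def stepped_wedge_schedule(order):
    """Cluster-by-period treatment indicators: cluster k switches after its randomized step order[k]."""
    X = np.zeros((n_clusters, n_periods))
    for k, step in enumerate(order):
        X[k, step + 1:] = 1.0                 # treated from period step+1 onward
    return X

def effect_estimate(Y, X):
    """Average of within-period differences in cluster-period means (treated minus control)."""
    diffs = []
    for t in range(n_periods):
        treated, control = Y[X[:, t] == 1, t], Y[X[:, t] == 0, t]
        if treated.size and control.size:
            diffs.append(treated.mean() - control.mean())
    return np.mean(diffs)

order = rng.permutation(np.repeat(np.arange(4), 2))     # two clusters cross over per step
X = stepped_wedge_schedule(order)
period_effect = np.linspace(0.0, 0.4, n_periods)         # secular time trend
cluster_effect = rng.normal(0.0, 0.3, size=(n_clusters, 1))
Y = 1.0 + period_effect + cluster_effect + 0.5 * X + rng.normal(0.0, 0.2, (n_clusters, n_periods))

obs = effect_estimate(Y, X)
perm = [effect_estimate(Y, stepped_wedge_schedule(rng.permutation(order))) for _ in range(2000)]
p_value = np.mean(np.abs(perm) >= abs(obs))
print(f"estimate = {obs:.3f}, permutation p-value = {p_value:.3f}")
```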

4.
    
Meta-analysis of binary data is challenging when the event under investigation is rare, and standard models for random-effects meta-analysis perform poorly in such settings. In this simulation study, we investigate the performance of different random-effects meta-analysis models in terms of point and interval estimation of the pooled log odds ratio in rare events meta-analysis. First and foremost, we evaluate the performance of a hypergeometric-normal model from the family of generalized linear mixed models (GLMMs), which has been recommended, but has not yet been thoroughly investigated for rare events meta-analysis. Performance of this model is compared to performance of the beta-binomial model, which yielded favorable results in previous simulation studies, and to the performance of models that are frequently used in rare events meta-analysis, such as the inverse variance model and the Mantel–Haenszel method. In addition to considering a large number of simulation parameters inspired by real-world data settings, we study the comparative performance of the meta-analytic models under two different data-generating models (DGMs) that have been used in past simulation studies. The results of this study show that the hypergeometric-normal GLMM is useful for meta-analysis of rare events when moderate to large heterogeneity is present. In addition, our study reveals important insights with regard to the performance of the beta-binomial model under different DGMs from the binomial-normal family. In particular, we demonstrate that although misalignment of the beta-binomial model with the DGM affects its performance, it shows more robustness to the DGM than its competitors.
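For orientation, the sketch below computes the classical Mantel–Haenszel pooled odds ratio, one of the standard comparators named above; it does not implement the hypergeometric-normal GLMM. The 2x2 table counts are illustrative.

```python
# Sketch of the classical Mantel-Haenszel pooled odds ratio for sparse 2x2 tables
# (one of the standard comparators mentioned above); table counts are illustrative.
import numpy as np

# Columns: events_trt, n_trt, events_ctl, n_ctl (rare events, including zero cells).
tables = np.array([
    [1, 120, 0, 118],
    [0, 250, 2, 245],
    [3, 400, 1, 395],
    [0,  80, 0,  82],
])

a = tables[:, 0]                      # events, treatment
b = tables[:, 1] - a                  # non-events, treatment
c = tables[:, 2]                      # events, control
d = tables[:, 3] - c                  # non-events, control
n = tables[:, 1] + tables[:, 3]       # per-study total

or_mh = np.sum(a * d / n) / np.sum(b * c / n)   # double-zero studies drop out naturally
print(f"Mantel-Haenszel pooled OR: {or_mh:.3f}")
```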

5.
    
Interference occurs between individuals when the treatment (or exposure) of one individual affects the outcome of another individual. Previous work on causal inference methods in the presence of interference has focused on the setting where it is a priori assumed that there is “partial interference,” in the sense that individuals can be partitioned into groups wherein there is no interference between individuals in different groups. Bowers et al. (2012, Political Anal, 21, 97–124) and Bowers et al. (2016, Political Anal, 24, 395–403) consider randomization-based inferential methods that allow for more general interference structures in the context of randomized experiments. In this paper, extensions of Bowers et al. that allow for failure time outcomes subject to right censoring are proposed. Permitting right-censored outcomes is challenging because standard randomization-based tests of the null hypothesis of no treatment effect assume that whether an individual is censored does not depend on treatment. The proposed extension of Bowers et al. to allow for censoring entails adapting the method of Wang et al. (2010, Biostatistics, 11, 676–692) for two-sample survival comparisons in the presence of unequal censoring. The methods are examined via simulation studies and utilized to assess the effects of cholera vaccination in an individually randomized trial of 73 000 children and women in Matlab, Bangladesh.
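As a rough point of reference only, the sketch below shows a generic randomization test with right-censored outcomes: a manually computed log-rank statistic is re-evaluated under re-randomized treatment labels. It ignores interference structure and does not include the unequal-censoring adjustment of Wang et al. that the paper adapts; all data are simulated.

```python
# Generic sketch of a randomization-based test with right-censored outcomes:
# a log-rank statistic recomputed under re-randomization of the treatment labels.
# This ignores interference and assumes censoring does not depend on treatment.
import numpy as np

rng = np.random.default_rng(7)
n = 200
trt = rng.binomial(1, 0.5, n)
event_time = rng.exponential(1.0 / np.where(trt == 1, 0.8, 1.0))   # mild protective effect
censor_time = rng.exponential(1.2, n)
time = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

def logrank_stat(time, event, group):
    """Standardized log-rank statistic (observed minus expected events in group 1)."""
    obs_minus_exp, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        d = np.sum(event[time == t])                   # events at time t, both groups
        n_t = at_risk.sum()
        n1 = np.sum(at_risk & (group == 1))
        d1 = np.sum(event[(time == t) & (group == 1)])
        obs_minus_exp += d1 - d * n1 / n_t
        if n_t > 1:
            var += d * (n1 / n_t) * (1 - n1 / n_t) * (n_t - d) / (n_t - 1)
    return obs_minus_exp / np.sqrt(var)

obs = logrank_stat(time, event, trt)
perm = [logrank_stat(time, event, rng.permutation(trt)) for _ in range(1000)]
p_value = np.mean(np.abs(perm) >= abs(obs))
print(f"log-rank z = {obs:.2f}, permutation p-value = {p_value:.3f}")
```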

6.
    
Functional data are smooth, often continuous, random curves, which can be seen as an extreme case of multivariate data with infinite dimensionality. Just as componentwise inference for multivariate data naturally performs feature selection, subsetwise inference for functional data performs domain selection. In this paper, we present a unified testing framework for domain selection on populations of functional data. In detail, p-values of hypothesis tests performed on pointwise evaluations of functional data are suitably adjusted for providing control of the familywise error rate (FWER) over a family of subsets of the domain. We show that several state-of-the-art domain selection methods fit within this framework and differ from each other by the choice of the family over which the control of the FWER is provided. In the existing literature, these families are always defined a priori. In this work, we also propose a novel approach, coined thresholdwise testing, in which the family of subsets is instead built in a data-driven fashion. The method seamlessly generalizes to multidimensional domains in contrast to methods based on a priori defined families. We provide theoretical results with respect to consistency and control of the FWER for the methods within the unified framework. We illustrate the performance of the methods within the unified framework on simulated and real data examples and compare their performance with other existing methods.
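For comparison, the sketch below shows the simplest pointwise strategy: two-sample t-statistics on a grid of domain points with a max-statistic permutation adjustment, which controls the FWER over the grid. It is a baseline illustration on simulated curves, not the thresholdwise procedure proposed in the paper.

```python
# Basic sketch of pointwise testing on functional data with a max-statistic
# permutation adjustment (FWER control over grid points); a simple baseline,
# not the thresholdwise procedure proposed in the paper. Data are simulated.
import numpy as np

rng = np.random.default_rng(3)
n_per_group, n_grid = 30, 100
t = np.linspace(0, 1, n_grid)

def smooth_noise(n):
    """Smooth random curves: white noise convolved with a Gaussian kernel."""
    white = rng.normal(size=(n, n_grid))
    kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
    kernel /= kernel.sum()
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, white)

group1 = smooth_noise(n_per_group)
group2 = smooth_noise(n_per_group) + 0.3 * np.exp(-((t - 0.7) / 0.1) ** 2)   # local shift near t = 0.7

def pointwise_t(a, b):
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return (a.mean(axis=0) - b.mean(axis=0)) / se

obs = pointwise_t(group1, group2)
stacked = np.vstack([group1, group2])
max_null = []
for _ in range(500):
    idx = rng.permutation(len(stacked))
    max_null.append(np.max(np.abs(pointwise_t(stacked[idx[:n_per_group]], stacked[idx[n_per_group:]]))))
adj_p = np.array([(np.array(max_null) >= abs(s)).mean() for s in obs])   # single-step max-T adjustment
print("points flagged at FWER 0.05:", np.sum(adj_p <= 0.05), "of", n_grid)
```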

7.
8.
    
Pfeiffer RM, Ryan L, Litonjua A, Pee D. Biometrics. 2005;61(4):982-991.
The case-cohort design for longitudinal data consists of a subcohort sampled at the beginning of the study that is followed repeatedly over time, and a case sample that is ascertained through the course of the study. Although some members in the subcohort may experience events over the study period, we refer to it as the "control-cohort." The case sample is a random sample of subjects not in the control-cohort, who have experienced at least one event during the study period. Different correlations among repeated observations on the same individual are accommodated by a two-level random-effects model. This design allows consistent estimation of all parameters estimable in a cohort design and is a cost-effective way to study the effects of covariates on repeated observations of relatively rare binary outcomes when exposure assessment is expensive. It is an extension of the case-cohort design (Prentice, 1986, Biometrika 73, 1-11) and the bidirectional case-crossover design (Navidi, 1998, Biometrics 54, 596-605). A simulation study compares the efficiency of the longitudinal case-cohort design to a full cohort analysis, and we find that in certain situations up to 90% efficiency can be obtained with half the sample size required for a full cohort analysis. A bootstrap method is presented that permits testing for intra-subject homogeneity in the presence of unidentifiable nuisance parameters in the two-level random-effects model. As an illustration we apply the design to data from an ongoing study of childhood asthma.

9.
10.
Multivariate meta-analysis models can be used to synthesize multiple, correlated endpoints such as overall and disease-free survival. A hierarchical framework for multivariate random-effects meta-analysis includes both within-study and between-study correlation. The within-study correlations are assumed known, but they are usually unavailable, which limits the multivariate approach in practice. In this paper, we consider synthesis of 2 correlated endpoints and propose an alternative model for bivariate random-effects meta-analysis (BRMA). This model maintains the individual weighting of each study in the analysis but includes only one overall correlation parameter, rho, which removes the need to know the within-study correlations. Further, the only data needed to fit the model are those required for a separate univariate random-effects meta-analysis (URMA) of each endpoint, currently the common approach in practice. This makes the alternative model immediately applicable to a wide variety of evidence synthesis situations, including studies of prognosis and surrogate outcomes. We examine the performance of the alternative model through analytic assessment, a realistic simulation study, and application to data sets from the literature. Our results show that, unless rho is very close to 1 or -1, the alternative model produces appropriate pooled estimates with little bias that (i) are very similar to those from a fully hierarchical BRMA model where the within-study correlations are known and (ii) have better statistical properties than those from separate URMAs, especially given missing data. The alternative model is also less prone to estimation at parameter space boundaries than the fully hierarchical model and thus may be preferred even when the within-study correlations are known. It also suitably estimates a function of the pooled estimates and their correlation; however, it only provides an approximate indication of the between-study variation. The alternative model greatly facilitates the utilization of correlation in meta-analysis and should allow an increased application of BRMA in practice.
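A minimal sketch of this type of model, under my own parameterization choices, is given below: each study's pair of estimates is modeled as bivariate normal with marginal variances equal to within-study plus between-study variance and a single overall correlation parameter rho, fitted by maximum likelihood on simulated data. The exact likelihood, estimation method (e.g., REML), and implementation in the paper may differ.

```python
# Sketch of a bivariate random-effects meta-analysis with one overall correlation
# parameter, fitted by maximum likelihood on simulated data; this is an illustrative
# approximation, not the paper's exact estimation procedure.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
k = 20
true_beta, tau, rho_b = np.array([0.5, 1.0]), np.array([0.3, 0.4]), 0.6
within_sd = rng.uniform(0.1, 0.3, size=(k, 2))
re_cov = np.diag(tau) @ np.array([[1, rho_b], [rho_b, 1]]) @ np.diag(tau)
theta = rng.multivariate_normal(true_beta, re_cov, size=k)
y = theta + rng.normal(0.0, within_sd)                      # observed effect per study and endpoint

def neg_loglik(par):
    beta = par[:2]
    tau1, tau2 = np.exp(par[2:4])                           # between-study SDs kept positive
    rho = np.tanh(par[4])                                   # overall correlation kept in (-1, 1)
    nll = 0.0
    for i in range(k):
        v1 = within_sd[i, 0] ** 2 + tau1 ** 2
        v2 = within_sd[i, 1] ** 2 + tau2 ** 2
        cov = np.array([[v1, rho * np.sqrt(v1 * v2)], [rho * np.sqrt(v1 * v2), v2]])
        nll -= multivariate_normal.logpdf(y[i], mean=beta, cov=cov)
    return nll

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, -1.0, -1.0, 0.0]), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
beta_hat, rho_hat = fit.x[:2], np.tanh(fit.x[4])
print("pooled estimates:", beta_hat.round(3), "overall correlation:", round(rho_hat, 3))
```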

11.
12.
    
Unipolar major depressive disorder (MDD) is a prevalent, disabling condition with multiple genetic and environmental factors impacting disease risk. The diagnosis of MDD relies on a cumulative measure derived from multiple trait dimensions and alone is limited in elucidating MDD genetic determinants. We and others have proposed that MDD may be better dissected using paradigms that assess how specific genes associate with component features of MDD. This within-disease design requires both a well-phenotyped cohort and a robust statistical approach that retains power with multiple tests of genetic association. In the present study, common polymorphic variants of genes related to central monoaminergic and cholinergic pathways, which previous studies have linked to functional change in vitro or to depression associations in vivo, were genotyped in 110 individuals with unipolar MDD. Subphenotypic characteristics were examined using responses to individual items assessed with the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), the 17-item Hamilton Rating Scale for Depression (HAM-D) and the NEO Five Factor Inventory. Multivariate Permutation Testing (MPT) was used to infer genotype-phenotype relationships underlying dimensional findings within clinical categories. MPT analyses show significant associations of the norepinephrine transporter (NET, SLC6A2) -182 T/C (rs2242446) with recurrent depression [odds ratio, OR = 4.15 (1.91-9.02)], NET -3081 A/T (rs28386840) with increase in appetite [OR = 3.58 (1.53-8.39)] and the presynaptic choline transporter (CHT, SLC5A7) Ile89Val (rs1013940) with HAM-D-17 total score, that is, overall depression severity [OR = 2.74 (1.05-7.18)]. These relationships illustrate an approach to the elucidation of gene influences on trait components of MDD and, with replication, may help identify MDD subpopulations that can benefit from more targeted pharmacotherapy.

13.
Microsatellites are used to unravel the fine-scale genetic structure of a hybrid zone between chromosome races Valais and Cordon of the common shrew (Sorex araneus) located in the French Alps. A total of 269 individuals collected between 1992 and 1995 were typed for seven microsatellite loci. A modified version of the classical multiple correspondence analysis is carried out. This analysis clearly shows the dichotomy between the two races. Several approaches are used to study genetic structuring. Gene flow is clearly reduced between these chromosome races and is estimated at one migrant every two generations using R-statistics and one migrant per generation using F-statistics. Hierarchical F- and R-statistics are compared and their efficiency to detect inter- and intraracial patterns of divergence is discussed. Within-race genetic structuring is significant, but remains weak. FST displays similar values on both sides of the hybrid zone, although no environmental barriers are found on the Cordon side, whereas the Valais side is divided by several mountain rivers. We introduce the exact G-test to microsatellite data, which proved to be a powerful test to detect genetic differentiation within as well as among races. The genetic background of karyotypic hybrids was compared with the genetic background of pure parental forms using a CRT-MCA. Our results indicate that, without knowledge of the karyotypes, we would not have been able to distinguish these hybrids from karyotypically pure samples.

14.
    
Pooling the relative risk (RR) across studies investigating rare events, for example, adverse events, via meta-analytical methods still presents a challenge to researchers. The main reason for this is the high probability of observing no events in treatment or control group or both, resulting in an undefined log RR (the basis of standard meta-analysis). Other technical challenges ensue, for example, the violation of normality assumptions, or bias due to exclusion of studies and application of continuity corrections, leading to poor performance of standard approaches. In the present simulation study, we compared three recently proposed alternative models (random-effects [RE] Poisson regression, RE zero-inflated Poisson [ZIP] regression, binomial regression) to the standard methods in conjunction with different continuity corrections and to different versions of beta-binomial regression. Based on our investigation of the models' performance in 162 different simulation settings informed by meta-analyses from the Cochrane database and distinguished by different underlying true effects, degrees of between-study heterogeneity, numbers of primary studies, group size ratios, and baseline risks, we recommend the use of the RE Poisson regression model. The beta-binomial model recommended by Kuss (2015) also performed well. Decent performance was also exhibited by the ZIP models, but they also had considerable convergence issues. We stress that these recommendations are only valid for meta-analyses with larger numbers of primary studies. All models are applied to data from two Cochrane reviews to illustrate differences between and issues of the models. Limitations as well as practical implications and recommendations are discussed; a flowchart summarizing recommendations is provided.
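For context, the sketch below implements only the standard comparator: inverse-variance (DerSimonian-Laird) random-effects pooling of log relative risks with a 0.5 continuity correction added to every cell of tables containing zero cells. It is not the recommended RE Poisson model, and the counts are illustrative.

```python
# Sketch of the standard comparator discussed above: inverse-variance random-effects
# (DerSimonian-Laird) pooling of log relative risks with a 0.5 continuity correction.
# Counts are illustrative.
import numpy as np

# Columns: events_trt, n_trt, events_ctl, n_ctl (rare events; one double-zero study)
data = np.array([
    [0, 150, 2, 148],
    [1, 300, 1, 310],
    [4, 520, 1, 505],
    [0,  90, 0,  95],
], dtype=float)

a, n1, c, n2 = data.T
cc = ((a == 0) | (c == 0)).astype(float) * 0.5   # 0.5 added to every cell of affected tables
a_c, c_c = a + cc, c + cc
n1_c, n2_c = n1 + 2 * cc, n2 + 2 * cc

log_rr = np.log((a_c / n1_c) / (c_c / n2_c))
var = 1 / a_c - 1 / n1_c + 1 / c_c - 1 / n2_c
w = 1 / var                                      # fixed-effect (inverse-variance) weights

# DerSimonian-Laird between-study variance
mu_fe = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - mu_fe) ** 2)
tau2 = max(0.0, (Q - (len(w) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1 / (var + tau2)
mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
print(f"pooled log RR = {mu_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.3f}")
```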

15.
The aim of the present study was to identify specific markers that mirror liver fibrosis progression as an alternative to biopsy when biopsy is contraindicated, especially in children. After liver biopsies were performed, serum samples from 30 paediatric hepatitis C virus (HCV) patients (aged 8-14 years) were analysed and compared with samples from 30 healthy subjects. All subjects were tested for the presence of serum anti-HCV antibodies. Direct biomarkers for liver fibrosis, including transforming growth factor-β1, tissue inhibitor of matrix metalloproteinase-1 (TIMP-1), hyaluronic acid (HA), procollagen type III amino-terminal peptide (PIIINP) and osteopontin (OPN), were measured. The indirect biomarkers aspartate and alanine aminotransferases, albumin and bilirubin were also tested. The results revealed a significant increase in the serum marker levels in HCV-infected children compared with the healthy group, whereas albumin levels exhibited a significant decrease. Significantly higher levels of PIIINP, TIMP-1, OPN and HA were detected in HCV-infected children with moderate to severe fibrosis compared with children with mild fibrosis (p < 0.05). The diagnostic accuracy of these direct biomarkers, represented by sensitivity, specificity and positive predictive value, emphasises the utility of PIIINP, TIMP-1, OPN and HA as indicators of liver fibrosis among HCV-infected children.

16.
    
The meta-analysis of diagnostic accuracy studies is often of interest in screening programs for many diseases. The typical summary statistics for studies chosen for a diagnostic accuracy meta-analysis are often two dimensional: sensitivities and specificities. The common statistical analysis approach for the meta-analysis of diagnostic studies is based on the bivariate generalized linear mixed model (BGLMM), which has study-specific interpretations. In this article, we present a population-averaged (PA) model using generalized estimating equations (GEE) for making inference on the mean specificity and sensitivity of a diagnostic test in the population represented by the meta-analytic studies. We also derive the marginalized counterparts of the regression parameters from the BGLMM. We illustrate the proposed PA approach through two data set examples and compare the performance of estimators of the marginal regression parameters from the PA model with those of the marginalized regression parameters from the BGLMM through Monte Carlo simulation studies. Overall, both the marginalized BGLMM and GEE with sandwich standard errors maintained nominal 95% confidence interval coverage levels for mean specificity and mean sensitivity in meta-analyses of 25 or more studies, even under misspecification of the covariance structure of the bivariate positive test counts for diseased and nondiseased subjects.
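A minimal sketch of the population-averaged idea is given below: subject-level test results, clustered by study, are analyzed with statsmodels' GEE using a logistic mean model and an exchangeable working correlation. The data are simulated and the specification is a generic illustration rather than the paper's exact model.

```python
# Sketch of a population-averaged (GEE) logistic model for DTA meta-analysis data
# in long, subject-level form (one row per subject, clustered by study). Simulated
# data; a generic illustration, not the paper's exact specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
expit = lambda x: 1 / (1 + np.exp(-x))

rows = []
for study in range(25):
    sens = expit(1.4 + rng.normal(0, 0.4))                  # study-specific sensitivity
    spec = expit(1.8 + rng.normal(0, 0.4))                  # study-specific specificity
    for diseased, p, n in [(1, sens, 60), (0, 1 - spec, 90)]:
        y = rng.binomial(1, p, n)                           # 1 = test positive
        rows += [{"study": study, "diseased": diseased, "test_pos": yi} for yi in y]
df = pd.DataFrame(rows)

model = smf.gee("test_pos ~ diseased", groups="study", data=df,
                family=sm.families.Binomial(), cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())

b = result.params                                           # robust (sandwich) SEs come with GEE
print("mean specificity:", 1 - expit(b["Intercept"]))
print("mean sensitivity:", expit(b["Intercept"] + b["diseased"]))
```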

17.
    
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets.
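The two transformations themselves are standard; the sketch below simply computes them, together with their usual approximate variances, for illustrative counts of positives out of n.

```python
# Sketch of the two variance-stabilizing transformations for a study reporting
# x positives out of n (e.g., true positives out of diseased for sensitivity),
# with their usual approximate variances. Counts are illustrative.
import numpy as np

def arcsine_sqrt(x, n):
    """Arcsine square root transform; approximate variance 1/(4n)."""
    return np.arcsin(np.sqrt(x / n)), 1.0 / (4.0 * n)

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform; approximate variance 1/(4n + 2)."""
    t = 0.5 * (np.arcsin(np.sqrt(x / (n + 1.0))) + np.arcsin(np.sqrt((x + 1.0) / (n + 1.0))))
    return t, 1.0 / (4.0 * n + 2.0)

x, n = np.array([45, 0, 60]), np.array([50, 40, 62])   # works even with zero events
for name, (t, v) in [("arcsine sqrt", arcsine_sqrt(x, n)),
                     ("Freeman-Tukey", freeman_tukey(x, n))]:
    print(name, "transformed:", np.round(t, 3), "variance:", np.round(v, 4))
```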

18.
    
Accurate estimation of human immunodeficiency virus (HIV) incidence rates is crucial for the monitoring of HIV epidemics, the evaluation of prevention programs, and the design of prevention studies. Traditional cohort approaches to measuring HIV incidence require repeatedly testing large cohorts of HIV-uninfected individuals with an HIV diagnostic test (e.g., enzyme-linked immunosorbent assay) for long periods of time to identify new infections, which can be prohibitively costly, time-consuming, and subject to loss to follow-up. Cross-sectional approaches based on the usual HIV diagnostic test and biomarkers of recent infection offer important advantages over standard cohort approaches in terms of time, cost, and attrition. Cross-sectional samples usually consist of individuals from different communities. However, small sample sizes limit the ability to estimate community-specific incidence, and existing methods typically ignore heterogeneity in incidence across communities. We propose a permutation test for the null hypothesis of no heterogeneity in incidence rates across communities, develop a random-effects model to account for this heterogeneity and to estimate community-specific incidence, and provide one way to estimate the coefficient of variation. We evaluate the performance of the proposed methods through simulation studies and apply them to data from the National Institute of Mental Health Project ACCEPT, a phase 3 randomized controlled HIV prevention trial in Sub-Saharan Africa, to estimate the overall and community-specific HIV incidence rates.
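A generic sketch of the heterogeneity-test idea is given below: the statistic is the between-community variance of estimated incidence rates (events divided by person-time), and its null distribution is obtained by permuting community labels across individuals. This is a deliberately simplified assumption-laden illustration; it is not the paper's exact statistic and does not involve the cross-sectional, biomarker-based estimation.

```python
# Generic sketch of a permutation test for heterogeneity of incidence rates across
# communities: the statistic is the variance of community-specific rates, and the
# null distribution comes from permuting community labels. Data are simulated.
import numpy as np

rng = np.random.default_rng(21)
n_comm, n_per = 10, 400
true_rates = rng.gamma(shape=4.0, scale=0.005, size=n_comm)       # events per person-year
community = np.repeat(np.arange(n_comm), n_per)
person_time = rng.uniform(0.5, 2.0, size=n_comm * n_per)          # years of follow-up
events = rng.poisson(true_rates[community] * person_time)

def rate_variance(events, person_time, labels):
    rates = np.array([events[labels == c].sum() / person_time[labels == c].sum()
                      for c in np.unique(labels)])
    return rates.var()

obs = rate_variance(events, person_time, community)
perm = [rate_variance(events, person_time, rng.permutation(community)) for _ in range(1000)]
p_value = np.mean(np.array(perm) >= obs)
print(f"observed rate variance = {obs:.2e}, permutation p-value = {p_value:.3f}")
```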

19.
    
Recent reports on livestock environmental impact based on life cycle assessment (LCA) did not fully consider the case of the dairy goat. Assignment of an environmental impact (e.g. global warming potential) to a specific product needs to be related to the appropriate ‘unitary amount’ or functional unit (FU). For milk, the energy content may provide a common basis for a definition of the FU. To date, no ad hoc formulations for the FU of goat milk have been proposed. For these reasons, this study aimed to develop and test one or more predictive models (DPMs) for the gross energy (GE) content of goat milk, based on published compositional data, such as fat (F), protein, total solids (TS), solid non-fat matter (SNF), lactose (Lac) and ash. The DPMs were developed, selected and tested using a linear regression approach, as a meta-analysis (i.e. meta-regression) was not applicable. However, in the final stage, a control procedure for spurious findings was carried out using a Monte Carlo permutation test. Because several published predictive models (PPMs) for GE in cow milk and goat milk were found in the literature, they were tested on the same data set with which the DPMs were developed. The best-performing DPMs and PPMs were compared directly with a subset of the individual data retrieved from the literature. Overall, the paucity of direct measurements of the GE in goat milk was a limiting factor in collecting data from the literature; thus, only a small data set (n=26) was established, even though it was considered sufficiently representative of milk from different goat breeds. The three best PPMs, based on F alone, gave more biased estimates of the GE content of goat milk than the three new DPMs, based on F, F and SNF, and F and TS, respectively. Accordingly, three different formulations of FU are proposed, depending on the availability of data including both F and TS (or F and SNF) or F alone. Even though several metrics can be used in defining the FU for milk to be used in LCAs of goat farming systems, the proposed FU formulations should be adopted in place of the similar energy-based ones developed for other dairy species.
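To make the model-development step concrete, the sketch below fits a linear predictive model of GE from F and TS by least squares and adds a simple Monte Carlo permutation check on the fit. The compositional values are synthetic placeholders, not the study's data set, and the fitted coefficients are not the published DPMs.

```python
# Sketch of developing a predictive model for milk gross energy (GE) from fat (F)
# and total solids (TS) by least squares, with a Monte Carlo permutation check.
# All values below are synthetic placeholders, not the study's data or equations.
import numpy as np

rng = np.random.default_rng(9)
n = 26
F = rng.normal(4.0, 0.6, n)             # fat, g/100 g (illustrative range)
TS = F + rng.normal(8.8, 0.4, n)        # total solids, g/100 g (fat plus illustrative SNF)
GE = 0.38 * F + 0.12 * TS + 0.06 + rng.normal(0.0, 0.05, n)   # MJ/kg, synthetic relation

X = np.column_stack([np.ones(n), F, TS])
coef, *_ = np.linalg.lstsq(X, GE, rcond=None)
resid = GE - X @ coef
r2 = 1 - resid.var() / GE.var()

# Permutation check: how often does a model fitted to shuffled GE reach the observed R^2?
perm_r2 = []
for _ in range(2000):
    GE_p = rng.permutation(GE)
    c, *_ = np.linalg.lstsq(X, GE_p, rcond=None)
    perm_r2.append(1 - (GE_p - X @ c).var() / GE_p.var())

print(f"coefficients (intercept, F, TS): {np.round(coef, 3)}, R^2 = {r2:.3f}, "
      f"permutation p = {np.mean(np.array(perm_r2) >= r2):.4f}")
```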

20.
    
Proschan MA, Nason M. Biometrics. 2009;65(1):316-322.
Two-by-two tables arise in a number of diverse settings in biomedical research, including analysis of data from a clinical trial with a binary outcome and gating methods in flow cytometry to separate antigen-specific immune responses from general immune responses. These applications offer interesting challenges concerning what we should really be conditioning on: the total number of events, the number of events in the control condition, etc. We give several biostatistics examples to illustrate the complexities of analyzing what appear to be simple data.
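The sketch below contrasts analyses of the same 2x2 table that condition on different quantities: Fisher's exact test (both margins fixed), the chi-square test (asymptotic comparison of two proportions), and a binomial test conditional only on the total number of events under a rare-event/Poisson approximation. The counts are illustrative and unrelated to the applications mentioned above.

```python
# Sketch contrasting analyses of the same 2x2 table that condition on different
# quantities. Counts are illustrative and not taken from the applications above.
from scipy.stats import fisher_exact, chi2_contingency, binomtest

table = [[9, 491],    # events / non-events, condition A (n = 500)
         [2, 498]]    # events / non-events, condition B (n = 500)

# Fisher's exact test conditions on both margins (total events and group sizes).
odds_ratio, p_fisher = fisher_exact(table)

# The chi-square test is an asymptotic comparison of the two proportions.
chi2, p_chi2, dof, expected = chi2_contingency(table)

# Conditioning only on the total number of events (rare-event/Poisson approximation):
# given 11 events in total and equal group sizes, the split should be Binomial(11, 0.5).
p_binom = binomtest(9, n=11, p=500 / 1000).pvalue

print(f"Fisher exact p = {p_fisher:.4f} (OR = {odds_ratio:.2f})")
print(f"Chi-square p   = {p_chi2:.4f}")
print(f"Binomial (conditional on total events) p = {p_binom:.4f}")
```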
