Similar Articles
20 similar articles found (search time: 31 ms)
1.
Susan Murray 《Biometrics》2001,57(2):361-368
This research introduces methods for nonparametric testing of weighted integrated survival differences in the context of paired censored survival designs. The current work extends work done by Pepe and Fleming (1989, Biometrics 45, 497-507), which considered similar test statistics directed toward independent treatment group comparisons. An asymptotic closed-form distribution of the proposed family of tests is presented, along with variance estimates constructed under null and alternative hypotheses using nonparametric maximum likelihood estimates of the closed-form quantities. The described method allows for additional information from individuals with no corresponding matched pair member to be incorporated into the test statistic in sampling scenarios where singletons are not prone to selection bias. Simulations presented over a range of potential dependence in the paired censored survival data demonstrate substantial power gains associated with taking into account the dependence structure. Consequences of ignoring the paired nature of the data include overly conservative tests in terms of power and size. In fact, simulation results using tests for independent samples in the presence of positive correlation consistently undershot both size and power targets that would have been attained in the absence of correlation. This additional worrisome effect on operating characteristics highlights the need for accounting for dependence in this popular family of tests.

2.
Chan KC  Wang MC 《Biometrics》2012,68(2):521-531
A prevalent sample consists of individuals who have experienced disease incidence but not the failure event at the sampling time. We discuss methods for estimating the distribution function of a random vector defined at baseline for an incident disease population when data are collected by prevalent sampling. A prevalent sampling design is often more focused and economical than an incident study design for studying the survival distribution of a diseased population, but prevalent samples are biased by design. Subjects with longer survival times are more likely to be included in a prevalent cohort, and other baseline variables of interest that are correlated with survival time are also subject to sampling bias induced by the prevalent sampling scheme. Without recognition of the bias, applying the empirical distribution function to estimate the population distribution of baseline variables can lead to serious bias. In this article, nonparametric and semiparametric methods are developed for distribution estimation of baseline variables using prevalent data.
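The length-bias mechanism and its inverse-weighting correction can be illustrated with a small simulation (an editorial sketch, not the authors' estimators; the covariate model, exponential survival times, and weights are assumptions). Under stationarity, inclusion probability is proportional to survival time, so weighting each sampled subject by the inverse of its survival time recovers the population distribution of a baseline covariate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Population: baseline covariate X with mean 0; survival time T grows with X.
x = rng.normal(0.0, 1.0, n)
t = 0.2 + rng.exponential(np.exp(0.5 * x))   # bounded away from 0 for stable weights

# Prevalent (length-biased) sampling: inclusion probability proportional to T.
idx = rng.choice(n, size=20_000, p=t / t.sum())
xs, ts = x[idx], t[idx]

naive = xs.mean()                             # biased upward: long survivors over-sampled
corrected = np.average(xs, weights=1.0 / ts)  # inverse-length weights undo the bias

print(round(naive, 2), round(corrected, 2))   # naive far from 0, corrected near 0
```

The naive mean estimates E[TX]/E[T] rather than E[X]; the self-normalized 1/T-weighted mean is the standard correction for length bias under stationarity.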

3.
Important scientific insights into chronic diseases affecting several organ systems can be gained from modeling spatial dependence of sites experiencing damage progression. We describe models and methods for studying spatial dependence of joint damage in psoriatic arthritis (PsA). Since a large number of joints may remain unaffected even among individuals with a long disease history, spatial dependence is first modeled in latent joint-specific indicators of susceptibility. Among susceptible joints, a Gaussian copula is adopted for dependence modeling of times to damage. Likelihood and composite likelihoods are developed for settings where individuals are under intermittent observation and progression times are subject to type K interval censoring. Two-stage estimation procedures help mitigate the computational burden arising when a large number of processes (i.e., joints) are under consideration. Simulation studies confirm that the proposed methods provide valid inference, and an application to the motivating data from the University of Toronto Psoriatic Arthritis Clinic yields important insights which can help physicians distinguish PsA from arthritic conditions with different dependence patterns.

4.
A common problem encountered in medical applications is testing the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which is an obvious violation of the proportional hazards assumption, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the type of survival difference and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of the tests in different situations and for various censoring rates, and to recommend an appropriate test that will not fail for a wide range of applications. The simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle, or late times. Even under proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, the Rényi and Cramér-von Mises tests are relatively conservative, whereas the statistic of the Lin-Xu test exhibits apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates, and are therefore applicable to a wider spectrum of alternatives than other tests.
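A minimal two-sample log-rank statistic makes the crossing-hazards setting concrete (an editorial sketch for the uncensored case only, not any of the surveyed procedures; the Weibull and exponential scenarios are assumptions). Two Weibull samples with shapes 0.5 and 2 have survival curves that cross near t = 1, in contrast to a clean proportional-hazards alternative:

```python
import numpy as np

def logrank_z(t1, t2):
    """Two-sample log-rank z-statistic (no censoring; for illustration only)."""
    times = np.concatenate([t1, t2])
    group = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])
    order = np.argsort(times)
    times, group = times[order], group[order]
    o = e = v = 0.0
    n1, n = float(len(t1)), float(len(times))
    i = 0
    while i < len(times):
        j = i
        while j < len(times) and times[j] == times[i]:
            j += 1                           # span tied event times
        d = float(j - i)                     # events at this time
        d1 = d - group[i:j].sum()            # events in group 1
        o += d1
        e += d * n1 / n
        if n > 1.0:
            v += d * n1 * (n - n1) * (n - d) / (n * n * (n - 1.0))
        n1 -= d1
        n -= d
        i = j
    return (o - e) / np.sqrt(v)

rng = np.random.default_rng(1)
m = 3000
z_cross = logrank_z(rng.weibull(0.5, m), rng.weibull(2.0, m))         # curves cross near t = 1
z_prop = logrank_z(rng.exponential(1.0, m), rng.exponential(1.5, m))  # proportional hazards
print(round(z_cross, 1), round(z_prop, 1))
```

Under crossing hazards, early and late hazard differences enter the log-rank sum with opposite signs and partially cancel, which is exactly the power loss the simulations above quantify.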

5.
DiRienzo AG 《Biometrics》2003,59(3):497-504
When testing the null hypothesis that treatment arm-specific survival-time distributions are equal, the log-rank test is asymptotically valid when the distribution of time to censoring is conditionally independent of randomized treatment group given survival time. We introduce a test of the null hypothesis for use when the distribution of time to censoring depends on treatment group and survival time. This test does not make any assumptions regarding independence of censoring time and survival time. Asymptotic validity of this test only requires a consistent estimate of the conditional probability that the survival event is observed given both treatment group and that the survival event occurred before the time of analysis. However, by not making unverifiable assumptions about the data-generating mechanism, there exists a set of possible values of corresponding sample-mean estimates of these probabilities that are consistent with the observed data. Over this subset of the unit square, the proposed test can be calculated and a rejection region identified. A decision on the null that considers uncertainty because of censoring that may depend on treatment group and survival time can then be directly made. We also present a generalized log-rank test that enables us to provide conditions under which the ordinary log-rank test is asymptotically valid. This generalized test can also be used for testing the null hypothesis when the distribution of censoring depends on treatment group and survival time. However, use of this test requires semiparametric modeling assumptions. A simulation study and an example using a recent AIDS clinical trial are provided.

6.
We review the role of density dependence in the stochastic extinction of populations and the role density dependence has played in population viability analysis (PVA) case studies. In total, 32 approaches have been used to model density regulation in theoretical or applied extinction models; 29 of them are mathematical functions of density dependence, and one approach uses empirical relationships between density and survival, reproduction, or growth rates. In addition, quasi-extinction levels are sometimes applied as a substitute for density dependence at low population size. Density dependence has further been modelled via explicit individual spacing behaviour and/or dispersal. We briefly summarise the features of density dependence available in standard PVA software, provide summary statistics about the use of density dependence in PVA case studies, and discuss the effects of density dependence on extinction probability. The introduction of an upper limit for population size has the effect that the probability of ultimate extinction becomes 1. Mean time to extinction increases with carrying capacity if populations start at high density, but carrying capacity often does not have any effect if populations start at low numbers. In contrast, the Allee effect is usually strong when populations start at low densities but has only a limited influence on persistence when populations start at high numbers. Contrary to previous opinions, other forms of density dependence may lead to increased or decreased persistence, depending on the type and strength of density dependence, the degree of environmental variability, and the growth rate. Furthermore, effects may be reversed for different quasi-extinction levels, making the use of arbitrary quasi-extinction levels problematic. Few systematic comparisons of the effects on persistence between different models of density dependence are available. These effects can be strikingly different among models.
Our understanding of the effects of density dependence on extinction of metapopulations is rudimentary, but even opposite effects of density dependence can occur when metapopulations and single populations are contrasted. We argue that spatially explicit models hold particular promise for analysing the effects of density dependence on population viability provided a good knowledge of the biology of the species under consideration exists. Since the results of PVAs may critically depend on the way density dependence is modelled, combined efforts to advance statistical methods, field sampling, and modelling are urgently needed to elucidate the relationships between density, vital rates, and extinction probability.
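The pattern the review describes, with starting density driving extinction risk far more than carrying capacity for populations that start low, can be explored with a toy Ricker model (entirely an editorial sketch; the parameter values are assumptions, not from any PVA cited here):

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_prob(n0, k, r=0.1, sigma=0.4, years=100, reps=2000):
    """Monte Carlo extinction probability under Ricker density dependence
    with lognormal environmental noise."""
    n = np.full(reps, float(n0))
    extinct = np.zeros(reps, dtype=bool)
    for _ in range(years):
        eps = rng.normal(0.0, sigma, reps)
        n = n * np.exp(r * (1.0 - n / k) + eps)  # Ricker update plus env. noise
        extinct |= n < 1.0                        # quasi-extinction below one individual
        n = np.where(extinct, 0.0, n)
    return extinct.mean()

p_low, p_high = extinction_prob(2, 100), extinction_prob(50, 100)
print(p_low, p_high)   # starting size matters: p_low > p_high
```

Varying `sigma` in the same sketch shows the interaction with environmental variability that the review emphasizes: higher noise sharply raises extinction risk from any given starting density.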

7.
Recently, there has been a great deal of interest in the analysis of multivariate survival data. In most epidemiological studies, survival times of members of the same cluster are related because of unobserved risk factors such as environmental or genetic factors. Therefore, modelling the dependence between events of correlated individuals is required to ensure correct inference on the effects of treatments or covariates on the survival times. In past decades, extensions of the proportional hazards model have been widely considered for modelling multivariate survival data by incorporating a random effect which acts multiplicatively on the hazard function. In this article, we consider the proportional odds model, an alternative to the proportional hazards model under which the hazard ratio between individuals converges to unity eventually. This is a reasonable property particularly when the treatment effect fades out gradually and the homogeneity of the population increases over time. The objective of this paper is to assess the influence of the random effect on the within-subject correlation and the population heterogeneity. We are particularly interested in the properties of the proportional odds model with a univariate random effect and a correlated random effect. The correlations between survival times are derived explicitly for both choices of mixing distribution and are shown to be independent of the covariates. The time path of the odds function among the survivors is also examined to study the effect of the choice of mixing distribution. Modelling multivariate survival data using a univariate mixing distribution may be inadequate, as the random effect not only characterises the dependence of the survival times but also the conditional heterogeneity among the survivors. A robust estimate for the correlation of the logarithms of the survival times within a cluster is obtained regardless of the choice of mixing distribution.
The sensitivity of the estimate of the regression parameter under a misspecification of the mixing distribution is studied through simulation. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

8.
Array CGH is a high-throughput technique designed to detect genomic alterations linked to the development and progression of cancer. The technique yields fluorescence ratios that characterize DNA copy number change in tumor versus healthy cells. Classification of tumors based on aCGH profiles is of scientific interest, but the analysis of these data is complicated by the large number of highly correlated measures. In this article, we develop a supervised Bayesian latent class approach for classification that relies on a hidden Markov model to account for the dependence in the intensity ratios. Supervision means that classification is guided by a clinical endpoint. Posterior inferences are made about class-specific copy number gains and losses. We demonstrate our technique on a study of brain tumors, for which our approach is capable of identifying subsets of tumors with different genomic profiles, and differentiates classes by survival much better than unsupervised methods.

9.
Klein JP  Pelz C  Zhang MJ 《Biometrics》1999,55(2):497-506
A normal distribution regression model with a frailty-like factor to account for statistical dependence between the observed survival times is introduced. This model, as opposed to standard hazard-based frailty models, has survival times that, conditional on the shared random effect, have an accelerated failure time representation. The dependence properties of this model are discussed and maximum likelihood estimation of the model's parameters is considered. A number of examples are considered to illustrate the approach. The estimated degree of dependence is comparable to other models, but the present approach has the advantage that the interpretation of the random effect is simpler than in the frailty model.

10.
Negative density dependence contributes to seedling dynamics in forested ecosystems, but the relative importance of this factor for different woody plant life-forms is not well understood. We used 1 yr of survivorship data for woody seedlings in 17 different plots of lower to mid-montane rain forests on the island of Dominica to examine how seedling height, abiotic factors, and biotic factors such as negative density dependence are related to seedling survival of five different life-forms (canopy, midstory, and understory trees; shrubs; and lianas). Across 64 species, taller seedlings in seedling plots with higher canopy openness, greater seedling density, lower relative abundance of conspecific seedlings, and lower relative abundance of conspecific adults generally had a greater probability of surviving. Height was the strongest predictor of seedling survival for all life-forms except lianas. Greater seedling density was positively related to survival for canopy and midstory trees but negatively related to survival for the other life-forms. For trees, the relative abundance of conspecific seedling and adult neighbors had weak and strong negative effects on survival, respectively. Neither shrub nor liana seedling survival was affected by the relative abundance of conspecific neighbors. Thus, negative density dependence is confirmed as an important structuring mechanism for tree seedling communities but does not seem to be important for lianas and shrubs in Dominican rain forests. These results represent the first direct assessment of controls on seedling survival of all woody life-forms – an important step in understanding the dynamics and structure of the entire woody plant community.

11.
Yue Wei  Yi Liu  Tao Sun  Wei Chen  Ying Ding 《Biometrics》2020,76(2):619-629
Several gene-based association tests for time-to-event traits have been proposed recently to detect whether a gene region (containing multiple variants), as a set, is associated with a survival outcome. However, for bivariate survival outcomes, to the best of our knowledge, there is no statistical method that can be directly applied for gene-based association analysis. Motivated by a genetic study to discover the gene regions associated with the progression of a bilateral eye disease, age-related macular degeneration (AMD), we implement a novel functional regression (FR) method under the copula framework. Specifically, the effects of variants within a gene region are modeled through a functional linear model, which then contributes to the marginal survival functions within the copula. Generalized score test statistics are derived to test for the association between bivariate survival traits and the genetic region. Extensive simulation studies are conducted to evaluate the type I error control and power performance of the proposed approach, with comparisons to several existing methods for a single survival trait, as well as to the marginal Cox FR model using the robust sandwich estimator for bivariate survival traits. Finally, we apply our method to a large AMD study, the Age-related Eye Disease Study, to identify the gene regions that are associated with AMD progression.

12.
Mandel M  Betensky RA 《Biometrics》2007,63(2):405-412
Several goodness-of-fit tests of a lifetime distribution have been suggested in the literature; many take into account censoring and/or truncation of event times. In some contexts, a goodness-of-fit test for the truncation distribution is of interest. In particular, better estimates of the lifetime distribution can be obtained when knowledge of the truncation law is exploited. In cross-sectional sampling, for example, there are theoretical justifications for the assumption of a uniform truncation distribution, and several studies have used it to improve the efficiency of their survival estimates. The duality of lifetime and truncation in the absence of censoring enables methods for testing goodness of fit of the lifetime distribution to be used for testing goodness of fit of the truncation distribution. However, under random censoring, this duality does not hold and different tests are required. In this article, we introduce several goodness-of-fit tests for the truncation distribution and investigate their performance in the presence of censored event times using simulation. We demonstrate the use of our tests on two data sets.

13.
Dendukuri N  Joseph L 《Biometrics》2001,57(1):158-167
Many analyses of results from multiple diagnostic tests assume the tests are statistically independent conditional on the true disease status of the subject. This assumption may be violated in practice, especially in situations where none of the tests is a perfectly accurate gold standard. Classical inference for models accounting for the conditional dependence between tests requires that results from at least four different tests be used in order to obtain an identifiable solution, but it is not always feasible to have results from this many tests. We use a Bayesian approach to draw inferences about the disease prevalence and test properties while adjusting for the possibility of conditional dependence between tests, particularly when we have only two tests. We propose both fixed and random effects models. Since with fewer than four tests the problem is nonidentifiable, the posterior distributions are strongly dependent on the prior information about the test properties and the disease prevalence, even with large sample sizes. If the degree of correlation between the tests is known a priori with high precision, then our methods adjust for the dependence between the tests. Otherwise, our methods provide adjusted inferences that incorporate all of the uncertainty inherent in the problem, typically resulting in wider interval estimates. We illustrate our methods using data from a study on the prevalence of Strongyloides infection among Cambodian refugees to Canada.
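The fixed-effects form of conditional dependence between two tests can be written down directly: each cross-classified cell probability receives a covariance correction within the diseased and non-diseased strata. A sketch with assumed parameter values (an editorial illustration, not the Strongyloides analysis):

```python
import numpy as np

# Illustrative values (assumptions, not estimates from the study):
pi = 0.3                      # disease prevalence
se = np.array([0.9, 0.8])     # sensitivities of tests 1 and 2
sp = np.array([0.95, 0.85])   # specificities of tests 1 and 2
cov_d, cov_nd = 0.03, 0.01    # covariances among diseased / non-diseased

def cell(t1, t2):
    """P(T1 = t1, T2 = t2) for binary test results t1, t2 in {0, 1}."""
    s1 = se[0] if t1 else 1 - se[0]
    s2 = se[1] if t2 else 1 - se[1]
    c1 = (1 - sp[0]) if t1 else sp[0]
    c2 = (1 - sp[1]) if t2 else sp[1]
    sign = 1 if t1 == t2 else -1          # concordant cells gain, discordant lose
    return pi * (s1 * s2 + sign * cov_d) + (1 - pi) * (c1 * c2 + sign * cov_nd)

probs = [cell(a, b) for a in (0, 1) for b in (0, 1)]
print(np.round(probs, 4), round(sum(probs), 6))  # the four cells sum to 1
```

Setting both covariances to zero recovers the usual conditional-independence likelihood, which is why the data alone cannot identify all parameters with only two tests and priors carry real weight.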

14.
The copula of a bivariate distribution, constructed by making marginal transformations of each component, captures all the information in the bivariate distribution about the dependence between the two variables. For frailty models for bivariate data, the choice of a family of distributions for the random frailty corresponds to the choice of a parametric family for the copula. We propose a class of tests, based on bivariate right-censored data, of the hypothesis that the copula is in a given parametric family with unspecified association parameter. These tests are based on first making marginal Kaplan-Meier transformations of the data and then comparing a nonparametric estimate of the copula to an estimate based on the assumed family of models. A number of options are available for choosing the scale and the distance measure for this comparison. Significance levels of the test are found by a modified bootstrap procedure. The procedure is used to check the appropriateness of a gamma or a positive stable frailty model in a set of survival data on Danish twins.
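The frailty-copula correspondence is easy to check by simulation: a gamma frailty whose Laplace transform is (1+s)^(-1/θ) induces the Clayton copula, whose Kendall's τ equals θ/(θ+2). A self-contained sketch (an editorial illustration, not the paper's test procedure):

```python
import numpy as np

rng = np.random.default_rng(3)
n, theta = 20_000, 2.0

# Marshall-Olkin construction: shared gamma frailty W with Laplace transform
# (1 + s)^(-1/theta) turns independent exponentials into Clayton-dependent uniforms.
w = rng.gamma(1.0 / theta, 1.0, n)
e1, e2 = rng.exponential(1.0, (2, n))
u1 = (1.0 + e1 / w) ** (-1.0 / theta)
u2 = (1.0 + e2 / w) ** (-1.0 / theta)

# Empirical Kendall's tau on a subsample (vectorized O(m^2) pairwise signs).
m = 1000
d1 = np.sign(u1[:m, None] - u1[None, :m])
d2 = np.sign(u2[:m, None] - u2[None, :m])
tau_hat = (d1 * d2).sum() / (m * (m - 1))

print(round(tau_hat, 2), theta / (theta + 2.0))  # both near 0.5
```

Comparing such a nonparametric dependence summary against its value under the assumed family is the same contrast, in miniature, that the proposed goodness-of-fit tests formalize on the copula scale.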

15.
Han F  Pan W 《Biometrics》2012,68(1):307-315
Many statistical tests have been proposed for case-control data to detect disease association with multiple single nucleotide polymorphisms (SNPs) in linkage disequilibrium. The main reason for the existence of so many tests is that each test aims to detect one or two aspects of many possible distributional differences between cases and controls, largely due to the lack of a general and yet simple model for discrete genotype data. Here we propose a latent variable model to represent SNP data: the observed SNP data are assumed to be obtained by discretizing a latent multivariate Gaussian variate. Because the latent variate is multivariate Gaussian, its distribution is completely characterized by its mean vector and covariance matrix, in contrast to much more complex forms of a general distribution for discrete multivariate SNP data. We propose a composite likelihood approach for parameter estimation. A direct application of this latent variable model is to association testing with multiple SNPs in a candidate gene or region. In contrast to many existing tests that aim to detect only one or two aspects of many possible distributional differences of discrete SNP data, we can exclusively focus on testing the mean and covariance parameters of the latent Gaussian distributions for cases and controls. Our simulation results demonstrate potential power gains of the proposed approach over some existing methods.
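The data-generating side of this latent-variable representation is simple to simulate: threshold a correlated Gaussian pair into 0/1/2 genotype calls (the cut points and latent correlation below are assumptions chosen for illustration, not estimates):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 50_000, 0.6

# Latent bivariate Gaussian with correlation rho (LD between two SNPs).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], n)

# Discretize each latent variate into genotypes 0/1/2 at fixed cut points;
# the cut points determine the genotype frequencies.
cuts = [-0.5, 1.0]
g = np.digitize(z, cuts)        # 0, 1, or 2 copies of the minor allele

# The discrete SNPs inherit (attenuated) dependence from the latent scale.
r_obs = np.corrcoef(g[:, 0], g[:, 1])[0, 1]
print(round(r_obs, 2))          # positive, but smaller than rho
```

Estimation runs this construction in reverse: the composite likelihood recovers the latent mean and covariance from the observed discrete genotypes, and case-control testing then targets those latent parameters.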

16.
Accurate determination of nest fates and nest predators is possible through continuous video monitoring, but such monitoring is relatively expensive and labor intensive. If documenting the timing of nest termination events is sufficient, then data loggers (DL) may allow more extensive sampling and may represent a viable alternative. I validated temperature DL records of nest survival time by simultaneous videotaping and compared results derived from DL records with those obtained by regular nest visits by an observer. I estimated the fate of 937 nests of nine species of open cup-nesting songbirds, including 673 nests monitored using DL, 165 monitored using video cameras, 33 validation nests monitored simultaneously using both DL and video cameras, and 132 control nests monitored only by observer visits. Deployment of DL did not negatively influence nest survival rate. DL reliably recorded survival time and allowed classification of nest fates based on the potential fledging age, regardless of the frequency of nest visits by an observer. The true fate of nests that survived beyond the potential fledging age cannot be safely determined from time of failure, except for nocturnal events that suggest partial predation. Video revealed frequent partial or complete predation on nests with old nestlings that would have been categorized as successful by other methods. I conclude that temperature DL are efficient, reliable, and relatively inexpensive tools for recording exact nest survival times and classifying nest fates, with implications for nest survival modeling and discriminating between diurnal and nocturnal predation.

17.
Recent advances in genotyping technology make it possible to utilize large-scale association analysis for disease-gene mapping. Powerful and robust family-based association methods are crucial for successful gene mapping. We propose a family-based association method, the generalized disequilibrium test (GDT), in which the genotype differences of all discordant relative pairs are utilized in assessing association within a family. The improvement of the GDT over existing methods is threefold: (1) information beyond first-degree relatives is incorporated efficiently, yielding substantial gains in power in comparison to existing tests; (2) the GDT statistic is implemented via a robust technique that does not rely on large sample theory, resulting in further power gains, especially at high levels of significance; and (3) covariates and weights based on family size are incorporated. Advantages of the GDT over existing methods are demonstrated by extensive computer simulations and by application to recently published large-scale genome-wide linkage data from the Type 1 Diabetes Genetics Consortium (T1DGC). In our simulations, the GDT consistently outperforms other tests for a common disease and frequently outperforms other tests for a rare disease; the power improvement is > 13% in 6 out of 8 extended pedigree scenarios. All of the six strongest associations identified by the GDT have been reported by other studies, whereas only three or four of these associations can be identified by existing methods. For the T1D association at gene UBASH3A, the GDT resulted in genome-wide significance (p = 4.3 × 10⁻⁶), much stronger than the published significance (p = 10⁻⁴).

18.
Lu Mao 《Biometrics》2023,79(1):61-72
The restricted mean time in favor (RMT-IF) of treatment is a nonparametric effect size for complex life history data. It is defined as the net average time the treated spend in a more favorable state than the untreated over a prespecified time window. It generalizes the familiar restricted mean survival time (RMST) from the two-state life–death model to account for intermediate stages in disease progression. The overall estimand can be additively decomposed into stage-wise effects, with the standard RMST as a component. Alternate expressions of the overall and stage-wise estimands as integrals of the marginal survival functions for a sequence of landmark transitioning events allow them to be easily estimated by plug-in Kaplan–Meier estimators. The dynamic profile of the estimated treatment effects as a function of follow-up time can be visualized using a multilayer, cone-shaped “bouquet plot.” Simulation studies under realistic settings show that the RMT-IF meaningfully and accurately quantifies the treatment effect and outperforms traditional tests on time to the first event in statistical efficiency thanks to its fuller utilization of patient data. The new methods are illustrated on a colon cancer trial with relapse and death as outcomes and a cardiovascular trial with recurrent hospitalizations and death as outcomes. The R-package rmt implements the proposed methodology and is publicly available from the Comprehensive R Archive Network (CRAN).
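The RMST building block that the RMT-IF generalizes has a one-line plug-in estimator: the area under the Kaplan-Meier curve up to a horizon τ. A minimal sketch for the uncensored case (an editorial illustration, not the rmt package; with no censoring the KM curve is the empirical survival function, so the plug-in equals the sample mean of min(T, τ)):

```python
import numpy as np

rng = np.random.default_rng(5)
t = rng.exponential(2.0, 100_000)   # uncensored survival times (for simplicity)
tau = 3.0

# Step-function integral of the empirical survival curve on [0, tau].
events = np.sort(t[t <= tau])
knots = np.concatenate([[0.0], events, [tau]])
surv = 1.0 - np.arange(knots.size - 1) / t.size   # S(t) on each inter-event interval
rmst_km = np.sum(surv * np.diff(knots))

rmst_mean = np.minimum(t, tau).mean()
print(round(rmst_km, 3), round(rmst_mean, 3))     # identical; truth = 2(1 - e^{-1.5})
```

With censoring, `surv` would be replaced by the Kaplan-Meier estimate and the same area computation goes through, which is the plug-in strategy the abstract describes for each landmark event.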

19.
Ross EA  Moore D 《Biometrics》1999,55(3):813-819
We have developed methods for modeling discrete or grouped time, right-censored survival data collected from correlated groups or clusters. We assume that the marginal hazard of failure for individual items within a cluster is specified by a linear log odds survival model and the dependence structure is based on a gamma frailty model. The dependence can be modeled as a function of cluster-level covariates. Likelihood equations for estimating the model parameters are provided. Generalized estimating equations for the marginal hazard regression parameters and pseudolikelihood methods for estimating the dependence parameters are also described. Data from two clinical trials are used for illustration purposes.

20.
Comparative genome hybridization (CGH) is a laboratory method to measure gains and losses of chromosomal regions in tumor cells. It is believed that DNA gains and losses in tumor cells do not occur entirely at random, but partly through some flow of causality. Models that relate tumor progression to the occurrence of DNA gains and losses could be very useful in hunting cancer genes and in cancer diagnosis. We lay some mathematical foundations for inferring a model of tumor progression from a CGH data set. We consider a class of tree models that are more general than a path model that has been developed for colorectal cancer. We derive a tree model inference algorithm based on the idea of a maximum-weight branching in a graph, and we show that under plausible assumptions our algorithm infers the correct tree. We have implemented our methods in software, and we illustrate with a CGH data set for renal cancer.
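The branching idea can be sketched in a few lines: if every aberration independently picks its highest-weight predecessor and the result happens to be acyclic, that greedy choice already is the maximum-weight branching (the general case requires Edmonds' algorithm to resolve cycles). The event names and weights below are invented for illustration and are not from the renal cancer data:

```python
# weights[child][parent]: hypothetical co-occurrence scores on candidate edges.
weights = {
    "+3q": {"root": 5.0},
    "-4p": {"root": 2.0, "+3q": 4.0},
    "+17": {"root": 1.0, "-4p": 3.5},
}

# Greedy step of branching inference: each node keeps its best incoming edge.
parent = {child: max(ws, key=ws.get) for child, ws in weights.items()}
print(parent)  # {'+3q': 'root', '-4p': '+3q', '+17': '-4p'}
```

Here the greedy choices form a tree rooted at `root`, so no cycle-contraction step is needed; in general the contraction step is what distinguishes Edmonds' maximum-weight branching from this shortcut.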
