Similar Articles
 20 similar articles found.
1.
The classical model for the analysis of progression of markers in HIV-infected patients is the mixed effects linear model. However, longitudinal studies of viral load are complicated by left censoring of the measurements due to a lower quantification limit. We propose a full likelihood approach to estimate parameters of the linear mixed effects model for left-censored Gaussian data. For each subject, the contribution to the likelihood is the product of the density of the vector of completely observed outcomes and the conditional distribution function of the vector of censored outcomes given the observed outcomes. Values of the distribution function are computed by numerical integration. The maximization is performed by a combination of the Simplex and Marquardt algorithms. Subject-specific deviations and random effects are estimated by a modified empirical Bayes approach that replaces censored measurements by their conditional expectations given the data. A simulation study showed that the proposed estimators are less biased than those obtained by imputing the quantification limit to censored data. Moreover, for models with complex covariance structures, they are less biased than the Monte Carlo expectation maximization (MCEM) estimators developed by Hughes (1999, Biometrics 55, 625-629). The method was then applied to data from the ALBI-ANRS 070 clinical trial, in which HIV-1 RNA levels were measured with an ultrasensitive assay (quantification limit 50 copies/ml). Using the proposed method, estimates obtained with data artificially censored at 500 copies/ml were close to those obtained with the real data set.
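As a concrete illustration of the per-subject likelihood contribution described above (marginal normal density of the observed components times the conditional normal distribution function of the left-censored components at the quantification limit), here is a minimal Python sketch. It assumes a known mean and covariance for one subject and is not the authors' implementation; the function name `subject_loglik` and the example numbers are illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal

def subject_loglik(y, censored, mean, cov, lod):
    """Log-likelihood contribution of one subject: marginal normal density of
    the observed components times the conditional normal CDF of the
    left-censored components evaluated at the quantification limit `lod`."""
    y, mean = np.asarray(y, float), np.asarray(mean, float)
    cen = np.asarray(censored, bool)
    obs = ~cen
    ll = 0.0
    if obs.any():
        ll += multivariate_normal.logpdf(y[obs], mean[obs], cov[np.ix_(obs, obs)])
    if cen.any():
        if obs.any():
            # Conditional distribution of the censored given the observed components
            s_oo_inv = np.linalg.inv(cov[np.ix_(obs, obs)])
            c_mean = mean[cen] + cov[np.ix_(cen, obs)] @ s_oo_inv @ (y[obs] - mean[obs])
            c_cov = cov[np.ix_(cen, cen)] - cov[np.ix_(cen, obs)] @ s_oo_inv @ cov[np.ix_(obs, cen)]
        else:
            c_mean, c_cov = mean[cen], cov[np.ix_(cen, cen)]
        ll += multivariate_normal.logcdf(np.full(cen.sum(), lod), c_mean, c_cov)
    return ll

# Illustrative example: 4 visits, the last two left-censored at log10(50) copies/ml
cov = 0.8 * np.ones((4, 4)) + 0.4 * np.eye(4)   # random intercept + residual variance
print(subject_loglik(y=[3.1, 2.4, np.nan, np.nan],
                     censored=[False, False, True, True],
                     mean=[3.0, 2.5, 2.0, 1.6], cov=cov, lod=np.log10(50)))
```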

2.
Wei E, Wei LJ, Xu X. Human Heredity 2003, 55(2-3): 143-146
Consider the case in which individual phenotype and genotype observations have been collected from a moderate or large number of pedigrees, some of which contain multi-generation nuclear families. For each nuclear family, the phenotype of each sibling is the time to onset of a specific event (e.g., disease). This event time is often right censored, that is, an individual is event-free at the study examination time point. In this article, we propose a purely nonparametric test of whether the distribution of a Haseman-Elston distance measure between two siblings' event times is independent of their mean identity-by-descent (IBD) sharing at a genetic marker, based on such incomplete observations from all the nuclear families. The new test can be implemented easily and is illustrated with a data set from the Genetic Analysis Workshop 12. The validity of the new test is examined via a simulation study.
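For context, the sketch below shows the classical Haseman-Elston idea that the distance measure generalizes: the squared sib-pair trait difference is regressed on estimated IBD sharing, and a negative slope suggests linkage. This is not the nonparametric censored-data test proposed in the article, and the simulated data (with sib-pair genetic correlation set equal to IBD sharing) are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_pairs = 500
# Estimated proportion of alleles shared IBD at the marker for each sib-pair
pi_hat = rng.choice([0.0, 0.5, 1.0], size=n_pairs, p=[0.25, 0.5, 0.25])

# Illustrative trait model: the genetic effects of the two sibs are correlated
# in proportion to their IBD sharing (a simplification for this sketch)
g1 = rng.normal(size=n_pairs)
g2 = pi_hat * g1 + np.sqrt(1.0 - pi_hat ** 2) * rng.normal(size=n_pairs)
y1 = g1 + rng.normal(scale=0.8, size=n_pairs)
y2 = g2 + rng.normal(scale=0.8, size=n_pairs)

# Classical Haseman-Elston regression: squared trait difference on IBD sharing;
# a significantly negative slope is evidence of linkage at the marker.
res = stats.linregress(pi_hat, (y1 - y2) ** 2)
print(f"slope = {res.slope:.3f}, two-sided p = {res.pvalue:.4g}")
```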

3.
Inverse sampling is considered a more appropriate sampling scheme than the usual binomial scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is desired and fluctuation of the size around the pre-chosen nominal level is acceptable, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study.

4.
Mixed case interval-censored data arise when the event of interest is known only to occur within an interval induced by a sequence of random examination times. Such data are commonly encountered in disease research with longitudinal follow-up. Furthermore, medical treatment has progressed over the last decade, with an increasing proportion of patients being cured of many types of diseases. Interest has therefore grown in cure models for survival data, which hypothesize that a certain proportion of subjects in the population will never experience the event of interest. In this article, we consider a two-component mixture cure model for regression analysis of mixed case interval-censored data. The first component is a logistic regression model that describes the cure rate, and the second component is a semiparametric transformation model that describes the distribution of event times for the uncured subjects. We propose semiparametric maximum likelihood estimation for the considered model and develop an EM-type algorithm for obtaining the semiparametric maximum likelihood estimators (SPMLE) of the regression parameters, establishing their consistency, efficiency, and asymptotic normality. Extensive simulation studies indicate that the SPMLE performs satisfactorily in a wide variety of settings. The proposed method is illustrated by an analysis of the hypobaric decompression sickness data from the National Aeronautics and Space Administration.
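A minimal numerical sketch of the two-component mixture structure described above: a logistic cure probability combined with a survival function for the uncured subjects. An illustrative Weibull stands in for the semiparametric transformation model, the SPMLE/EM machinery of the article is not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def mixture_cure_survival(t, z, gamma, scale=5.0, shape=1.5):
    """Population survival under a two-component mixture cure model:
    S_pop(t | z) = pi(z) + (1 - pi(z)) * S_u(t),
    where pi(z) = expit(gamma'z) is the logistic cure probability and S_u is
    the survival function of the uncured subjects (an illustrative Weibull
    here, in place of the semiparametric transformation model)."""
    pi_cure = 1.0 / (1.0 + np.exp(-np.dot(gamma, z)))
    s_uncured = np.exp(-(np.asarray(t, float) / scale) ** shape)
    return pi_cure + (1.0 - pi_cure) * s_uncured

# For one subject with covariate vector z and illustrative coefficients gamma,
# the population survival levels off at the cure probability as t grows.
times = np.array([1.0, 5.0, 20.0, 100.0])
print(mixture_cure_survival(times, z=[1.0, 0.3], gamma=[-1.0, 0.8]))
```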

5.
In this paper, we present an extension of cure models that incorporates a longitudinal disease progression marker. The model is motivated by studies of patients with prostate cancer undergoing radiation therapy. The patients are followed until recurrence of the prostate cancer or censoring, with the PSA marker measured intermittently. Some patients are cured by the treatment and are immune from recurrence. A joint-cure model is developed for this type of data, in which the longitudinal marker and the failure time process are modeled jointly, with a fraction of patients assumed to be immune from the endpoint. A hierarchical nonlinear mixed-effects model is assumed for the marker, and a time-dependent Cox proportional hazards model is used for the time to endpoint. The probability of cure is modeled through a logistic link. The parameters are estimated using a Monte Carlo EM algorithm, with importance sampling based on an adaptively chosen t-distribution and a variable Monte Carlo sample size. We apply the method to prostate cancer data and perform a simulation study. We show that incorporating the longitudinal disease progression marker into the cure model yields parameter estimates with better statistical properties. The classification of censored patients into the cure group and the susceptible group, based on the estimated conditional recurrence probability from the joint-cure model, has higher sensitivity and specificity and a lower misclassification probability than the standard cure model. The addition of the longitudinal data reduces the impact of the identifiability problems of a standard cure model and can help overcome biases due to informative censoring.

6.
In survival studies with families or geographical units, it may be of interest to test whether such groups are homogeneous given explanatory variables. In this paper, we consider score-type tests for group homogeneity based on a mixing model in which the group effect is modeled as a random variable. As opposed to hazard-based frailty models, this model yields survival times that, conditional on the random effect, have an accelerated failure time representation. The test statistic requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model, and in the uncensored situation a closed form is obtained for the test statistic. A simulation study is used to compare the power of the tests. The proposed tests are applied to real data sets with censored data.
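The homogeneity score test only requires fitting the conventional Weibull regression without the random effect; the sketch below shows that building block using the third-party lifelines package on simulated accelerated-failure-time data (column names and parameter values are illustrative, and the score statistic itself is not computed here).

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
# Weibull AFT data: T = exp(1 + 0.5 x) * E^(1/1.5) with E ~ Exp(1), randomly censored
t_true = np.exp(1.0 + 0.5 * x) * rng.exponential(size=n) ** (1.0 / 1.5)
c = rng.exponential(scale=8.0, size=n)
df = pd.DataFrame({"time": np.minimum(t_true, c),
                   "event": (t_true <= c).astype(int),
                   "x": x})

# Conventional Weibull AFT regression without any random group effect --
# the only model that needs to be fitted before evaluating the score test.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
aft.print_summary()
```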

7.
In this paper, we consider the estimation of prediction errors for state occupation probabilities and transition probabilities in multistate time-to-event data. We study prediction errors based on the Brier score and on the Kullback-Leibler score and prove their properness. In the presence of right-censored data, two classes of estimators, based on inverse probability weighting and on pseudo-values, respectively, are proposed, and consistency properties of the proposed estimators are investigated. The second part of the paper is devoted to the estimation of dynamic prediction errors for state occupation probabilities, conditional on being alive, and for transition probabilities. Cross-validated versions are proposed. Our methods are illustrated on the CSL1 randomized clinical trial comparing prednisone versus placebo in patients with liver cirrhosis.
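A minimal sketch of the inverse-probability-of-censoring-weighted Brier score at a fixed time point, one ingredient of the prediction-error estimators discussed above, in a plain survival setting rather than the full multistate case. The Kaplan-Meier helper, the simulated data, and the constant prediction are illustrative assumptions.

```python
import numpy as np

def km_survival(times, events, eval_times):
    """Kaplan-Meier estimate of S(t) = P(T > t), evaluated at eval_times."""
    t, d = np.asarray(times, float), np.asarray(events, int)
    surv, pts, vals = 1.0, [], []
    for u in np.unique(t):
        n_events = np.sum((t == u) & (d == 1))
        if n_events > 0:
            surv *= 1.0 - n_events / np.sum(t >= u)
        pts.append(u)
        vals.append(surv)
    pts, vals = np.array(pts), np.array(vals)
    idx = np.searchsorted(pts, np.asarray(eval_times, float), side="right") - 1
    return np.where(idx < 0, 1.0, vals[np.clip(idx, 0, None)])

def ipcw_brier(times, events, pred_surv, t0):
    """IPCW Brier score at t0: pred_surv[i] is the predicted P(T_i > t0).
    Subjects who died by t0 are weighted by 1/G(T_i), subjects still at risk
    by 1/G(t0), where G is the Kaplan-Meier censoring survival (the
    left-limit refinement G(T_i-) is omitted for brevity)."""
    times, events, pred_surv = map(np.asarray, (times, events, pred_surv))
    G = lambda s: km_survival(times, 1 - events, s)   # censoring distribution
    died = (times <= t0) & (events == 1)
    at_risk = times > t0
    w_died = died / np.maximum(G(times), 1e-12)
    w_risk = at_risk / np.maximum(G(np.full(times.shape, t0)), 1e-12)
    return float(np.mean(w_died * pred_surv ** 2 + w_risk * (1.0 - pred_surv) ** 2))

# Example: constant predicted survival of 0.7 evaluated at t0 = 5
rng = np.random.default_rng(2)
T = rng.exponential(scale=10.0, size=500)
C = rng.exponential(scale=15.0, size=500)
obs, evt = np.minimum(T, C), (T <= C).astype(int)
print(ipcw_brier(obs, evt, np.full(500, 0.7), t0=5.0))
```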

8.
This paper deals with a Cox proportional hazards regression model in which some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, have dealt with the issue of limit of detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations from the data analysis. When censoring is not completely independent, the CCA leads to biased and spurious results. Methods for missing covariate data, including type I and type II covariate censoring as well as limit of detection, do not readily apply because of the fundamentally different nature of randomly censored covariates. We develop a novel method for censored covariates using conditional mean imputation based on either Kaplan-Meier estimates or a Cox proportional hazards model to estimate the effects of these covariates on a time-to-event outcome. We evaluate the performance of the proposed method through simulation studies and show that it provides good bias reduction and statistical efficiency. Finally, we illustrate the method using data from the Framingham Heart Study to assess the relationship between offspring and parental age at onset of cardiovascular events.
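A minimal sketch of the Kaplan-Meier-based conditional mean imputation idea: a censored covariate value c is replaced by an estimate of E[X | X > c] = c + (integral of the KM survival from c onward) / S(c), computed from the KM step function. Function names and data are illustrative, and the Cox-model-based variant mentioned in the abstract is not shown.

```python
import numpy as np

def km_step(x, observed):
    """Kaplan-Meier estimate of S(u) = P(X > u) for a right-censored covariate;
    returns the jump points and the survival value just after each point."""
    xs, ds = np.asarray(x, float), np.asarray(observed, int)
    surv, pts, vals = 1.0, [], []
    for u in np.unique(xs):
        n_events = np.sum((xs == u) & (ds == 1))
        if n_events > 0:
            surv *= 1.0 - n_events / np.sum(xs >= u)
        pts.append(u)
        vals.append(surv)
    return np.array(pts), np.array(vals)

def impute_censored(x, observed):
    """Replace each censored covariate value c by the KM-based estimate of
    E[X | X > c], restricted to the largest observed value."""
    pts, vals = km_step(x, observed)
    x = np.asarray(x, float)
    out = x.copy()
    for i in np.where(~np.asarray(observed, bool))[0]:
        c = x[i]
        s_c = vals[pts <= c][-1] if np.any(pts <= c) else 1.0
        if s_c <= 0:
            continue                      # S(c) = 0: no information beyond c
        # integrate the KM step function of S(u) from c to the largest point
        grid = np.concatenate(([c], pts[pts > c]))
        s_grid = np.concatenate(([s_c], vals[pts > c]))
        out[i] = c + np.sum(np.diff(grid) * s_grid[:-1]) / s_c
    return out

# Example: covariate "parental age at onset", censored for some parents
x        = np.array([45.0, 52.0, 60.0, 48.0, 55.0, 50.0, 63.0, 58.0])
observed = np.array([1,    1,    0,    1,    0,    1,    1,    0])
print(impute_censored(x, observed))
```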

9.
We propose a general likelihood-based approach to the linkage analysis of qualitative and quantitative traits using identity-by-descent (IBD) data from sib-pairs. We consider the likelihood of IBD data conditional on phenotypes and test the null hypothesis of no linkage between a marker locus and a gene influencing the trait using a score test in the recombination fraction theta between the two loci. This method unifies the linkage analysis of qualitative and quantitative traits into a single inferential framework, yielding a simple and intuitive test statistic. Conditioning on phenotypes avoids unrealistic random sampling assumptions and allows sib-pairs from differing ascertainment mechanisms to be incorporated into a single likelihood analysis. In particular, it allows the selection of sib-pairs based on their trait values and the analysis of only those pairs having the most informative phenotypes. The score test is based on the full likelihood, i.e., the likelihood based on all phenotype data rather than just differences of sib-pair phenotypes. Considering only phenotype differences, as in Haseman and Elston (1972) and Kruglyak and Lander (1995), may result in substantial losses of power. The linkage score test is derived under general genetic models for the trait, which may include multiple unlinked genes. Population genetic assumptions, such as random mating or linkage equilibrium at the trait loci, are not required. This score test is thus particularly promising for the analysis of complex human traits. The score statistic readily extends to accommodate incomplete IBD data at the test locus by using the hidden Markov model implemented in the programs MAPMAKER/SIBS and GENEHUNTER (Kruglyak and Lander, 1995; Kruglyak et al., 1996). Preliminary simulation studies indicate that the linkage score test generally matches or outperforms the Haseman-Elston test, with the largest gains in power for selected samples of sib-pairs with extreme phenotypes.

10.
In cohort studies the outcome is often the time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event that influences the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between the visits at which the status has changed and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is only partial, since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and existing missing-covariate techniques cannot accommodate this special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via an EM algorithm, and present their asymptotic properties. We also present practical approaches that are valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study.

11.
This paper focuses on methodology developed for analyzing a multivariate interval-censored data set from an AIDS observational study. One purpose of the study was to determine the natural history of the opportunistic infection cytomegalovirus (CMV) in HIV-infected individuals. In this observational study, laboratory tests were performed at scheduled clinic visits to test for the presence of the CMV virus in the blood and in the urine (called CMV shedding in the blood and urine). The study investigators were interested in determining whether the stage of HIV disease at study entry was predictive of an increased risk of CMV shedding in either the blood or the urine. If all patients had made each clinic visit, the data would be multivariate grouped failure time data and published methods could be used. However, many patients missed several visits, and when they returned, their lab tests indicated a change in their blood and/or urine CMV shedding status, resulting in interval-censored failure time data. This paper outlines a method for applying the proportional hazards model to the analysis of multivariate interval-censored failure time data from a study of CMV in HIV-infected patients.

12.
In some infectious disease studies and two-step treatment studies, a 2 × 2 table with a structural zero can arise when it is theoretically impossible for a particular cell to contain observations or when a structural void is introduced by design. In this article, we propose a score test of hypotheses pertaining to the marginal and conditional probabilities in a 2 × 2 table with a structural zero via the risk/rate difference measure. A score test-based confidence interval is also outlined. We evaluate the performance of the score test and the existing likelihood ratio test. Our empirical results show that the two tests (with appropriate adjustments) perform similarly and satisfactorily in terms of coverage probability and expected interval width, and both consistently perform well in small- to moderate-sample designs. The score test, however, has the advantage that it is undefined in only one scenario, whereas the likelihood ratio test can be undefined in many. We illustrate our method with a real example from a two-step tuberculosis skin test study.

13.
Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. In some settings, chronic disease processes may resolve, and individuals cease to be at risk of events at the time of disease resolution. We develop an expectation-maximization algorithm for fitting a dynamic mover-stayer model to interval-censored recurrent event data under a Markov model with a piecewise-constant baseline rate function given a latent process. The model is motivated by settings in which the event times and the resolution time of the disease process are unobserved. The likelihood and algorithm are shown to yield estimators with small empirical bias in simulation studies. The method is applied to data on the cumulative number of damaged joints in patients with psoriatic arthritis, where individuals may experience disease remission.

14.
Liang Li, Bo Hu, Tom Greene. Biometrics 2009, 65(3): 737-745
In many longitudinal clinical studies, the level and progression rate of repeatedly measured biomarkers on each subject quantify the severity of the disease and that subject's susceptibility to disease progression. It is of scientific and clinical interest to relate such quantities to a later time-to-event clinical endpoint such as patient survival. This is usually done with a shared parameter model, in which the longitudinal biomarker data and the survival outcome of each subject are assumed to be conditionally independent given subject-level severity or susceptibility (also called frailty in statistical terms). In this article, we study the case where the conditional distribution of the longitudinal data is modeled by a linear mixed-effects model and the conditional distribution of the survival data is given by a Cox proportional hazards model. We allow unknown regression coefficients and time-dependent covariates in both models. The proposed estimators are maximizers of an exact correction to the joint log-likelihood with the frailties eliminated as nuisance parameters, an idea that originated from the correction of covariate measurement error in measurement error models. The corrected joint log-likelihood is shown to be asymptotically concave and leads to consistent and asymptotically normal estimators. Unlike most published methods for joint modeling, the proposed estimation procedure does not rely on distributional assumptions for the frailties. The proposed method was studied in simulations and applied to a data set from the Hemodialysis Study.
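A minimal simulation sketch of the shared parameter structure described above: a subject-level random intercept (the frailty) enters both the linear mixed-effects biomarker model and the event-time hazard, so the two outcomes are dependent only through that shared effect. The corrected-likelihood estimation of the article is not implemented, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, visits = 200, np.array([0.0, 0.5, 1.0, 1.5, 2.0])

# Subject-level random intercept shared by both sub-models (the "frailty")
b = rng.normal(scale=1.0, size=n)

# Longitudinal sub-model: y_ij = beta0 + beta1 * t_j + b_i + measurement error
beta0, beta1, sigma = 2.0, -0.3, 0.5
y = beta0 + beta1 * visits[None, :] + b[:, None] + rng.normal(scale=sigma, size=(n, len(visits)))

# Survival sub-model: hazard lambda_i = lambda0 * exp(gamma * b_i); given b_i
# the longitudinal and survival outcomes are independent (shared-parameter assumption)
lambda0, gamma = 0.2, 0.8
event_time = rng.exponential(scale=1.0 / (lambda0 * np.exp(gamma * b)))
cens_time = rng.uniform(0.5, 4.0, size=n)
obs_time, event = np.minimum(event_time, cens_time), (event_time <= cens_time).astype(int)

# Subjects with larger frailty have higher biomarker levels and earlier events
print(np.corrcoef(b, y.mean(axis=1))[0, 1], np.corrcoef(b, event_time)[0, 1])
```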

15.
In this article, we describe a conditional score test for detecting a monotone dose-response relationship with ordinal response data. We consider three versions of this test: the asymptotic, conditional exact, and mid-P conditional score tests. Exact and asymptotic power formulae based on these tests are studied, and asymptotic sample size formulae based on the asymptotic conditional score test are derived. The proposed formulae are applied to a vaccination study and a developmental toxicity study for illustrative purposes. The actual significance level and exact power properties of these tests are compared in a small empirical study. The mid-P conditional score test is observed to be the most powerful test, with an actual significance level close to the pre-specified nominal level.

16.
In follow-up studies, the disease event time can be subject to left truncation and right censoring. Furthermore, medical advances have made it possible for patients to be cured of certain types of diseases. In this article, we consider a semiparametric mixture cure model for the regression analysis of left-truncated and right-censored data. The model combines a logistic regression for the probability of event occurrence with the class of transformation models for the time of occurrence. We investigate two techniques for estimating the model parameters. The first approach is based on martingale estimating equations (EEs); the second is based on the conditional likelihood function given the truncation variables. The asymptotic properties of both proposed estimators are established. Simulation studies indicate that the conditional maximum likelihood estimator (cMLE) performs well, while the estimator based on EEs is very unstable even though it is shown to be consistent. This is a special and intriguing phenomenon for the EE approach under the cure model. We provide insights into this issue and find that the EE approach can be improved significantly by assigning appropriate weights to the censored observations in the EEs. This finding is useful in overcoming the instability of the EE approach in more complicated situations where the likelihood approach is not feasible. We illustrate the proposed estimation procedures by analyzing the age at onset of the occiput-wall distance event in patients with ankylosing spondylitis.

17.
Zhao H, Zuo C, Chen S, Bang H. Biometrics 2012, 68(3): 717-725
Increasingly, estimates of health care costs are used to evaluate competing treatments or to assess the expected expenditures associated with certain diseases. In health policy and economics, the primary focus of these estimations has been on the mean cost, because the total cost can be derived directly from the mean cost and because information about total resources utilized is highly relevant for policymakers. Yet the median cost can also be important, both as an intuitive measure of central tendency in the cost distribution and as a subject of interest to payers and consumers. In many prospective studies, cost data collection is incomplete for some subjects due to right censoring, which typically is caused by loss to follow-up or by limited study duration. Censoring poses a unique challenge for cost data analysis because of so-called induced informative censoring, in that traditional methods suited for survival data are generally invalid in censored cost estimation. In this article, we propose methods for estimating the median cost and its confidence interval (CI) when data are subject to right censoring. We also consider the estimation of the ratio and difference of two median costs and their CIs. These methods can be extended to the estimation of other quantiles and other informatively censored data. We conduct simulations and a real data analysis in order to examine the performance of the proposed methods.
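A minimal sketch of one simple weighting approach in this spirit: complete (uncensored) subjects are weighted by the inverse Kaplan-Meier censoring survival at their follow-up time, and the weighted 50th percentile of cost is taken. This is an illustrative inverse-probability-weighted median, not necessarily the exact estimator or confidence interval construction proposed in the article.

```python
import numpy as np

def km_survival(times, events, eval_pts):
    """Kaplan-Meier survival function evaluated at eval_pts."""
    t, d = np.asarray(times, float), np.asarray(events, int)
    surv, pts, vals = 1.0, [], []
    for u in np.unique(t):
        n_events = np.sum((t == u) & (d == 1))
        if n_events > 0:
            surv *= 1.0 - n_events / np.sum(t >= u)
        pts.append(u)
        vals.append(surv)
    pts, vals = np.array(pts), np.array(vals)
    idx = np.searchsorted(pts, np.asarray(eval_pts, float), side="right") - 1
    return np.where(idx < 0, 1.0, vals[np.clip(idx, 0, None)])

def weighted_median_cost(cost, follow_up, death):
    """IPW median of total cost: uncensored subjects get weight 1/K(T_i),
    where K is the Kaplan-Meier survival of the censoring time."""
    cost, follow_up, death = map(np.asarray, (cost, follow_up, death))
    K = km_survival(follow_up, 1 - death, follow_up)   # censoring distribution
    w = np.where(death == 1, 1.0 / np.maximum(K, 1e-12), 0.0)
    order = np.argsort(cost)
    cum_w = np.cumsum(w[order]) / np.sum(w)
    return cost[order][np.searchsorted(cum_w, 0.5)]

# Illustrative data: follow-up time, death indicator, accumulated cost
rng = np.random.default_rng(4)
T = rng.exponential(scale=5.0, size=400)
C = rng.uniform(0.0, 10.0, size=400)
follow_up, death = np.minimum(T, C), (T <= C).astype(int)
cost = 1000.0 * follow_up + rng.normal(scale=200.0, size=400)
print(weighted_median_cost(cost, follow_up, death))
```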

18.
Cho M, Schenker N. Biometrics 1999, 55(3): 826-833
Data obtained from studies in the health sciences often have incompletely observed covariates as well as censored outcomes. In this paper, we present methods for fitting the log-F accelerated failure time model with incomplete continuous and/or categorical time-independent covariates using the Gibbs sampler. A general location model that allows different covariance structures across cells is specified for the covariates, and ignorable missingness of the covariates is assumed. Techniques that accommodate standard assumptions of ignorable censoring, as well as certain types of nonignorable censoring, are developed. We compare our approach to traditional complete-case analysis in an application to data from a study of melanoma. The comparison indicates that substantial gains in efficiency are possible with our approach.

19.
For many diseases, it seems that the age at onset is genetically influenced. Therefore, age-at-onset data are often collected in order to map the disease gene(s). The ages are often (right) censored or truncated, and therefore many standard techniques for linkage analysis cannot be used. In this paper, we present a correlated frailty model for censored survival data of siblings. The model is used for testing heritability for the age at onset and linkage between the loci and the gene(s) that influence(s) the survival time. The model is applied to interval-censored migraine twin data. Heritability (obtained from the frailties rather than actual onset times) was estimated as 0.42; this value was highly significant. The highest lod score, a score of 1.9, was found at the end of chromosome 19.

20.
Latent class models have recently been developed for the joint analysis of a longitudinal quantitative outcome and a time to event. These models assume that the population is divided into G latent classes characterized by different risk functions for the event and different profiles of evolution for the marker, which are described by a mixed model for each class. However, the key assumption of conditional independence between the marker and the event given the latent classes is difficult to evaluate because the latent classes are not observed. Using a joint model with latent classes and shared random effects, we propose a score test of the null hypothesis of independence between the marker and the outcome given the latent classes versus the alternative hypothesis that the risk of the event depends on one or several random effects from the mixed model in addition to the latent classes. A simulation study was performed to compare the behavior of the score test to other previously proposed tests, including situations where the alternative hypothesis or the baseline risk function is misspecified. In all the investigated situations, the score test was the most powerful. The methodology was applied to develop a prognostic model for recurrence of prostate cancer given the evolution of prostate-specific antigen in a cohort of patients treated by radiation therapy.

