Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
A common testing problem for life table or survival data is to test the equality of two survival distributions when the data are both grouped and censored. Several tests proposed in the literature require various assumptions about the censoring distributions. It is shown that if these conditions are relaxed, the tests may no longer have the stated properties. The maximum likelihood test of equality when no assumptions are made about the censoring marginal distributions is derived. The properties of the test are found and it is compared with the existing tests. The fact that no assumptions are required about the censoring distributions makes the test a useful initial testing procedure.

2.
3.
DiRienzo AG, Biometrics 2003, 59(3):497-504
When testing the null hypothesis that treatment arm-specific survival-time distributions are equal, the log-rank test is asymptotically valid when the distribution of time to censoring is conditionally independent of randomized treatment group given survival time. We introduce a test of the null hypothesis for use when the distribution of time to censoring depends on treatment group and survival time. This test does not make any assumptions regarding independence of censoring time and survival time. Asymptotic validity of this test only requires a consistent estimate of the conditional probability that the survival event is observed given both treatment group and that the survival event occurred before the time of analysis. However, by not making unverifiable assumptions about the data-generating mechanism, there exists a set of possible values of corresponding sample-mean estimates of these probabilities that are consistent with the observed data. Over this subset of the unit square, the proposed test can be calculated and a rejection region identified. A decision on the null that considers uncertainty because of censoring that may depend on treatment group and survival time can then be directly made. We also present a generalized log-rank test that enables us to provide conditions under which the ordinary log-rank test is asymptotically valid. This generalized test can also be used for testing the null hypothesis when the distribution of censoring depends on treatment group and survival time. However, use of this test requires semiparametric modeling assumptions. A simulation study and an example using a recent AIDS clinical trial are provided.
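As a companion to the log-rank comparisons discussed above, a minimal two-group log-rank statistic can be sketched in plain NumPy. This is an illustration of the standard statistic only, not the paper's proposed test; the function and variable names are ours, and production work should use a vetted survival package.

```python
import numpy as np

def logrank_test(time1, event1, time2, event2):
    """Two-sample log-rank chi-square statistic.
    event = 1 means the event was observed, 0 means right-censored."""
    times = np.concatenate([time1, time2])
    events = np.concatenate([event1, event2])
    groups = np.concatenate([np.zeros(len(time1)), np.ones(len(time2))])
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(times[events == 1]):          # distinct event times
        at_risk = times >= t
        n = at_risk.sum()                            # total at risk at t
        n1 = (at_risk & (groups == 0)).sum()         # at risk in group 1
        d = ((times == t) & (events == 1)).sum()     # events at t, both groups
        d1 = ((times == t) & (events == 1) & (groups == 0)).sum()
        o_minus_e += d1 - d * n1 / n                 # observed minus expected
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e**2 / var                        # ~ chi-square(1) under H0
```

With identical samples in both groups the observed and expected event counts agree at every event time, so the statistic is exactly zero.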

4.
A robust test for linear contrasts using modified maximum likelihood estimators based on symmetrically censored samples, proposed by Tiku (1973, 1982a), is studied in this paper from the Bayesian point of view. The effects of asymmetric censoring on this testing procedure are investigated, and a good approximation to its posterior distribution in this case is worked out. We also present an example illustrating how to obtain the highest posterior density interval for a linear combination of the unknown location parameters.

5.
Cook AJ, Gold DR, Li Y, Biometrics 2007, 63(2):540-549
While numerous methods have been proposed for spatial cluster detection, particularly for discrete outcome data (e.g., disease incidence), few are available for continuous data subject to censoring. This article extends the spatial scan statistic (Kulldorff, 1997, Communications in Statistics 26, 1481-1496) to censored outcome data and further proposes a simple spatial cluster detection method utilizing cumulative martingale residuals within the framework of the Cox proportional hazards model. Simulations indicate good performance of the proposed methods; their practical applicability is illustrated by an ongoing epidemiologic study investigating the relationship of environmental exposures to asthma, allergic rhinitis/hay fever, and eczema.

6.
This paper deals with testing the functional form of the covariate effects in a Cox proportional hazards model with random effects. We assume that the responses are clustered and incomplete due to right censoring. The model is estimated under both the null (parametric covariate effect) and the alternative (nonparametric effect) using the full marginal likelihood. Under the alternative, the nonparametric covariate effects are estimated using orthogonal expansions. The test statistic is the likelihood ratio statistic, and its distribution is approximated by a bootstrap method. The performance of the proposed testing procedure is studied through simulations. The method is also applied to two real data sets, one from biomedical research and one from veterinary medicine.

7.
A multiple testing procedure for clinical trials.
A multiple testing procedure is proposed for comparing two treatments when the response to treatment is both dichotomous (i.e., success or failure) and immediate. The proposed test statistic for each test is the usual (Pearson) chi-square statistic based on all data collected to that point. The maximum number (N) of tests and the number (m1 + m2) of observations collected between successive tests are fixed in advance. The overall size of the procedure is shown to be controlled with virtually the same accuracy as the single-sample chi-square test based on N(m1 + m2) observations, and the power is found to be virtually the same. However, by affording the opportunity to terminate early when one treatment performs markedly better than the other, the multiple testing procedure may eliminate the ethical dilemmas that often accompany clinical trials.
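The repeated chi-square testing on accumulating data described above can be sketched as follows. This is an illustrative simplification, not the paper's exact procedure: the batch structure, function names, and the single fixed critical value are our assumptions.

```python
def pearson_chi2_2x2(s1, f1, s2, f2):
    """Pearson chi-square for a 2x2 success/failure table (no continuity correction)."""
    n = s1 + f1 + s2 + f2
    num = n * (s1 * f2 - s2 * f1) ** 2
    den = (s1 + f1) * (s2 + f2) * (s1 + s2) * (f1 + f2)
    return num / den

def multiple_testing(successes1, successes2, m1, m2, crit):
    """After each batch of m1 + m2 observations, recompute the chi-square
    statistic on ALL data so far; stop early if it exceeds crit.
    successes1/successes2 hold per-batch success counts for each arm."""
    for k in range(1, len(successes1) + 1):
        s1, s2 = sum(successes1[:k]), sum(successes2[:k])
        stat = pearson_chi2_2x2(s1, k * m1 - s1, s2, k * m2 - s2)
        if stat > crit:
            return k, stat          # reject at look k, terminate early
    return None, stat               # never rejected
```

For example, with a markedly better arm (9/10 versus 1/10 successes per batch) the procedure stops at the first look; perfectly balanced arms never trigger a rejection.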

8.
Shih JH, Lu SE, Biometrics 2007, 63(3):673-680
We consider the problem of estimating covariate effects in the marginal Cox proportional hazards model and multilevel associations for child mortality data collected from a vitamin A supplementation trial in Nepal, where the data are clustered within households and villages. For this purpose, a class of multivariate survival models that can be represented by a functional of the marginal survival functions and that accounts for the hierarchical clustering structure is exploited. Based on this class of models, an estimation strategy involving a within-cluster resampling procedure is proposed, and a model assessment approach is presented. The asymptotic theory for the proposed estimators and lack-of-fit test is established. The simulation study shows that the estimates are approximately unbiased, and that the proposed test statistic is conservative under extremely heavy censoring but attains its nominal size otherwise. The analysis of the Nepal study data shows that the association of mortality is much greater within households than within villages.

9.
We propose a Bayesian chi-squared model diagnostic for analysis of data subject to censoring. The test statistic has the form of Pearson's chi-squared test statistic and is easy to calculate from standard output of Markov chain Monte Carlo algorithms. The key innovation of this diagnostic is that it is based only on observed failure times. Because it does not rely on the imputation of failure times for observations that have been censored, we show that under heavy censoring it can have higher power for detecting model departures than a comparable test based on the complete data. In a simulation study, we show that tests based on this diagnostic exhibit comparable power and better nominal Type I error rates than a commonly used alternative test proposed by Akritas (1988, Journal of the American Statistical Association 83, 222-230). An important advantage of the proposed diagnostic is that it can be applied to a broad class of censored data models, including generalized linear models and other models with nonidentically distributed and nonadditive error structures. We illustrate the proposed model diagnostic for testing the adequacy of two parametric survival models for Space Shuttle main engine failures.

10.
Switching between testing for superiority and non-inferiority is an important statistical issue in the design and analysis of active-controlled clinical trials. In practice, it is often handled with a two-stage testing procedure. It has been assumed that no type I error rate adjustment is required when either switching to a non-inferiority test once the data fail to support the superiority claim, or switching to a superiority test once the null hypothesis of non-inferiority is rejected with a pre-specified non-inferiority margin in a generalized historical control approach. However, when a cross-trial comparison approach is used for non-inferiority testing, controlling the type I error rate can become an issue with the conventional two-stage procedure. We propose to adopt the single-stage simultaneous testing concept of Ng (2003) to test the non-inferiority and superiority hypotheses simultaneously. The proposed procedure is based on Fieller's confidence interval procedure as proposed by Hauschke et al. (1999).

11.
The method of generalized pairwise comparisons (GPC) is an extension of the well-known nonparametric Wilcoxon–Mann–Whitney test for comparing two groups of observations. Multiple generalizations of the Wilcoxon–Mann–Whitney test and other GPC methods have been proposed over the years to handle censored data. These methods take different approaches to the loss of information due to censoring: ignoring pairwise comparisons rendered uninformative by censoring (Gehan, Harrell, and Buyse); imputation using estimates of the survival distribution (Efron, Péron, and Latta); or inverse probability of censoring weighting (IPCW; Datta and Dong). Based on the GPC statistic, a measure of treatment effect, the "net benefit," can be defined. It quantifies the difference between the probabilities that a randomly selected individual from one group is doing better than an individual from the other group. This paper aims to evaluate GPC methods for censored data, both for hypothesis testing and for estimation, and to provide recommendations for their choice in various situations. The methods that ignore uninformative pairs have power comparable to more complex and computationally demanding methods when censoring is low, and are slightly superior for high proportions (>40%) of censoring. If one is interested in estimating the net benefit, Harrell's c index is an unbiased estimator if the proportional hazards assumption holds. Otherwise, the imputation (Efron or Péron) or IPCW (Datta, Dong) methods provide unbiased estimators for proportions of drop-out censoring up to 60%.
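The Gehan-style scoring that ignores pairs made uninformative by censoring can be sketched as a direct pairwise loop. This is a minimal illustration of the net-benefit idea only; the function name is ours, and tied times are simply scored zero.

```python
def gehan_net_benefit(t1, e1, t2, e2):
    """Net benefit via Gehan-type scoring.
    For each cross-group pair, score +1 if the group-1 member clearly survived
    longer, -1 if clearly shorter, and 0 if censoring makes the pair
    uninformative. e = 1 means event observed, 0 means right-censored."""
    wins = losses = total = 0
    for x, dx in zip(t1, e1):
        for y, dy in zip(t2, e2):
            total += 1
            if x > y and dy == 1:        # y's event occurred before x was last seen
                wins += 1
            elif y > x and dx == 1:      # x's event occurred before y was last seen
                losses += 1
            # tied times and censoring-ambiguous pairs contribute 0
    return (wins - losses) / total       # in [-1, 1]
```

A pair such as (3, censored) versus (5, event) is scored zero: the censored subject might still have outlived the other, so the comparison is uninformative.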

12.
A common problem encountered in medical applications is assessing the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, an obvious violation of the proportional hazards assumption, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications it is difficult to specify the type of survival difference and choose an appropriate method prior to analysis. We therefore conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of the tests in different situations and for various censoring rates, and to recommend an appropriate test that will not fail for a wide range of applications. The simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle, or late times. Even under proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, the Rényi and Cramér–von Mises tests are relatively conservative, whereas the statistic of the Lin-Xu test exhibits apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are the most stable and feasible approaches for a variety of situations and censoring rates, and are therefore applicable to a wider spectrum of alternatives than other tests.

13.
In a randomized clinical trial (RCT), noncompliance with an assigned treatment can occur due to serious side effects, while outcomes may be missing due to patients' withdrawal or loss to follow-up. To avoid a loss of power to detect a given risk difference (RD) of interest between two treatments, it is essential to incorporate information on noncompliance and missing outcomes into the sample size calculation. Under the compound exclusion restriction model proposed elsewhere, we first derive the maximum likelihood estimator (MLE) of the RD among compliers between two treatments for an RCT with noncompliance and missing outcomes, together with its asymptotic variance in closed form. Based on the MLE with the tanh^{-1}(x) transformation, we develop an asymptotic test procedure for testing the equality of two treatment effects among compliers. We further derive a sample size formula accounting for both noncompliance and missing outcomes for a desired power 1 - beta at a nominal alpha-level. To evaluate the performance of the test procedure and the accuracy of the sample size formula, we employ Monte Carlo simulation to estimate the Type I error and power of the proposed test at the resulting sample size in a variety of situations. We find that both the test procedure and the sample size formula perform well. Finally, we discuss the effects of various parameters, including the proportion of compliers, the probability of non-missing outcomes, and the ratio of sample size allocation, on the minimum required sample size.
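As a rough illustration of why noncompliance and missing outcomes inflate the required sample size, the following back-of-envelope sketch combines a standard two-proportion sample size formula with two hypothetical inflation factors: 1/p_comply^2 for intention-to-treat dilution of the effect and 1/p_observed for missing outcomes. This is not the paper's formula; all names and the inflation heuristic are our assumptions.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Standard per-arm sample size for a two-proportion z-test, equal allocation."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    return (za + zb) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

def n_adjusted(p1, p2, p_comply, p_observed, **kw):
    """Inflate the per-arm size: diluting the effect by the compliance rate
    grows n by 1/p_comply**2, and missing outcomes by 1/p_observed."""
    return math.ceil(n_per_arm(p1, p2, **kw) / (p_comply ** 2 * p_observed))
```

For example, detecting 0.6 versus 0.4 needs roughly 94 subjects per arm; with 80% compliance and 90% observed outcomes the heuristic inflates this to 164.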

14.
To compare two exponential distributions, with or without censoring, two different statistics are often used: one is the F test proposed by Cox (1953), and the other is based on the efficient score procedure. In this paper, the relationship between these tests is investigated, and it is shown that the efficient score test is a large-sample approximation of the F test.
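The F test referenced above is simple to state for the uncensored case: under the null of equal means, the ratio of the two sample means follows an F(2n1, 2n2) distribution. A minimal sketch (uncensored samples only; function name is ours):

```python
import numpy as np
from scipy.stats import f

def cox_f_test(x, y):
    """Cox-type F test for H0: equal means of two uncensored exponential samples.
    Under H0, xbar / ybar ~ F(2*n1, 2*n2). Returns (statistic, two-sided p)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    stat = x.mean() / y.mean()
    n1, n2 = len(x), len(y)
    # two-sided p-value: double the smaller tail probability
    p_two = 2 * min(f.cdf(stat, 2 * n1, 2 * n2), f.sf(stat, 2 * n1, 2 * n2))
    return stat, p_two
```

When the two samples have identical means the statistic is 1, which is the median of F(2n, 2n), so the two-sided p-value is 1; a tenfold mean ratio on samples of size three is already significant at the 5% level.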

15.
León LF, Tsai CL, Biometrics 2004, 60(1):75-84
We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.

16.
Neurobehavioral tests are used to assess early neonatal behavioral functioning and to detect effects of prenatal and perinatal events. However, common measurement and data collection methods create specific data features requiring thoughtful statistical analysis. Assessment response measurements are often ordinal scaled, not interval scaled; the magnitude of the physical response may not directly correlate with the underlying state of developmental maturity; and a subject's assessment record may be censored. Censoring occurs when the milestone is exhibited at the first test (left censoring), when the milestone is not exhibited before the end of the study (right censoring), or when the exact age of attaining the milestone is uncertain due to irregularly spaced test sessions or missing data (interval censoring). Such milestone data are best analyzed using survival analysis methods. Two methods are contrasted: the nonparametric Kaplan-Meier estimator and fully parametric interval-censored regression. The methods represent the spectrum of survival analyses in terms of parametric assumptions, ability to test multiple predictors simultaneously, and accommodation of different types of censoring. Both methods were used to assess birth weight status and sex effects on 14 separate test items from assessments of 255 healthy pigtailed macaques. The methods gave almost identical results. Compared to the normal birth weight group, the low birth weight group had significantly delayed development on all but one test item. Within the low birth weight group, males had significantly delayed development for some responses relative to females.
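The nonparametric Kaplan-Meier estimator mentioned above can be sketched in a few lines of NumPy. This is an illustration of the standard product-limit formula only (right censoring, no confidence bands); a vetted survival package should be preferred in practice.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier product-limit survival estimates at each distinct event time.
    events: 1 = milestone/event observed, 0 = right-censored.
    Returns {event_time: estimated S(t)}."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    s, out = 1.0, {}
    for t in np.unique(times[events == 1]):          # only observed event times
        n = (times >= t).sum()                       # number still at risk at t
        d = ((times == t) & (events == 1)).sum()     # events at t
        s *= 1 - d / n                               # product-limit update
        out[float(t)] = s
    return out
```

For times [1, 2, 3] with the third subject censored, the curve steps to 2/3 at t=1 and 1/3 at t=2, and the censored time contributes no step.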

17.
In ophthalmologic studies, measurements obtained from both eyes of an individual are often highly correlated, and ignoring the correlation can lead to incorrect inferences. An asymptotic method was proposed by Tang and others (2008) for testing the equality of proportions between two groups under Rosner's model. In this article, we investigate three testing procedures for general g ≥ 2 groups. Our simulation results show that the score testing procedure usually provides satisfactory type I error control and reasonable power. The three procedures converge as the sample size increases. Examples from ophthalmologic studies are used to illustrate the proposed methods.

18.
The concept of adaptive two-stage designs is applied to the problem of testing the equality of several normal means against an ordered (monotone) alternative. The likelihood ratio test proposed by Bartholomew is known to have favorable power properties when testing against a monotone trend. Tests based on contrasts provide a flexible way to incorporate available information about the pattern of the unknown true means through appropriate specification of the scores. The basic idea of the presented concept is the combination of Bartholomew's test (first stage) with an "adaptive score test" (second stage) that utilizes the information from isotonic regression estimation at the first stage. In a Monte Carlo simulation study, the adaptive scoring procedure is compared to the non-adaptive two-stage procedure that uses Bartholomew's test at both stages. We found that adaptive scoring may improve the power of the two-stage design, particularly when the sample size at the first stage is considerably larger than at the second stage.
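The isotonic regression step underlying such adaptive scores is usually computed with the classic pool-adjacent-violators algorithm (PAVA). A minimal equal-weights sketch (function name is ours; this illustrates the algorithm generically, not the paper's implementation):

```python
def pava(y):
    """Pool-adjacent-violators: least-squares nondecreasing fit, equal weights.
    Adjacent blocks that violate monotonicity are merged into their mean."""
    blocks = [[v, 1] for v in y]                 # each block: [mean, size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violator: pool the two blocks
            m1, w1 = blocks[i]
            m2, w2 = blocks[i + 1]
            blocks[i] = [(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2]
            del blocks[i + 1]
            i = max(i - 1, 0)                    # pooling may create a new violator
        else:
            i += 1
    out = []                                     # expand blocks back to full length
    for m, w in blocks:
        out.extend([m] * w)
    return out
```

For example, the decreasing-then-increasing sequence [3, 1, 2] is pooled into the constant fit [2, 2, 2], while an already monotone sequence is returned unchanged.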

19.
Many group-sequential test procedures have been proposed to meet the ethical need for interim analyses. All of these papers, however, focus on the situation with only one standard control and one experimental treatment. In this paper, we consider a trial with one standard control but more than one experimental treatment, and develop a group-sequential test procedure accommodating any finite number of experimental treatments. To facilitate practical application of the proposed procedure, we have derived, via Monte Carlo simulation, the critical values for α-levels of 0.01, 0.05, and 0.10, for 2 to 4 experimental treatments and for 1 to 10 group-sequential analyses. Compared with a single non-sequential analysis with reasonable power (say, 0.80), we demonstrate that the proposed procedure may substantially reduce the required sample size without seriously sacrificing the original power.

20.
Hsieh JJ, Ding AA, Wang W, Biometrics 2011, 67(3):719-729
Recurrent events data are commonly seen in longitudinal follow-up studies. Dependent censoring often occurs due to death or to exclusion from the study related to the disease process. In this article, we assume flexible marginal regression models for the recurrence process and the dependent censoring time without specifying their dependence structure. The proposed model generalizes the approach of Ghosh and Lin (2003, Biometrics 59, 877-885). The technique of artificial censoring provides a way to maintain the homogeneity of the hypothetical error variables under dependent censoring. Here we propose to apply this technique to two Gehan-type statistics: one considers only order information for pairs, whereas the other utilizes the additional information of observed censoring times available for recurrence data. A model-checking procedure is also proposed to assess the adequacy of the fitted model. The proposed estimators have good asymptotic properties, and their finite-sample performance is examined via simulations. Finally, the proposed methods are applied to analyze data from the AIDS Linked to the IntraVenous Experience (ALIVE) cohort.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)