Similar Articles
A total of 20 similar articles were found.
1.
2.
This paper discusses the application of randomization tests to censored survival distributions. The three types of censoring considered are those designated by Miller (1981) as Type 1 (fixed time termination), Type 2 (termination of experiment at the r-th failure), and random censoring. Examples utilize the Gehan scoring procedure. Randomization tests for which computer programs already exist can be applied to a variety of experimental designs, regardless of the presence of censored observations.
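A minimal sketch of how such a randomization test can be carried out with Gehan scores is given below. This is illustrative Python, not the paper's own program: the function names, the simplified tie handling, and the Monte Carlo approximation of the full randomization distribution are all assumptions made here.

```python
# Hedged sketch: randomization test on Gehan scores for two right-censored samples.
import numpy as np

def gehan_scores(time, event):
    """Gehan score of each subject: (# it definitely outlives) - (# that definitely outlive it).
    Ties are treated as indeterminate in this simplified sketch."""
    time, event = np.asarray(time, float), np.asarray(event)
    scores = np.zeros(len(time))
    for i in range(len(time)):
        wins = np.sum((time[i] > time) & (event == 1))      # i observed beyond j's event
        losses = np.sum((time > time[i]) & (event[i] == 1))  # j observed beyond i's event
        scores[i] = wins - losses
    return scores

def gehan_randomization_test(time, event, group, n_perm=5000, seed=0):
    """Two-sided Monte Carlo randomization p-value for the sum of Gehan scores in group 1."""
    rng = np.random.default_rng(seed)
    s = gehan_scores(time, event)
    group = np.asarray(group)
    obs = s[group == 1].sum()
    n1 = int((group == 1).sum())
    perm = np.array([s[rng.permutation(len(s))[:n1]].sum() for _ in range(n_perm)])
    return np.mean(np.abs(perm) >= abs(obs))
```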

3.
The asymptotic equivalence of nonparametric tests and parametric tests based on rank-transformed data (Conover and Iman, 1981) can be extended to the case of censoring. This paper presents generalized rank transformations for analyses of censored data, of interval-censored data and of survival data with uncertain causes of death. A Monte Carlo study and an analysis of leukemia remission times demonstrate excellent agreement of the suggested procedures with Gehan's (1965) and Prentice's (1978) tests.

4.
The two-sided Simes test is known to control the type I error rate with bivariate normal test statistics. For one-sided hypotheses, control of the type I error rate requires that the correlation between the bivariate normal test statistics is non-negative. In this article, we introduce a trimmed version of the one-sided weighted Simes test for two hypotheses which rejects if (i) the one-sided weighted Simes test rejects and (ii) both p-values are below one minus the respective weighted Bonferroni adjusted level. We show that the trimmed version controls the type I error rate at nominal significance level α if (i) the common distribution of test statistics is point symmetric and (ii) the two-sided weighted Simes test at level 2α controls the level. These assumptions apply, for instance, to bivariate normal test statistics with arbitrary correlation. In a simulation study, we compare the power of the trimmed weighted Simes test with the power of the weighted Bonferroni test and the untrimmed weighted Simes test. An additional result of this article ensures type I error rate control of the usual weighted Simes test under a weak version of the positive regression dependence condition for the case of two hypotheses. This condition is shown to apply to the two-sided p-values of one- or two-sample t-tests for bivariate normal endpoints with arbitrary correlation and to the corresponding one-sided p-values if the correlation is non-negative. The Simes test for such types of bivariate t-tests has not been considered before. According to our main result, the trimmed version of the weighted Simes test then also applies to the one-sided bivariate t-test with arbitrary correlation.
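For two hypotheses, the rejection rule described above can be written down in a few lines. The sketch below is a hedged illustration: the ordered form used for the weighted Simes step is the usual one and is an assumption here, not a quotation of the article's notation.

```python
# Hedged sketch of the trimmed one-sided weighted Simes test for two hypotheses.
def weighted_simes_reject(p1, p2, w1, w2, alpha):
    """Weighted Simes test of the intersection hypothesis (weights sum to one)."""
    (p_small, w_small), (p_large, _) = sorted([(p1, w1), (p2, w2)])
    return (p_small <= w_small * alpha) or (p_large <= alpha)

def trimmed_weighted_simes_reject(p1, p2, w1, w2, alpha):
    """Reject only if the weighted Simes test rejects AND both p-values lie below
    one minus their respective weighted Bonferroni adjusted level."""
    trimmed = (p1 < 1 - w1 * alpha) and (p2 < 1 - w2 * alpha)
    return weighted_simes_reject(p1, p2, w1, w2, alpha) and trimmed

# Example with equal weights at one-sided level 0.025 (values are made up):
print(trimmed_weighted_simes_reject(0.012, 0.20, 0.5, 0.5, 0.025))  # True
```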

5.
6.
Heinze G, Gnant M, Schemper M (2003). Biometrics 59(4), 1151-1157.
The asymptotic log-rank and generalized Wilcoxon tests are the standard procedures for comparing samples of possibly censored survival times. For comparison of samples of very different sizes, an exact test is available that is based on a complete permutation of log-rank or Wilcoxon scores. While the asymptotic tests do not keep their nominal sizes if sample sizes differ substantially, the exact complete permutation test requires equal follow-up of the samples. Therefore, we have developed and present two new exact tests also suitable for unequal follow-up. The first of these is an exact analogue of the asymptotic log-rank test and conditions on observed risk sets, whereas the second approach permutes survival times while conditioning on the realized follow-up in each group. In an empirical study, we compare the new procedures with the asymptotic log-rank test, the exact complete permutation test, and an earlier proposed approach that equalizes the follow-up distributions using artificial censoring. Results confirm highly satisfactory performance of the exact procedure conditioning on realized follow-up, particularly in case of unequal follow-up. The advantage of this test over other options of analysis is finally exemplified in the analysis of a breast cancer study.
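As a point of reference, the classical exact procedure mentioned above permutes log-rank (Savage-type) scores between the groups. The sketch below illustrates only that reference procedure, with a Monte Carlo approximation of the complete permutation distribution; it is not the authors' new conditional tests, and building the scores from a simple Nelson-Aalen estimate is an assumption of this sketch.

```python
# Hedged sketch: permutation test on log-rank scores (score_i = event_i - cumulative hazard at time_i).
import numpy as np

def logrank_scores(time, event):
    """Log-rank (Savage-type) scores; assumes distinct observed times for simplicity."""
    order = np.argsort(time)
    t, d = np.asarray(time, float)[order], np.asarray(event, float)[order]
    at_risk = np.arange(len(t), 0, -1)
    cumhaz = np.cumsum(d / at_risk)          # Nelson-Aalen estimate at each observed time
    scores = np.empty(len(t))
    scores[order] = d - cumhaz
    return scores

def permutation_logrank_test(time, event, group, n_perm=5000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the sum of scores in group 1."""
    rng = np.random.default_rng(seed)
    s = logrank_scores(time, event)
    group = np.asarray(group)
    obs = s[group == 1].sum()
    n1 = int((group == 1).sum())
    perm = np.array([s[rng.permutation(len(s))[:n1]].sum() for _ in range(n_perm)])
    return np.mean(np.abs(perm) >= abs(obs))
```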

7.
To compare two exponential distributions with or without censoring, two different statistics are often used; one is the F test proposed by Cox (1953) and the other is based on the efficient score procedure. In this paper, the relationship between these tests is investigated and it is shown that the efficient score test is a large-sample approximation of the F test.
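For intuition, Cox's F test in the uncensored case reduces to referring the ratio of the two sample means to an F distribution, because twice the sum of n exponential observations divided by the mean is chi-squared with 2n degrees of freedom. Below is a minimal sketch of that uncensored case only; names are illustrative, and handling censored data would require replacing the sample means with total time on test divided by the number of observed failures.

```python
# Hedged sketch of Cox's (1953) F test for two uncensored exponential samples.
import numpy as np
from scipy import stats

def cox_f_test(x, y):
    """Ratio of sample means referred to F(2*n1, 2*n2); returns (F, two-sided p-value)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    f = x.mean() / y.mean()                 # ~ F(2*n1, 2*n2) under equal exponential means
    cdf = stats.f.cdf(f, 2 * n1, 2 * n2)
    return f, 2 * min(cdf, 1 - cdf)

rng = np.random.default_rng(1)
print(cox_f_test(rng.exponential(1.0, 30), rng.exponential(1.5, 25)))
```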

8.
Summary: The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left-truncated and/or right-censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period as a deterministic external time-varying covariate, which can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
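The weighted empirical-distribution form of the cause-specific cumulative incidence estimator mentioned above can be sketched directly for the simplest setting with right censoring only (no left truncation): each observed cause-specific event is weighted by the inverse of a Kaplan-Meier estimate of the censoring survivor function. The names below are illustrative, and this sketch does not reproduce the authors' R implementation of the weighted Fine and Gray model.

```python
# Hedged sketch: cause-specific cumulative incidence as a weighted ECDF with
# inverse-probability-of-censoring weights (right censoring only, distinct times assumed).
import numpy as np

def censoring_survival(time, status):
    """Kaplan-Meier survivor function of the censoring distribution (status == 0 means censored)."""
    order = np.argsort(time)
    t = np.asarray(time, float)[order]
    cens = (np.asarray(status)[order] == 0).astype(float)
    at_risk = np.arange(len(t), 0, -1)
    surv = np.cumprod(1 - cens / at_risk)
    # Evaluate G(s-): product over censoring times strictly before s
    return lambda s: np.concatenate([[1.0], surv])[np.searchsorted(t, s, side="left")]

def cumulative_incidence(time, status, cause, t_grid):
    """Estimate P(T <= t, event of type `cause`); status = 0 censored, otherwise a cause code."""
    time, status = np.asarray(time, float), np.asarray(status)
    G = censoring_survival(time, status)
    w = np.where(status == cause, 1.0 / np.maximum(G(time), 1e-8), 0.0)
    return np.array([np.mean(w * (time <= t)) for t in t_grid])
```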

9.
Summary: Cook, Gold, and Li (2007, Biometrics 63, 540-549) extended the Kulldorff (1997, Communications in Statistics 26, 1481-1496) scan statistic for spatial cluster detection to survival-type observations. Their approach was based on the score statistic and they proposed a permutation distribution for the maximum of score tests. The score statistic makes it possible to apply the scan statistic idea to models including explanatory variables. However, we show that the permutation distribution requires strong assumptions of independence between the potential cluster and both censoring and explanatory variables. In contrast, we present an approach using the asymptotic distribution of the maximum of score statistics in a manner not requiring these assumptions.

10.
Summary: We discuss the issue of identifiability of models for multiple dichotomous diagnostic tests in the absence of a gold standard (GS) test. Data arise as multinomial or product-multinomial counts depending upon the number of populations sampled. Models are generally posited in terms of population prevalences, test sensitivities and specificities, and test dependence terms. It is commonly believed that if the degrees of freedom in the data meet or exceed the number of parameters in a fitted model then the model is identifiable. Goodman (1974, Biometrika 61, 215-231) established long ago that this is not the case. We discuss currently available models for multiple tests and argue in favor of an extension of a model that was developed by Dendukuri and Joseph (2001, Biometrics 57, 158-167). Subsequently, we further develop Goodman's technique and make geometric arguments to give further insight into the nature of models that lack identifiability. We present illustrations using simulated and real data.

11.
In order to study family-based association in the presence of linkage, we extend a generalized linear mixed model proposed for genetic linkage analysis (Lebrec and van Houwelingen (2007), Human Heredity 64, 5-15) by adding a genotypic effect to the mean. The corresponding score test is a weighted family-based association test statistic, where the weight depends on the linkage effect and on other genetic and shared environmental effects. For testing of genetic association in the presence of gene-covariate interaction, we propose a linear regression method where the family-specific score statistic is regressed on family-specific covariates. Both statistics are straightforward to compute. Simulation results show that adjusting the weight for the within-family variance structure may be a powerful approach in the presence of environmental effects. The test statistic for genetic association in the presence of gene-covariate interaction improved the power for detecting association. For illustration, we analyze the rheumatoid arthritis data from GAW15. Adjusting for smoking and anti-cyclic citrullinated peptide increased the significance of the association with the DR locus.

12.
Recently, Brown, Hwang, and Munk (1998) proposed an unbiased test for the average equivalence problem which improves noticeably in power on the standard two one-sided tests procedure. Nevertheless, from a practical point of view there are some objections against the use of this test, mainly addressed to the ‘unusual’ shape of its critical region. We show that every unbiased test has a critical region with such an ‘unusual’ shape. Therefore, we discuss three (biased) modifications of the unbiased test. We come to the conclusion that a suitable modification represents a good compromise between a most powerful test and a test with an appealing shape of its critical region. To facilitate the use of these tests, figures containing the rejection regions are given. Finally, we compare all tests in an example from neurophysiology. This shows that it is beneficial to use these improved tests instead of the two one-sided tests procedure.

13.
Summary: This article proposes new tests to compare the vaccine and placebo groups in randomized vaccine trials when a small fraction of volunteers become infected. A simple approach that is consistent with the intent-to-treat principle is to assign a score, say W, equal to 0 for the uninfecteds and some postinfection outcome X > 0 for the infecteds. One can then test the equality of this skewed distribution of W between the two groups. This burden of illness (BOI) test was introduced by Chang, Guess, and Heyse (1994, Statistics in Medicine 13, 1807-1814). If infections are rare, the massive number of 0s in each group tends to dilute the vaccine effect and this test can have poor power, particularly if the X's are not close to zero. Comparing X in just the infecteds is no longer a comparison of randomized groups and can produce misleading conclusions. Gilbert, Bosch, and Hudgens (2003, Biometrics 59, 531-541) and Hudgens, Hoering, and Self (2003, Statistics in Medicine 22, 2281-2298) introduced tests of the equality of X in a subgroup, the principal stratum of those “doomed” to be infected under either randomization assignment. This can be more powerful than the BOI approach, but requires unexaminable assumptions. We suggest new “chop-lump” Wilcoxon and t-tests (CLW and CLT) that can be more powerful than the BOI tests in certain situations. When the number of volunteers in each group is equal, the chop-lump tests remove an equal number of zeros from both groups and then perform a test on the remaining W's, which are mostly >0. A permutation approach provides a null distribution. We show that under local alternatives, the CLW test is always more powerful than the usual Wilcoxon test provided the true vaccine and placebo infection rates are the same. We also identify the crucial role of the “gap” between 0 and the X's on power for the t-tests. The chop-lump tests are compared to established tests via simulation for planned HIV and malaria vaccine trials. A reanalysis of the first phase III HIV vaccine trial is used to illustrate the method.
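A hedged sketch of the chop-lump Wilcoxon idea for equal-sized groups follows. Removing the minimum per-arm count of zeros from each arm is our reading of the description above rather than necessarily the authors' exact rule, the loop only approximates the permutation null, and the sketch assumes each arm keeps at least one positive score after chopping.

```python
# Hedged sketch: chop-lump Wilcoxon test for two equal-sized arms with many zero scores.
import numpy as np
from scipy import stats

def chop_lump_statistic(w_a, w_b):
    """Drop an equal number of zeros from both arms, then compute a rank-sum statistic."""
    drop = min(int(np.sum(w_a == 0)), int(np.sum(w_b == 0)))
    a = np.sort(w_a)[drop:]        # scores are >= 0, so sorting puts the zeros first
    b = np.sort(w_b)[drop:]
    return stats.ranksums(a, b).statistic

def chop_lump_test(w_a, w_b, n_perm=2000, seed=0):
    """Two-sided Monte Carlo permutation p-value for the chop-lump statistic."""
    rng = np.random.default_rng(seed)
    w_a, w_b = np.asarray(w_a, float), np.asarray(w_b, float)
    obs = chop_lump_statistic(w_a, w_b)
    pooled, n = np.concatenate([w_a, w_b]), len(w_a)
    perm = np.empty(n_perm)
    for k in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm[k] = chop_lump_statistic(pooled[idx[:n]], pooled[idx[n:]])
    return np.mean(np.abs(perm) >= abs(obs))
```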

14.
Adaptive designs were originally developed for independent and uniformly distributed p-values. There are trial settings where independence is not satisfied or where it may not be possible to check whether it is satisfied. In these cases, the test statistics and p-values of each stage may be dependent. Since the probability of a type I error for a fixed adaptive design depends on the true dependence structure between the p-values of the stages, control of the type I error rate might be endangered if the dependence structure is not taken into account adequately. In this paper, we address the problem of controlling the type I error rate in two-stage adaptive designs if any dependence structure between the test statistics of the stages is admitted (worst-case scenario). For this purpose, we pursue a copula approach to adaptive designs. For two-stage adaptive designs without futility stop, we derive the probability of a type I error in the worst case, that is, for the most adverse dependence structure between the p-values of the stages. Explicit analytical considerations are performed for the class of inverse normal designs. A comparison with the significance level for independent and uniformly distributed p-values is performed. For inverse normal designs without futility stop and equally weighted stages, it turns out that correcting for the worst case is too conservative as compared to a simple Bonferroni design.
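For orientation, the inverse normal design class discussed above combines the stage-wise p-values through weighted normal quantiles. The sketch below shows only the standard combination rule for independent, uniformly distributed p-values and equally weighted stages; it does not implement the paper's worst-case (copula-based) correction, and the parameter defaults are illustrative.

```python
# Hedged sketch: standard two-stage inverse normal combination test (no futility stop).
import numpy as np
from scipy import stats

def inverse_normal_reject(p1, p2, alpha=0.025, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Reject if the weighted sum of stage-wise z-scores exceeds the normal 1-alpha quantile.
    Requires w1**2 + w2**2 == 1; exact level alpha only for independent uniform p-values."""
    z = w1 * stats.norm.ppf(1 - p1) + w2 * stats.norm.ppf(1 - p2)
    return z >= stats.norm.ppf(1 - alpha)

print(inverse_normal_reject(0.04, 0.03))  # True for these illustrative stage p-values
```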

15.
We consider uniformly most powerful (UMP) as well as uniformly most powerful unbiased (UMPU) tests and their non-randomized versions for certain hypotheses concerning a binomial parameter. It will be shown that the power function of a UMP(U) test based on sample size n can coincide on the entire parameter space with the power function of the corresponding test based on sample size n + 1. A complete characterization of this paradox will be derived. Apart from some exceptional cases for two-sided tests and equivalence tests, the paradox appears if and only if a test based on sample size n is non-randomized.

16.
Food web structure and dynamics depend on relationships between the body sizes of predators and their prey. Species-based and community-wide estimates of preferred and realized predator-prey mass ratios (PPMR) are required inputs to size-based size spectrum models of marine communities, food webs, and ecosystems. Here, we clarify differences between PPMR definitions in different size spectrum models, in particular differences between PPMR measurements weighting prey abundance in individual predators by biomass (rbio) and by numbers (rnum). We argue that the former weighting generates PPMR as usually conceptualized in equilibrium (static) size spectrum models while the latter usually applies to dynamic models. We use diet information from 170,689 individuals of 34 species of fish in Alaskan marine ecosystems to calculate both PPMR metrics. Using hierarchical models, we examine how explained variance in these metrics changed with predator body size, predator taxonomic resolution, and spatial resolution. In the hierarchical analysis, variance in both metrics emerged primarily at the species level and substantially less variance was associated with other (higher) taxonomic levels or with spatial resolution. This suggests that changes in species composition are the main drivers of community-wide mean PPMR. At all levels of analysis, relationships between weighted mean rbio or weighted mean rnum and predator mass tended to be dome-shaped. Weighted mean rnum values, for species and community-wide, were approximately an order of magnitude higher than weighted mean rbio, reflecting the consistent numeric dominance of small prey in predator diets. As well as increasing understanding of the drivers of variation in PPMR and providing estimates of PPMR in the north Pacific Ocean, our results demonstrate that rbio and rnum, as well as their corresponding weighted means for any defined group of predators, are not directly substitutable. When developing equilibrium size-based models based on bulk energy flux or comparing PPMR estimates derived from the relationship between body mass and trophic level with those based on diet analysis, weighted mean rbio is a more appropriate measure of PPMR. When calibrating preference PPMR in dynamic size spectrum models, weighted mean rnum will be a more appropriate measure of PPMR.
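The distinction between the two weightings can be made concrete for a single predator's stomach contents: the same prey-specific mass ratios are averaged with numbers weights for rnum and with biomass weights for rbio. The sketch below is illustrative only; averaging on the log scale and the variable names are assumptions, not the authors' exact computation.

```python
# Hedged sketch: numbers-weighted vs biomass-weighted PPMR for one predator's diet.
import numpy as np

def ppmr(pred_mass, prey_mass, prey_count):
    prey_mass = np.asarray(prey_mass, float)
    counts = np.asarray(prey_count, float)
    log_ratio = np.log(pred_mass / prey_mass)
    r_num = np.exp(np.average(log_ratio, weights=counts))               # weight by numbers
    r_bio = np.exp(np.average(log_ratio, weights=counts * prey_mass))   # weight by biomass
    return r_num, r_bio

# Many small prey plus one large prey item: r_num exceeds r_bio, as the abstract describes.
print(ppmr(pred_mass=1000.0, prey_mass=[1.0, 50.0], prey_count=[200, 1]))
```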

17.
Objective: To test the hypothesis that low-income African-American preschool children would have a higher BMI if their mothers reported greater “restriction” and “control” in feeding and if mothers reported that children showed greater “food responsiveness” and “desire to drink.” In addition, to test whether higher maternal “pressure to eat” would be associated with lower child BMI. Research Methods and Procedures: A questionnaire was completed by 296 low-income African-American mothers of preschool children. It assessed three constructs on maternal feeding strategies (“restriction,” “pressure to eat,” and “control”) and two on child eating behaviors (“food responsiveness” and “desire to drink”). Children's BMI was measured, and mothers' BMI was self-reported. Results: The mean (standard deviation) BMI z-score of the children was 0.34 (1.5), and 44% of the mothers were obese (BMI ≥30 kg/m2). Only maternal “pressure to eat” had a significant overall association with child BMI z-score (r = −0.16, p < 0.01). Both maternal “restriction” and “control” were positively associated with children's BMI z-score in the case of obese mothers (r = 0.20, p = 0.03 and r = 0.24, p = 0.007, respectively), but this was not so in the case of non-obese mothers (r = −0.16, p = 0.05 and r = −0.07, p = 0.39, respectively). Discussion: Among low-income African Americans, the positive association between maternal restriction and control in feeding and their preschoolers' BMI was limited to obese mothers. Relations between parent feeding strategies and child weight status in this population may differ on the basis of maternal weight status.

18.
In the case-parent trios design, the association between a multi-allelic candidate gene and a disease can be detected by using the maximum of score tests (max-score) when the mode of inheritance is known. We apply the maximum of the max-score statistics and the maximum of likelihood ratio statistics when the genetic model is unknown and examine their robustness properties compared to the max-score statistics. The simulation results demonstrate that the two maximum robust tests are more efficacious and robust across all genetic models compared with the three max-score tests. Moreover, in most situations, the maximum of the max-score tests seems to be more powerful than the maximum of the likelihood ratio tests. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

19.

Phylogenetic networks are a type of leaf-labelled, acyclic, directed graph used by biologists to represent the evolutionary history of species whose past includes reticulation events. A phylogenetic network is tree-child if each non-leaf vertex is the parent of a tree vertex or a leaf. Up to a certain equivalence, it has been recently shown that, under two different types of weightings, edge-weighted tree-child networks are determined by their collection of distances between each pair of taxa. However, the size of these collections can be exponential in the size of the taxa set. In this paper, we show that, if we have no “shortcuts”, that is, the networks are normal, the same results are obtained with only a quadratic number of inter-taxa distances by using the shortest distance between each pair of taxa. The proofs are constructive and give cubic-time algorithms in the size of the taxa sets for building such weighted networks.


20.
This paper applies the inverse probability weighted least-squares method to predict total medical cost in the presence of censored data. Since survival time and medical costs may be subject to right censoring and therefore are not always observable, the ordinary least-squares approach cannot be used to assess the effects of explanatory variables. We demonstrate how inverse probability weighted least-squares estimation provides consistent, asymptotically normal coefficient estimates with easily computable standard errors. In addition, to assess the effect of censoring on the coefficients, we develop a test comparing the ordinary least-squares and inverse probability weighted least-squares estimators. We demonstrate the methods developed by applying them to the estimation of cancer costs using Medicare claims data. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
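A minimal sketch of the inverse probability weighted least-squares idea is given below: subjects whose costs are fully observed are up-weighted by the reciprocal of a Kaplan-Meier estimate of the probability of remaining uncensored, and censored subjects receive weight zero. The estimator of the censoring distribution and all names are illustrative assumptions; the sketch omits the paper's standard errors and its test comparing the ordinary and weighted estimators.

```python
# Hedged sketch: inverse-probability-of-censoring weighted least squares for total cost.
import numpy as np

def censoring_survival(time, event):
    """Kaplan-Meier survivor function of the censoring distribution (event == 0 means censored).
    Distinct follow-up times are assumed for simplicity."""
    order = np.argsort(time)
    t = np.asarray(time, float)[order]
    cens = (np.asarray(event)[order] == 0).astype(float)
    at_risk = np.arange(len(t), 0, -1)
    surv = np.cumprod(1 - cens / at_risk)
    # Evaluate the estimated probability of remaining uncensored at s
    return lambda s: np.concatenate([[1.0], surv])[np.searchsorted(t, s, side="right")]

def ipw_least_squares(cost, time, event, X):
    """Weighted least squares of observed total cost on covariates, complete cases only."""
    cost, time, event = map(np.asarray, (cost, time, event))
    G = censoring_survival(time, event)
    w = np.where(event == 1, 1.0 / np.maximum(G(time), 1e-8), 0.0)
    Xd = np.column_stack([np.ones(len(cost)), np.asarray(X, float)])
    XtW = Xd.T * w                                  # weight each observation (column-wise)
    return np.linalg.solve(XtW @ Xd, XtW @ cost)    # intercept first, then slopes
```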

