Similar Literature
20 similar documents found.
1.
Ten Have TR, Localio AR. Biometrics 1999, 55(4):1022-1029
We extend an approach for estimating random effects parameters under a random intercept and slope logistic regression model to include standard errors and, thereby, confidence intervals. The procedure entails numerical integration to yield posterior empirical Bayes (EB) estimates of random effects parameters and their corresponding posterior standard errors. We incorporate an adjustment of the standard error due to Kass and Steffey (KS; 1989, Journal of the American Statistical Association 84, 717-726) to account for the variability in estimating the variance component of the random effects distribution. In assessing health care providers with respect to adult pneumonia mortality, comparisons are made with the penalized quasi-likelihood (PQL) approximation approach of Breslow and Clayton (1993, Journal of the American Statistical Association 88, 9-25) and a Bayesian approach. To make comparisons with an EB method previously reported in the literature, we apply these approaches to crossover trial data previously analyzed with the estimating equations EB approach of Waclawiw and Liang (1994, Statistics in Medicine 13, 541-551). We also perform simulations to compare the proposed KS and PQL approaches. The two approaches lead to EB estimates of random effects parameters with similar asymptotic bias. However, with many clusters of small size, the proposed KS approach achieves better coverage of nominal 95% confidence intervals for random effects estimates than the PQL procedures. With a few large clusters, the PQL approach performs better than the KS adjustment. These simulation results broadly agree with those of the data analyses.
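A minimal sketch (not from the paper; all names illustrative) of the empirical Bayes step: the posterior mean and SD of one cluster's random intercept in a logistic model, obtained by Gauss-Hermite quadrature. The fixed effect `beta0` and random-effect SD `tau` are treated as known here; the paper estimates them and then inflates the posterior SD via the Kass-Steffey adjustment, which this sketch omits.

```python
# Hedged sketch: posterior EB mean/SD of one cluster's random intercept under
# a random-intercept logistic model, via Gauss-Hermite quadrature. beta0 and
# tau are assumed known (the paper estimates them; the Kass-Steffey SD
# inflation is omitted).
import numpy as np

def eb_random_intercept(y, beta0, tau, n_nodes=40):
    """Posterior mean/SD of b ~ N(0, tau^2) given Bernoulli outcomes y."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)  # N(0,1) rule
    b = tau * nodes                                   # candidate random effects
    p = 1.0 / (1.0 + np.exp(-(beta0 + b[:, None])))   # success prob at each node
    lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)  # cluster likelihood
    post = weights * lik                              # unnormalized posterior on grid
    post /= post.sum()
    mean = np.sum(post * b)
    sd = np.sqrt(np.sum(post * (b - mean) ** 2))
    return mean, sd

y = np.array([1, 1, 0, 1, 1, 1, 0, 1])                # one provider's outcomes
print(eb_random_intercept(y, beta0=-0.5, tau=1.0))
```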

2.
Noncompliance is a common problem in experiments involving randomized assignment of treatments, and standard analyses based on intention-to-treat or treatment received have limitations. An attractive alternative is to estimate the Complier-Average Causal Effect (CACE), which is the average treatment effect for the subpopulation of subjects who would comply under either treatment (Angrist, Imbens, and Rubin, 1996, Journal of the American Statistical Association 91, 444-472). We propose an extended general location model to estimate the CACE from data with noncompliance and missing data in the outcome and in baseline covariates. Models for both continuous and categorical outcomes and ignorable and latent ignorable (Frangakis and Rubin, 1999, Biometrika 86, 365-379) missing-data mechanisms are developed. Inferences for the models are based on the EM algorithm and Bayesian MCMC methods. We present results from simulations that investigate sensitivity to model assumptions and the influence of the missing-data mechanism. We also apply the method to data from a job search intervention for unemployed workers.
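As a concrete illustration (not the paper's model), the moment estimator of the CACE for complete data is the intention-to-treat effect on the outcome divided by the estimated complier fraction, valid under the Angrist-Imbens-Rubin assumptions. A sketch with simulated data; the paper's general location model additionally handles missing outcomes and covariates via EM/MCMC.

```python
# Hedged sketch: the moment (instrumental-variable) estimator of the CACE for
# complete data. Names and the simulated data are purely illustrative.
import numpy as np

def cace_wald(z, d, y):
    """z: randomized assignment, d: treatment received, y: outcome."""
    itt_y = y[z == 1].mean() - y[z == 0].mean()   # ITT effect on outcome
    itt_d = d[z == 1].mean() - d[z == 0].mean()   # estimated complier fraction
    return itt_y / itt_d

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)                 # random assignment
complier = rng.random(n) < 0.6            # latent compliance status
d = (z == 1) & complier                   # treatment received only if assigned and complier
y = 1.0 * d + rng.normal(size=n)          # true effect 1.0 among compliers
print(cace_wald(z, d, y))                 # should land near 1.0
```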

3.
We consider a nonparametric (NP) approach to the analysis of repeated measures designs with censored data. Using the NP model of Akritas and Arnold (1994, Journal of the American Statistical Association 89, 336-343) for marginal distributions, we present test procedures for the NP hypotheses of no main effects, no interaction, and no simple effects. This extends the existing NP methodology for such designs (Wei and Lachin, 1984, Journal of the American Statistical Association 79, 653-661). The procedures do not require any modeling assumptions and should be useful in cases where the assumptions of proportional hazards or location shift fail to be satisfied. The large-sample distribution of the test statistics is based on an i.i.d. representation for Kaplan-Meier integrals. The testing procedures apply also to ordinal data and to data with ties. Useful small-sample approximations are presented, and their performance is examined in a simulation study. Finally, the methodology is illustrated with two real-life examples, one with censored and one with missing data. We show that one of the data sets does not conform to any set of assumptions underlying the available methods, and that the present method provides a useful additional analysis even when a data set does conform to modeling assumptions.
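The test statistics rest on Kaplan-Meier integrals; as background, a bare-bones Kaplan-Meier estimator is sketched below (illustrative only; a vetted implementation such as lifelines would be used in practice).

```python
# Hedged sketch: a minimal Kaplan-Meier estimator, the building block of the
# Kaplan-Meier integrals used in the paper's test statistics.
import numpy as np

def kaplan_meier(time, event):
    """Return sorted times and the KM survival estimate just after each."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time) - np.arange(len(time))   # risk set size at each ordered time
    surv, s = [], 1.0
    for e, r in zip(event, at_risk):
        if e:                                    # only events change the estimate
            s *= 1.0 - 1.0 / r
        surv.append(s)
    return time, np.array(surv)

t = np.array([2.0, 3.0, 3.0, 5.0, 8.0, 9.0])
d = np.array([1, 1, 0, 1, 0, 1])                 # 1 = event, 0 = censored
print(kaplan_meier(t, d))
```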

4.
5.
Chang CC, Weissfeld LA. Biometrics 1999, 55(4):1114-1119
We discuss two diagnostic methods for assessing the accuracy of the normal approximation to the likelihood-based confidence region for the Cox proportional hazards model with censored data. The proposed diagnostic methods are extensions of the contour measures of Hodges (1987, Journal of the American Statistical Association 82, 149-154) and Cook and Tsai (1990, Journal of the American Statistical Association 85, 770-777) and the curvature measures of Jennings (1986, Journal of the American Statistical Association 81, 471-476) and Cook and Tsai (1990). The methods are illustrated in a study of hepatocyte growth factor in patients with lung cancer and a Mayo Clinic randomized study of participants with primary biliary cirrhosis.
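A one-parameter analogue (not from the paper) of the discrepancy these diagnostics quantify: the Wald (normal-approximation) interval versus the likelihood-ratio interval for an exponential hazard with censored data. All data are illustrative.

```python
# Hedged sketch: Wald vs. likelihood-ratio 95% intervals for an exponential
# hazard rate with right censoring -- a toy analogue of the normal-vs-
# likelihood confidence-region contrast the paper's diagnostics measure.
import numpy as np
from scipy.stats import chi2
from scipy.optimize import brentq

t = np.array([1.2, 0.5, 3.1, 2.2, 0.9, 4.0, 1.7])
d = np.array([1, 1, 0, 1, 1, 0, 1])            # 1 = event, 0 = censored

events, total = d.sum(), t.sum()
lam_hat = events / total                       # MLE of the hazard rate

def loglik(lam):
    return events * np.log(lam) - lam * total

# Wald interval on the log scale; se(log lam_hat) = 1/sqrt(events)
se = 1.0 / np.sqrt(events)
wald = lam_hat * np.exp(np.array([-1.96 * se, 1.96 * se]))

# Likelihood-ratio interval: points where the log-likelihood drops by chi2/2
cut = loglik(lam_hat) - chi2.ppf(0.95, 1) / 2
lr = (brentq(lambda l: loglik(l) - cut, 1e-6, lam_hat),
      brentq(lambda l: loglik(l) - cut, lam_hat, 50.0))
print("Wald:", wald, " LR:", lr)
```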

6.
J Raz. Biometrics 1989, 45(3):851-871
The mixed-model analysis of variance (ANOVA), which is commonly applied to repeated measurements taken over time, depends on specialized assumptions about the error distribution and fails to exploit information contained in the ordering of the data points over time. This paper describes a procedure that overcomes these disadvantages while preserving familiar features of the mixed-model ANOVA. Group profiles are estimated by nonparametric smoothing of observed mean profiles. Group and time main effects, and the group-by-time interaction effect, are tested using randomization tests. Results of Zerbe (1979, Journal of the American Statistical Association 74, 215-221) are used to construct F-test approximations for the randomization tests of the group and group-by-time effects. A new approximate F-test for the time effect is proposed. A simulation study demonstrates that the approximations perform well and that smoothing increases the power of the tests for the time main effect and the group-by-time interaction. The procedure is applied to data on hormone levels in cows.
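A minimal sketch (illustrative; not the paper's F-approximations): a Monte Carlo randomization test for the group main effect, with group mean profiles smoothed by a simple moving average standing in for nonparametric smoothing.

```python
# Hedged sketch: a permutation test for a group effect on repeated measures,
# computed on smoothed group mean profiles. Simulated data; moving-average
# smoothing is a stand-in for the paper's nonparametric smoother.
import numpy as np

rng = np.random.default_rng(1)
n_per, n_time = 10, 12
groups = np.repeat([0, 1], n_per)
y = rng.normal(size=(2 * n_per, n_time)) + 0.8 * groups[:, None]  # group shift

def smooth(profile, w=3):
    return np.convolve(profile, np.ones(w) / w, mode="valid")

def stat(y, g):
    m0 = smooth(y[g == 0].mean(axis=0))        # smoothed group mean profiles
    m1 = smooth(y[g == 1].mean(axis=0))
    return np.sum((m0 - m1) ** 2)              # distance between profiles

obs = stat(y, groups)
perm = np.array([stat(y, rng.permutation(groups)) for _ in range(2000)])
print("permutation p-value:", (1 + np.sum(perm >= obs)) / 2001)
```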

7.
Song X, Huang Y. Biometrics 2005, 61(3):702-714
In the presence of covariate measurement error with the proportional hazards model, several functional modeling methods have been proposed. These include the conditional score estimator (Tsiatis and Davidian, 2001, Biometrika 88, 447-458), the parametric correction estimator (Nakamura, 1992, Biometrics 48, 829-838), and the nonparametric correction estimator (Huang and Wang, 2000, Journal of the American Statistical Association 95, 1209-1219), listed in order of increasingly weak assumptions on the error. Although they are all consistent, each suffers from potential difficulties with small samples and substantial measurement error. In this article, upon noting that the conditional score and parametric correction estimators are asymptotically equivalent in the case of normal error, we investigate their relative finite-sample performance and discover that the former is superior. This finding motivates a general refinement approach to the parametric and nonparametric correction methods. The refined correction estimators are asymptotically equivalent to their standard counterparts but have improved numerical properties and perform better when the standard estimates do not exist or are outliers. Simulation results and an application to an HIV clinical trial are presented.

8.
Variable Selection for Clustering with Gaussian Mixture Models
This article is concerned with variable selection for cluster analysis. The problem is treated as a model selection problem in the model-based cluster analysis context. A model generalizing that of Raftery and Dean (2006, Journal of the American Statistical Association 101, 168-178) is proposed to specify the role of each variable. This model does not require any prior assumption about a linear link between the selected and discarded variables. Models are compared with the Bayesian information criterion. Each variable's role is determined by an algorithm embedding two backward stepwise searches, one for variable selection in clustering and one for linear regression. Model identifiability is established, and the consistency of the resulting criterion is proved under regularity conditions. Numerical experiments on simulated datasets and a genomic application highlight the practical value of the procedure.
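A toy illustration (not the paper's algorithm) of the elementary comparison that drives such procedures: Gaussian mixtures fitted on different variable subsets and compared by BIC. scikit-learn's GaussianMixture is used; the subsets and cluster numbers shown are purely for demonstration.

```python
# Hedged sketch: BIC comparison of Gaussian mixture models across variable
# subsets. Informative variables should prefer 2 components; noise variables
# should prefer 1.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
informative = np.vstack([rng.normal(0, 1, (100, 2)),
                         rng.normal(4, 1, (100, 2))])   # two real clusters
noise = rng.normal(0, 1, (200, 3))                      # no cluster structure
X = np.hstack([informative, noise])

def bic(X, k):
    return GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X).bic(X)

for cols, label in [([0, 1], "informative"), ([2, 3, 4], "noise"),
                    (list(range(5)), "all")]:
    print(label, {k: round(bic(X[:, cols], k)) for k in (1, 2)})
```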

9.
The modeling of lifetime (i.e., cumulative) medical cost data in the presence of censored follow-up is complicated by induced informative censoring, rendering standard survival analysis tools invalid. With few exceptions, recently proposed nonparametric estimators for such data do not extend easily to handle covariate information. We propose to model the hazard function for lifetime cost endpoints using an adaptation of the HARE methodology (Kooperberg, Stone, and Truong, 1995, Journal of the American Statistical Association 90, 78-94). Linear splines and their tensor products are used to adaptively build a model that incorporates covariates and covariate-by-cost interactions without restrictive parametric assumptions. The informative censoring problem is handled using inverse probability of censoring weighted estimating equations. The proposed method is illustrated using simulation and also with data on the cost of dialysis for patients with end-stage renal disease.
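A minimal sketch of the inverse-probability-of-censoring weighting device (illustrative; the HARE-based modeling is not reproduced): each uncensored subject is weighted by the inverse of the Kaplan-Meier estimate of the censoring survivor function at its observed time, with events and censorings swapping roles.

```python
# Hedged sketch: IPCW weights. Censored subjects get weight zero; uncensored
# subjects get 1 / S_C(T-), where S_C is the KM estimate of the censoring
# distribution (censorings treated as the "events").
import numpy as np

def km_censoring(time, event):
    """KM estimate of the censoring survivor function just before each time."""
    order = np.argsort(time)
    s, surv = 1.0, np.empty(len(time))
    at_risk = len(time)
    for i in order:
        surv[i] = s                          # S_C just before this subject's time
        if event[i] == 0:                    # censorings play the role of events
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

t = np.array([2.0, 5.0, 3.0, 8.0, 4.0])
d = np.array([1, 0, 1, 1, 0])
w = d / km_censoring(t, d)                   # IPCW weights
print(w)
```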

10.
Zhou H, Chen J, Cai J. Biometrics 2002, 58(2):352-360
We study a semiparametric estimation method for the random effects logistic regression when there is auxiliary covariate information about the main exposure variable. We extend the semiparametric estimator of Pepe and Fleming (1991, Journal of the American Statistical Association 86, 108-113) to the random effects model using the best linear unbiased prediction approach of Henderson (1975, Biometrics 31, 423-448). The method can be used to handle the missing covariate or mismeasured covariate data problems in a variety of real applications. Simulation study results show that the proposed method outperforms the existing methods. We analyzed a data set from the Collaborative Perinatal Project using the proposed method and found that the use of DDT increases the risk of preterm births among U.S. children.

11.
In longitudinal studies and in clustered situations, binary and continuous response variables are often observed and need to be modeled together. In a recent publication, Dunson, Chen, and Harry (2003, Biometrics 59, 521-530) (DCH) propose a Bayesian approach for joint modeling of cluster size and binary and continuous subunit-specific outcomes and illustrate this approach with a developmental toxicity data example. In this note we demonstrate how standard software (PROC NLMIXED in SAS) can be used to obtain maximum likelihood estimates in an alternative parameterization of the model with a single cluster-level factor considered by DCH for that example. We also suggest that a more general model with additional cluster-level random effects provides a better fit to the data set. An apparent discrepancy between the estimates obtained by DCH and the estimates obtained earlier by Catalano and Ryan (1992, Journal of the American Statistical Association 87, 651-658) is also resolved. The issue of bias in inferences concerning the dose effect when cluster size is ignored is discussed. The maximum-likelihood approach considered herein is applicable to general situations with multiple clustered or longitudinally measured outcomes of different types and does not require prior specification or extensive programming.

12.
Survival estimation using splines
A nonparametric maximum likelihood procedure is given for estimating the survivor function from right-censored data. It approximates the hazard rate by a simple function such as a spline, with different approximations yielding different estimators. A special case is that proposed by Nelson (1969, Journal of Quality Technology 1, 27-52) and Altshuler (1970, Mathematical Biosciences 6, 1-11). The estimators are uniformly consistent and have the same asymptotic weak convergence properties as the Kaplan-Meier (1958, Journal of the American Statistical Association 53, 457-481) estimator. However, in small and in heavily censored samples, the simplest spline estimators have uniformly smaller mean squared error than do the Kaplan-Meier and Nelson-Altshuler estimators. The procedure is extended to estimate the baseline hazard rate and regression coefficients in the Cox (1972, Journal of the Royal Statistical Society, Series B 34, 187-220) proportional hazards model and is illustrated using experimental carcinogenesis data.
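For reference, the Nelson-Altshuler special case mentioned above: the survivor function is estimated as exp(-H(t)), with H the cumulative hazard from a step-function approximation. A sketch with Kaplan-Meier alongside for comparison (illustrative data).

```python
# Hedged sketch: Nelson-Aalen/Altshuler cumulative hazard and the exp(-H)
# survivor estimate, next to Kaplan-Meier on the same data.
import numpy as np

t = np.array([2.0, 3.0, 5.0, 5.0, 8.0, 11.0])
d = np.array([1, 1, 1, 0, 1, 0])              # 1 = event, 0 = censored

order = np.argsort(t)
t, d = t[order], d[order]
at_risk = len(t) - np.arange(len(t))          # risk set size at each ordered time

H = np.cumsum(d / at_risk)                    # Nelson-Aalen cumulative hazard
na_surv = np.exp(-H)
km_surv = np.cumprod(1.0 - d / at_risk)       # Kaplan-Meier for comparison
for row in zip(t, na_surv, km_surv):
    print("t=%.0f  NA=%.3f  KM=%.3f" % row)
```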

13.
Smooth random effects distribution in a linear mixed model
Ghidey W, Lesaffre E, Eilers P. Biometrics 2004, 60(4):945-953
A linear mixed model with a smooth random effects density is proposed. An approach similar to the P-spline smoothing of Eilers and Marx (1996, Statistical Science 11, 89-121) is applied to yield a more flexible estimate of the random effects density. Our approach differs from theirs in that the B-spline basis functions are replaced by approximating Gaussian densities. Fitting the model involves maximizing a penalized marginal likelihood. The penalty parameters are chosen to minimize Akaike's Information Criterion, employing the results of Gray (1992, Journal of the American Statistical Association 87, 942-951). Although our method is applicable to random effects structures of any dimension, this article explores the two-dimensional case. Our methodology is conceptually simple and relatively easy to fit in practice; we apply it to the cholesterol data first analyzed by Zhang and Davidian (2001, Biometrics 57, 795-802). A simulation study shows that our approach yields almost unbiased estimates of the regression and smoothing parameters in small-sample settings. Consistency of the estimates is shown in a particular case.

14.
This article investigates an augmented inverse selection probability weighted estimator for Cox regression parameter estimation when covariate data are incomplete. The estimator extends the weighted estimator of Horvitz and Thompson (1952, Journal of the American Statistical Association 47, 663-685). It is doubly robust: it is consistent as long as either the selection probability model or the joint distribution of covariates is correctly specified. The augmentation term of the estimating equation depends on the baseline cumulative hazard and on a conditional distribution that can be implemented using an EM-type algorithm. The method is compared with some previously proposed estimators via simulation studies and is applied to a real example.
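A minimal sketch of the Horvitz-Thompson idea the estimator extends (illustrative; the augmentation term and the Cox model itself are not reproduced): complete cases are weighted by inverse estimated selection probabilities to remove selection bias in a simple mean.

```python
# Hedged sketch: inverse selection probability weighting for a mean with
# covariate-dependent missingness. The complete-case average is biased; the
# IPW (Hajek) estimate recovers the truth when the selection model is right.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x_always = rng.normal(size=n)                    # covariate observed on everyone
x_partial = 0.5 * x_always + rng.normal(size=n)  # covariate subject to missingness
mu = x_always + x_partial                        # target: E[mu] (true value 0)

p_obs = 1 / (1 + np.exp(-(0.5 + x_always)))      # selection depends on x_always
r = rng.random(n) < p_obs                        # r = 1: complete case

phat = LogisticRegression().fit(x_always[:, None], r).predict_proba(x_always[:, None])[:, 1]
naive = mu[r].mean()                             # complete-case average, biased
ipw = np.sum(mu[r] / phat[r]) / np.sum(1 / phat[r])
print("complete-case:", round(naive, 3), " IPW:", round(ipw, 3))
```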

15.
Bickel DR. Biometrics 2011, 67(2):363-370
In a novel approach to the multiple testing problem, Efron (2004, Journal of the American Statistical Association 99, 96-104; 2007a, Journal of the American Statistical Association 102, 93-103; 2007b, Annals of Statistics 35, 1351-1377) formulated estimators of the distribution of test statistics or nominal p-values under a null distribution suitable for modeling the data of thousands of unaffected genes, nonassociated single-nucleotide polymorphisms, or other biological features. Estimators of the null distribution can improve not only the empirical Bayes procedure for which they were originally intended but also many other multiple-comparison procedures. In some cases, such estimators also improve the proposed multiple-comparison procedure (MCP), which is based on a recent non-Bayesian framework of minimizing expected loss with respect to a confidence posterior, a probability distribution of confidence levels. The flexibility of that MCP is illustrated with a nonadditive loss function designed for genomic screening rather than for validation. The merit of estimating the null distribution is examined from the vantage point of the confidence-posterior MCP (CPMCP). In a generic simulation study of genome-scale multiple testing, conditioning the observed confidence level on the estimated null distribution as an approximate ancillary statistic markedly improved conditional inference. Simulations specifically of gene expression data, however, indicate that estimation of the null distribution tends to exacerbate the conservative bias that results from modeling heavy-tailed data distributions with the normal family. To enable researchers to determine whether to rely on a particular estimated null distribution for inference or decision making, an information-theoretic score is provided. As the sum of the degree of ancillarity and the degree of inferential relevance, the score reflects the balance that conditioning would strike between the two conflicting terms. The CPMCP and the other methods introduced are applied to gene expression microarray data.
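A crude illustration (much simpler than Efron's central-matching or maximum-likelihood estimators) of fitting an empirical null: estimate the null center and spread from the central bulk of the z-values, where true nulls dominate, instead of assuming N(0, 1). All numbers are simulated for demonstration.

```python
# Hedged sketch: a robust empirical-null fit from the central bulk of
# z-values, then a comparison of how many features each null flags.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
z = np.concatenate([rng.normal(0.1, 1.2, 9500),    # nulls, not exactly N(0,1)
                    rng.normal(3.0, 1.0, 500)])    # affected features

med = np.median(z)                                 # robust null center
iqr = np.subtract(*np.percentile(z, [75, 25]))
sigma0 = iqr / (norm.ppf(0.75) - norm.ppf(0.25))   # robust null SD
print("empirical null: mean %.2f, sd %.2f" % (med, sigma0))

thresh = med + 2.5 * sigma0                        # empirical-null cutoff
print("flagged by N(0,1):", np.sum(z > 2.5),
      " by empirical null:", np.sum(z > thresh))
```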

16.
Freidlin B. Biometrics 1999, 55(1):264-267
By focusing on a confidence interval for a nuisance parameter, Berger and Boos (1994, Journal of the American Statistical Association 89, 1012-1016) proposed new unconditional tests. In particular, they showed that, for a 2 x 2 table, this procedure was generally more powerful than Fisher's exact test. This paper utilizes and extends their approach to obtain unconditional tests for combining several 2 x 2 tables and for testing trend and homogeneity in a 2 x K table. The unconditional procedures are compared with the conditional ones by reanalyzing some published biomedical data.
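A sketch of the Berger-Boos construction for a single 2 x 2 table (an illustrative implementation, not the authors' code): the p-value is the supremum of the exact unconditional p-value over a 100(1 - gamma)% confidence interval for the nuisance success probability, plus gamma.

```python
# Hedged sketch: Berger-Boos unconditional p-value for comparing two binomial
# proportions, with |p1_hat - p2_hat| as the test statistic. The supremum over
# the nuisance parameter runs on a grid over a (1 - gamma) Clopper-Pearson
# interval for the pooled proportion; gamma is added back for validity.
import numpy as np
from scipy.stats import binom, beta

def berger_boos(x1, n1, x2, n2, gamma=0.001, grid=200):
    t_obs = abs(x1 / n1 - x2 / n2)
    x, n = x1 + x2, n1 + n2
    lo = beta.ppf(gamma / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - gamma / 2, x + 1, n - x) if x < n else 1.0
    k1, k2 = np.arange(n1 + 1), np.arange(n2 + 1)
    extreme = np.abs(k1[:, None] / n1 - k2[None, :] / n2) >= t_obs - 1e-12
    best = 0.0
    for pi in np.linspace(lo, hi, grid):           # sup over the nuisance CI
        prob = np.outer(binom.pmf(k1, n1, pi), binom.pmf(k2, n2, pi))
        best = max(best, prob[extreme].sum())
    return best + gamma

print(berger_boos(3, 15, 10, 15))
```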

17.
Decady YJ, Thomas DR. Biometrics 2000, 56(3):893-896
Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.

18.
Kolassa JE, Tanner MA. Biometrics 1999, 55(1):246-251
This article presents an algorithm for approximate frequentist conditional inference on two or more parameters for any regression model in the Generalized Linear Model (GLIM) family. We thereby extend highly accurate inference beyond the cases of logistic regression and contingency tables implemented in commercially available software. The method makes use of the double saddlepoint approximations of Skovgaard (1987, Journal of Applied Probability 24, 875-887) and Jensen (1992, Biometrika 79, 693-703) to the conditional cumulative distribution function of a sufficient statistic given the remaining sufficient statistics. This approximation is then used in conjunction with noniterative Monte Carlo methods to generate a sample from a distribution that approximates the joint distribution of the sufficient statistics associated with the parameters of interest conditional on the observed values of the sufficient statistics associated with the nuisance parameters. This algorithm is an alternative approach to that presented by Kolassa and Tanner (1994, Journal of the American Statistical Association 89, 697-702), in which a Markov chain is generated whose equilibrium distribution under certain regularity conditions approximates the joint distribution of interest. In Kolassa and Tanner (1994), the Gibbs sampler was used in conjunction with these univariate conditional distribution function approximations. The method of this paper does not require the construction and simulation of a Markov chain, thus avoiding the need to develop regularity conditions under which the algorithm converges and the need for the data analyst to check convergence of the particular chain. Examples involving logistic and truncated Poisson regression are presented.

19.
Peng J, Lee CI, Davis KA, Wang W. Biometrics 2008, 64(3):877-885
In dose–response studies, one of the most important issues is the identification of the minimum effective dose (MED), where the MED is defined as the lowest dose such that the mean response is better than the mean response of a zero-dose control by a clinically significant difference. Dose–response curves are sometimes monotonic in nature. To find the MED, various authors have proposed step-down test procedures based on contrasts among the sample means. In this article, we improve upon the method of Marcus and Peritz (1976, Journal of the Royal Statistical Society, Series B 38, 157-165) and implement the dose–response method of Hsu and Berger (1999, Journal of the American Statistical Association 94, 468-482) to construct the lower confidence bound for the difference between the mean response of any nonzero-dose level and that of the control under the monotonicity assumption to identify the MED. The proposed method is illustrated by numerical examples, and simulation studies on power comparisons are presented.
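A simplified sketch of the step-down idea (illustrative: two-sample t lower bounds with pooled degrees of freedom, not the authors' exact construction): starting at the highest dose, compute a one-sided lower confidence bound for mean(dose) - mean(control) and keep stepping down while the bound exceeds the clinically significant difference delta.

```python
# Hedged sketch: step-down MED search via one-sided lower confidence bounds
# on (dose mean - control mean). Monotonicity justifies stopping at the first
# dose whose bound fails to clear delta. Simulated data; simplified bounds.
import numpy as np
from scipy.stats import t as tdist

def med_stepdown(samples, delta, alpha=0.05):
    """samples[0] is the zero-dose control; samples[1:] are increasing doses."""
    ctrl = samples[0]
    med = None
    for i in range(len(samples) - 1, 0, -1):      # highest dose first
        grp = samples[i]
        se = np.sqrt(grp.var(ddof=1) / len(grp) + ctrl.var(ddof=1) / len(ctrl))
        df = len(grp) + len(ctrl) - 2             # pooled df, a simplification
        lower = grp.mean() - ctrl.mean() - tdist.ppf(1 - alpha, df) * se
        if lower > delta:
            med = i                               # dose i still declared effective
        else:
            break                                 # step-down stops here
    return med

rng = np.random.default_rng(5)
means = [0.0, 0.2, 1.5, 2.0, 2.5]                 # control + 4 increasing doses
samples = [rng.normal(m, 1.0, 25) for m in means]
print("estimated MED: dose", med_stepdown(samples, delta=0.5))
```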

20.
The variance component ratio δ = (θ₁ + dθ₂)/θ₃ is of interest in many fields of application. This paper proposes and compares two methods for constructing confidence intervals on δ. The better method is compared with a method proposed by Graybill and Wang (1979, Journal of the American Statistical Association 74, 368-374) for the special problem of constructing an interval on σ²_E/(σ²_A + σ²_B + σ²_E) in a two-fold nested design. An example concerning a heritability study is provided.
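For orientation only (neither the paper's nor Graybill-Wang's interval constructions are reproduced): a method-of-moments point estimate of σ²_E/(σ²_A + σ²_B + σ²_E) from the mean squares of a balanced two-fold nested design.

```python
# Hedged sketch: variance-component point estimates in a balanced two-fold
# nested design (A, B within A, error) via expected mean squares, then the
# ratio sigma2_E / (sigma2_A + sigma2_B + sigma2_E). Simulated data.
import numpy as np

rng = np.random.default_rng(6)
a, b, n = 8, 4, 5                                  # A levels, B per A, replicates
sa, sb, se = 1.0, 0.5, 2.0                         # true variance components
y = (np.sqrt(sa) * rng.normal(size=(a, 1, 1))
     + np.sqrt(sb) * rng.normal(size=(a, b, 1))
     + np.sqrt(se) * rng.normal(size=(a, b, n)))

ybar_ij = y.mean(axis=2)                           # cell means, shape (a, b)
ybar_i = ybar_ij.mean(axis=1)                      # A-level means
msa = b * n * np.sum((ybar_i - ybar_i.mean()) ** 2) / (a - 1)
msb = n * np.sum((ybar_ij - ybar_i[:, None]) ** 2) / (a * (b - 1))
mse = np.sum((y - ybar_ij[:, :, None]) ** 2) / (a * b * (n - 1))

s2e = mse                                          # E[MSE] = sigma2_E
s2b = max((msb - mse) / n, 0.0)                    # E[MSB] = sigma2_E + n*sigma2_B
s2a = max((msa - msb) / (b * n), 0.0)              # E[MSA] adds b*n*sigma2_A
print("estimated ratio:", s2e / (s2a + s2b + s2e))  # true value 2.0/3.5 ~ 0.571
```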
