Similar Articles
20 similar articles found.
1.
Semiparametric smoothing for discrete data
Faddy, M. J.; Jones, M. C. Biometrika, 1998, 85(1): 131-138

2.
Semiparametric regression analysis for clustered failure time data
Cai, T.; Wei, L. J.; Wilcox, M. Biometrika, 2000, 87(4): 867-878

3.
Yu, Z.; Lin, X.; Tu, W. Biometrics, 2012, 68(2): 429-436
We consider frailty models with additive semiparametric covariate effects for clustered failure time data. We propose a doubly penalized partial likelihood (DPPL) procedure to estimate the nonparametric functions using smoothing splines. We show that the DPPL estimators can be obtained by fitting an augmented working frailty model with parametric covariate effects, with the nonparametric functions estimated as linear combinations of fixed and random effects and the smoothing parameters estimated as extra variance components. This approach allows us to estimate all model components conveniently within a unified frailty model framework. We evaluate the finite-sample performance of the proposed method via a simulation study, and apply the method to analyze data from a study of sexually transmitted infections (STI).
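The mixed-model device the abstract relies on can be illustrated outside the frailty setting. The Python sketch below (illustrative names, Gaussian response rather than the penalized partial likelihood) shows how a penalized spline is written as fixed plus random effects, with the smoothing parameter entering as a variance ratio that could be estimated as an extra variance component; it is a minimal sketch of the representation, not the authors' DPPL procedure.

import numpy as np

def truncated_power_design(x, n_knots=10):
    # Mixed-model representation of a penalized (smoothing) spline:
    #   f(x) = beta0 + beta1*x + sum_k u_k * (x - kappa_k)_+ ,  u_k ~ N(0, sigma_u^2).
    # The smoothing parameter corresponds to lam = sigma^2 / sigma_u^2, i.e. an
    # extra variance component in the augmented working model.
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x])           # fixed-effect design
    Z = np.maximum(x[:, None] - knots[None, :], 0.0)    # random-effect design
    return X, Z

def fit_penalized_spline(x, y, lam):
    # Ridge/BLUP fit for a given variance ratio lam (Gaussian response only;
    # the paper instead maximizes a doubly penalized partial likelihood).
    X, Z = truncated_power_design(x)
    C = np.column_stack([X, Z])
    penalty = np.diag([0.0] * X.shape[1] + [lam] * Z.shape[1])  # penalize only u
    coef = np.linalg.solve(C.T @ C + penalty, C.T @ y)
    return C @ coef

# toy illustration
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)
fitted = fit_penalized_spline(x, y, lam=1.0)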

4.
5.
6.
Statistical analysis of longitudinal data often involves modeling treatment effects on clinically relevant longitudinal biomarkers measured from an initial event (the time origin). In some studies, including preventive HIV vaccine efficacy trials, some participants have biomarkers measured starting at the time origin, whereas others have biomarkers measured starting later, with the time origin unknown. The semiparametric additive time-varying coefficient model is investigated, in which the effects of some covariates vary nonparametrically with time while the effects of others remain constant. Weighted profile least squares estimators coupled with kernel smoothing are developed. The method uses an expectation-maximization approach to deal with the censored time origin. The Kaplan–Meier estimator and other failure time regression models, such as the Cox model, can be used to estimate the distribution and the conditional distribution of the left-censored event time related to the censored time origin. Asymptotic properties of the parametric and nonparametric estimators and consistent asymptotic variance estimators are derived. A two-stage estimation procedure for choosing the weights is proposed to improve estimation efficiency. Numerical simulations are conducted to examine the finite-sample properties of the proposed estimators, and the results show that the theory and methods work well. The efficiency gain of the two-stage procedure depends on the distribution of the longitudinal error processes. The method is applied to data from the Merck 023/HVTN 502 Step HIV vaccine study.
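As a rough illustration of the kernel-smoothing ingredient only (not the paper's weighted profile least-squares estimator, and without the EM step for the censored time origin), the following Python sketch estimates a time-varying coefficient vector by kernel-weighted least squares on a grid of time points; the function names and the Epanechnikov kernel are assumptions for illustration.

import numpy as np

def local_ls_coefficients(obs_times, X, Y, grid, bandwidth):
    # Local-constant, kernel-weighted least squares for Y_i = X_i' beta(t_i) + e_i.
    # At each grid point t0, observations are weighted by an Epanechnikov kernel
    # in |t_i - t0| and beta(t0) is the weighted least-squares solution.
    def epanechnikov(u):
        return 0.75 * np.maximum(1.0 - u ** 2, 0.0)

    betas = []
    for t0 in grid:
        w = np.sqrt(epanechnikov((obs_times - t0) / bandwidth))
        beta, *_ = np.linalg.lstsq(w[:, None] * X, w * Y, rcond=None)
        betas.append(beta)
    return np.array(betas)   # row j holds beta(grid[j])

# toy illustration
rng = np.random.default_rng(1)
obs_times = np.linspace(0.0, 1.0, 100)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
Y = 1.0 + np.sin(obs_times) * X[:, 1] + rng.normal(scale=0.1, size=100)
beta_hat = local_ls_coefficients(obs_times, X, Y, np.linspace(0.1, 0.9, 9), bandwidth=0.2)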

7.
We present an approach for analyzing internal dependencies in counting processes. This covers the case with repeated events on each of a number of individuals, and more generally, the situation where several processes are observed for each individual. We define dynamic covariates, i.e., covariates depending on the past of the processes. The statistical analysis is performed mainly by the nonparametric additive approach. This yields a method for analyzing multivariate survival data, which is an alternative to the frailty approach. We present cumulative regression plots, statistical tests, residual plots, and a hat matrix plot for studying outliers. A program in R and S-PLUS for analyzing survival data with the additive regression model is available on the web site http://www.med.uio.no/imb/stat/addreg. The program has been developed to fit the counting process framework.
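For readers who want to see the core computation behind the additive (Aalen-type) regression approach, here is a minimal Python sketch of the ordinary least-squares increments of the cumulative regression functions. It assumes untied event times and static covariates (dynamic covariates would simply enter as columns recomputed at each event time), so it illustrates the framework rather than reproducing the addreg program.

import numpy as np

def aalen_cumulative_coefficients(time, event, X):
    # Additive hazard model: lambda_i(t) = x_i(t)' beta(t).  At each event time,
    # the increment of the cumulative coefficient B(t) = integral of beta(s) ds
    # is the least-squares regression of the counting-process jumps dN on the
    # covariates of subjects still at risk.  X should contain a column of ones
    # for the baseline; tied event times are assumed away for simplicity.
    order = np.argsort(time)
    time, event, X = time[order], event[order], X[order]
    cum = np.zeros(X.shape[1])
    times_out, B_out = [], []
    for j in range(len(time)):
        if not event[j]:
            continue
        risk_idx = np.flatnonzero(time >= time[j])       # risk set at time[j]
        dN = (risk_idx == j).astype(float)               # jump of subject j only
        incr, *_ = np.linalg.lstsq(X[risk_idx], dN, rcond=None)
        cum = cum + incr
        times_out.append(time[j])
        B_out.append(cum.copy())
    return np.array(times_out), np.array(B_out)          # data for cumulative regression plots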

8.
Roark, D. E. Biophysical Chemistry, 2004, 108(1-3): 121-126
Biophysical chemistry experiments, such as sedimentation-equilibrium analyses, require computational techniques to reduce the effects of random errors in the measurement process. Existing approaches have relied primarily on polynomial models and least-squares approximation. By constraining the data to remove random fluctuations, such models may distort the data and cause loss of information: the better the removal of random errors, the greater the likely introduction of systematic errors through the constraining fit itself. An alternative technique, reverse smoothing, is suggested that uses a more model-free approach of exponentially smoothing the first derivative. Exponential smoothing approaches have generally been unsatisfactory because they introduce significant data lag. The approach given here compensates for the lag defect and appears promising for smoothing many experimental data sequences, including the macromolecular concentration data generated by sedimentation-equilibrium experiments. Test results on simulated sedimentation-equilibrium data indicate that a 4-fold reduction in error may be typical relative to standard analysis techniques.
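One plausible reading of the lag-compensation idea is sketched below in Python with illustrative names and parameter values: exponentially smooth the numerical first derivative once forward and once backward so the two phase lags cancel, then re-integrate to recover a smoothed concentration profile. This is a generic sketch of lag-compensated exponential smoothing, not necessarily Roark's exact formulation.

import numpy as np

def ema(x, alpha):
    # Ordinary exponential smoothing; a single pass lags behind the signal.
    y = np.empty(len(x))
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1]
    return y

def lag_compensated_smooth(r, c, alpha=0.2):
    # Smooth concentration data c(r) through its first derivative: a forward
    # and a backward exponential-smoothing pass over dc/dr are averaged so the
    # lags cancel, and the result is re-integrated by the trapezoidal rule.
    dcdr = np.gradient(c, r)
    smoothed = 0.5 * (ema(dcdr, alpha) + ema(dcdr[::-1], alpha)[::-1])
    increments = 0.5 * (smoothed[1:] + smoothed[:-1]) * np.diff(r)
    return c[0] + np.concatenate(([0.0], np.cumsum(increments)))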

9.
Semiparametric regression for count data

10.
Semiparametric regression for clustered data
Lin, Xihong; Carroll, Raymond J. Biometrika, 2001, 88(4): 1179-1185

11.
12.
Variable selection for multivariate failure time data

13.
Semiparametric analysis of zero-inflated count data
Lam, K. F.; Xue, H.; Cheung, Y. B. Biometrics, 2006, 62(4): 996-1003
Medical and public health research often involves the analysis of count data that exhibit a substantially large proportion of zeros, such as the number of heart attacks or the number of days of missed primary activities in a given period. A zero-inflated Poisson regression model, which hypothesizes a two-point heterogeneity in the population characterized by a binary random effect, is generally used to model such data. Subjects are broadly categorized into a low-risk group, which gives rise to structural zero counts, and a high-risk (or normal) group, whose counts can be modeled by a Poisson regression model. The main aim is to identify the explanatory variables that have significant effects on (i) the probability that a subject belongs to the low-risk group, by means of a logistic regression formulation; and (ii) the magnitude of the counts given that a subject belongs to the high-risk group, by means of a Poisson regression in which the covariate effects are assumed to be linearly related to the natural logarithm of the mean count. In this article we consider a semiparametric zero-inflated Poisson regression model that postulates a possibly nonlinear relationship between the natural logarithm of the mean count and a particular covariate. A sieve maximum likelihood estimation method is proposed. Asymptotic properties of the proposed sieve maximum likelihood estimators are discussed; under some mild conditions, the estimators are shown to be asymptotically efficient and normally distributed. Simulation studies were carried out to investigate the performance of the proposed method. For illustration, the method is applied to a data set from a public health survey conducted in Indonesia, where the variable of interest is the number of days of missed primary activities due to illness in a 4-week period.
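To make the sieve idea concrete, the Python sketch below writes the zero-inflated Poisson log-likelihood with the nonlinear covariate effect replaced by a finite spline basis and maximizes it numerically. The basis choice, starting values, and function names are assumptions for illustration; the paper's sieve construction and efficiency theory are not reproduced here.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def spline_basis(w, n_knots=5):
    # Truncated-power cubic basis standing in for the sieve space of s(w).
    knots = np.quantile(w, np.linspace(0, 1, n_knots + 2)[1:-1])
    return np.column_stack([w, w ** 2, w ** 3,
                            np.maximum(w[:, None] - knots[None, :], 0.0) ** 3])

def zip_negloglik(theta, y, Z, X):
    # Zero-inflated Poisson: logit P(low risk) = Z @ gamma, log mean = X @ beta.
    gamma, beta = theta[:Z.shape[1]], theta[Z.shape[1]:]
    p = expit(Z @ gamma)                     # probability of a structural zero
    eta = X @ beta                           # log mean count in the high-risk group
    lam = np.exp(eta)
    ll_zero = np.log(p + (1.0 - p) * np.exp(-lam))
    ll_pos = np.log1p(-p) - lam + y * eta - gammaln(y + 1.0)
    return -np.sum(np.where(y == 0, ll_zero, ll_pos))

def fit_semiparametric_zip(y, Z, x_linear, w):
    # Sieve-style fit: the covariate w enters log(mean) through the spline basis;
    # Z should already contain an intercept column for the zero-inflation part.
    X = np.column_stack([np.ones_like(w), x_linear, spline_basis(w)])
    theta0 = np.zeros(Z.shape[1] + X.shape[1])
    return minimize(zip_negloglik, theta0, args=(y, Z, X), method="BFGS")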

14.
Joint analysis of recurrent and nonrecurrent terminal events has attracted substantial attention in the literature. However, formal methodology for such analysis is lacking when the event time data are on discrete scales, even though some modeling and inference strategies have been developed for discrete-time survival analysis. We propose a discrete-time joint modeling approach for the analysis of recurrent and terminal events in which the two types of events may be correlated with each other. The proposed joint model assumes a shared frailty to account for the dependence among recurrent events and between the recurrent and terminal events. It also allows for time-dependent covariates and rich families of transformation models for the recurrent and terminal events. A major advantage of our approach is that it assumes neither a distribution for the frailty nor a Poisson process for the recurrent events. The utility of the proposed analysis is illustrated by simulation studies and two real applications: the application to the biochemists' rank promotion data jointly analyzes the biochemists' citation numbers and times to rank promotion, and the application to the scleroderma lung study data jointly analyzes the adverse events and off-drug time among patients with symptomatic scleroderma-related interstitial lung disease.
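The discrete-time building block referred to here can be illustrated with the usual person-period expansion, a standard construction rather than the authors' joint frailty/transformation model: each subject contributes one record per discrete period at risk, and the discrete hazard is fitted by an ordinary binary regression. The Python sketch below uses a logit link via statsmodels; the toy data shapes and names are assumptions.

import numpy as np
import statsmodels.api as sm

def person_period(last_period, event, X):
    # Expand subject-level discrete survival data into person-period records.
    # Subject i contributes rows for periods t = 1..last_period[i]; the binary
    # response is 1 only in the last period and only if the event occurred.
    rows, resp = [], []
    for i in range(len(last_period)):
        for t in range(1, int(last_period[i]) + 1):
            rows.append(np.concatenate(([1.0, float(t)], X[i])))  # intercept, period, covariates
            resp.append(1.0 if (event[i] and t == last_period[i]) else 0.0)
    return np.array(rows), np.array(resp)

# Toy illustration: a logit discrete-time hazard model (a complementary
# log-log link would instead give a grouped proportional-hazards model).
rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 1))
last_period = rng.integers(1, 8, size=n)
event = rng.integers(0, 2, size=n).astype(bool)
design, y = person_period(last_period, event, X)
hazard_fit = sm.Logit(y, design).fit(disp=False)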

15.
We develop methods for competing risks analysis when individual event times are correlated within clusters. Clustering arises naturally in clinical genetic studies and other settings. We develop a nonparametric estimator of cumulative incidence, and obtain robust pointwise standard errors that account for within-cluster correlation. We modify the two-sample Gray and Pepe–Mori tests for correlated competing risks data, and propose a simple two-sample test of the difference in cumulative incidence at a landmark time. In simulation studies, our estimators are asymptotically unbiased, and the modified test statistics control the type I error. The power of the respective two-sample tests is differentially sensitive to the degree of correlation; the optimal test depends on the alternative hypothesis of interest and the within-cluster correlation. For purposes of illustration, we apply our methods to a family-based prospective cohort study of hereditary breast/ovarian cancer families. For women with BRCA1 mutations, we estimate the cumulative incidence of breast cancer in the presence of competing mortality from ovarian cancer, accounting for significant within-family correlation.
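The point estimate in question is the standard nonparametric cumulative incidence function for competing risks; according to the abstract, the clustering affects the standard errors rather than the estimator itself. A minimal Python sketch, with assumed argument names and the convention that cause 0 denotes censoring:

import numpy as np

def cumulative_incidence(time, cause, target_cause):
    # Nonparametric cumulative incidence for competing risks:
    #   CIF_k(t) = sum over event times s <= t of  S(s-) * d_k(s) / n_at_risk(s),
    # where S is the all-cause Kaplan-Meier survivor function and cause == 0
    # denotes censoring.  Within-cluster correlation changes only the standard
    # errors, which are not computed in this sketch.
    order = np.argsort(time)
    time, cause = time[order], cause[order]
    surv, cif = 1.0, 0.0
    out_times, out_cif = [], []
    for s in np.unique(time):
        at_risk = np.sum(time >= s)
        d_all = np.sum((time == s) & (cause > 0))
        d_target = np.sum((time == s) & (cause == target_cause))
        cif += surv * d_target / at_risk
        surv *= 1.0 - d_all / at_risk
        out_times.append(s)
        out_cif.append(cif)
    return np.array(out_times), np.array(out_cif)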

16.
Goetghebeur, E.; Ryan, L. Biometrics, 2000, 56(4): 1139-1144
We propose a semiparametric approach to the proportional hazards regression analysis of interval-censored data. An EM algorithm based on an approximate likelihood leads to an M-step that involves maximizing a standard Cox partial likelihood to estimate regression coefficients and then using the Breslow estimator for the unknown baseline hazards. The E-step takes a particularly simple form because all incomplete data appear as linear terms in the complete-data log likelihood. The algorithm of Turnbull (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) is used to determine times at which the hazard can take positive mass. We found multiple imputation to yield an easily computed variance estimate that appears to be more reliable than asymptotic methods with small to moderately sized data sets. In the right-censored survival setting, the approach reduces to the standard Cox proportional hazards analysis, while the algorithm reduces to the one suggested by Clayton and Cuzick (1985, Applied Statistics 34, 148-156). The method is illustrated on data from the breast cancer cosmetics trial, previously analyzed by Finkelstein (1986, Biometrics 42, 845-854) and several subsequent authors.
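The Turnbull step mentioned here, which determines where positive mass can sit, can be sketched as a self-consistency (EM) iteration over candidate support points. The Python version below is a simplified illustration using the observed interval endpoints as candidates; it is not the full EM-plus-Cox procedure of the paper.

import numpy as np

def turnbull_self_consistency(left, right, n_iter=1000, tol=1e-8):
    # NPMLE for interval-censored data: each failure time is only known to lie
    # in (left_i, right_i].  Using the observed endpoints as candidate support
    # points, iterate
    #     p_j <- (1/n) * sum_i alpha_ij * p_j / sum_k alpha_ik * p_k,
    # where alpha_ij indicates that candidate point j falls in subject i's
    # interval.  Points whose mass converges to zero carry no probability; the
    # survivors are the times at which the distribution (or baseline hazard)
    # can place positive mass.  Right-censored subjects can be handled by a
    # large finite right endpoint.
    support = np.unique(np.concatenate([left, right]))
    alpha = (support[None, :] > left[:, None]) & (support[None, :] <= right[:, None])
    p = np.full(support.size, 1.0 / support.size)
    for _ in range(n_iter):
        denom = alpha @ p                                   # current P(L_i < T <= R_i)
        p_new = (alpha * p[None, :] / denom[:, None]).mean(axis=0)
        if np.max(np.abs(p_new - p)) < tol:
            return support, p_new
        p = p_new
    return support, p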

17.
Additive hazards model with multivariate failure time data
Yin, Guosheng; Cai, Jianwen. Biometrika, 2004, 91(4): 801-818

18.
Yin, G.; Cai, J. Biometrics, 2005, 61(1): 151-161
As an alternative to the mean regression model, the quantile regression model has been studied extensively with independent failure time data. However, due to natural or artificial clustering, it is common to encounter multivariate failure time data in biomedical research, where the intracluster correlation needs to be accounted for appropriately. For right-censored correlated survival data, we investigate the quantile regression model and adapt an estimating equation approach for parameter estimation under the working independence assumption, as well as a weighted version for enhancing efficiency. We show that the parameter estimates are consistent and asymptotically normally distributed. Variance estimation based on the asymptotic approximation involves nonparametric density estimation, so we employ bootstrap and perturbation resampling methods to estimate the variance-covariance matrix. We examine the proposed method in finite samples through simulation studies and illustrate it with data from a clinical trial on otitis media.
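The resampling component is easy to illustrate in isolation. Below is a minimal Python sketch of a cluster bootstrap for the variance-covariance matrix: whole clusters are resampled with replacement so the intracluster correlation is preserved, and the estimator is refit on each resample. The estimator argument is a placeholder; the paper's censored quantile-regression estimating equations and its perturbation-resampling alternative are not implemented here.

import numpy as np

def cluster_bootstrap_cov(clusters, estimator, n_boot=500, seed=0):
    # `clusters` is a list with one data object per cluster; `estimator` maps a
    # list of clusters to a parameter vector (here left abstract).  Resampling
    # clusters rather than individual subjects keeps the within-cluster
    # dependence intact, and the empirical covariance of the replicates
    # estimates the variance-covariance matrix of the estimator.
    rng = np.random.default_rng(seed)
    replicates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clusters), size=len(clusters))
        replicates.append(estimator([clusters[i] for i in idx]))
    return np.cov(np.asarray(replicates), rowvar=False)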

19.
20.
Analysis of failure time data with dependent interval censoring
This article develops a method for the analysis of screening data in which the chance of being screened depends on the event of interest (informative censoring). Because not all subjects make all screening visits, the failure time of interest is interval-censored. We propose a model that properly adjusts for this dependence to obtain an unbiased estimate of the nonparametric failure time distribution, and we provide an extension for estimating the regression parameters of a (discrete-time) proportional hazards regression model. The method is applied to a data set from an observational study of cytomegalovirus shedding in a population of HIV-infected subjects who participated in a trial conducted by the AIDS Clinical Trials Group.
