Similar articles
20 similar articles found.
2.
L. Xue, L. Wang, A. Qu. Biometrics 2010, 66(2): 393-404
Summary: We propose a new estimation method for multivariate failure time data using the quadratic inference function (QIF) approach. The proposed method efficiently incorporates within-cluster correlations and is therefore more efficient than methods that ignore them. Furthermore, it is easy to implement: unlike the weighted estimating equations of Cai and Prentice (1995, Biometrika 82, 151-164), it does not require explicit estimation of the correlation parameters. This simplification is particularly useful in analyzing data with large cluster sizes, where intracluster correlation is difficult to estimate. Under certain regularity conditions, we show the consistency and asymptotic normality of the proposed QIF estimators. A chi-squared test is also developed for hypothesis testing. We conduct extensive Monte Carlo simulation studies to assess the finite-sample performance of the proposed methods, and illustrate them by analyzing primary biliary cirrhosis (PBC) data.

4.
Kernel smoothing is a popular approach to estimating relative risk surfaces from data on the locations of cases and controls in geographical epidemiology. The interpretation of such surfaces is facilitated by plotting tolerance contours, which highlight areas where the risk is sufficiently high to reject the null hypothesis of unit relative risk. It has previously been recommended that these tolerance intervals be calculated using Monte Carlo randomization tests. We examine a computationally cheap alternative whereby the tolerance intervals are derived from asymptotic theory. We also examine the performance of global tests of heterogeneous risk employing statistics based on kernel risk surfaces, paying particular attention to the effect of the choice of smoothing parameters on test power.
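The kernel relative-risk estimate described above can be sketched with SciPy's `gaussian_kde`: estimate case and control densities, take the log ratio, and flag grid points exceeding a Monte Carlo randomization threshold. Everything here (the simulated locations, grid, and permutation count) is a hypothetical illustration, and the threshold is the Monte Carlo variant rather than the asymptotic tolerance contours the paper derives.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical locations: controls uniform over a square, cases clustered at the origin
controls = rng.uniform(-1, 1, size=(2, 300))
cases = rng.normal(0.0, 0.3, size=(2, 200))

def log_relative_risk(cases, controls, grid):
    """Log ratio of case to control kernel density estimates on a grid."""
    return np.log(gaussian_kde(cases)(grid)) - np.log(gaussian_kde(controls)(grid))

xs = np.linspace(-1, 1, 21)
grid = np.vstack([np.repeat(xs, 21), np.tile(xs, 21)])  # 21 x 21 evaluation grid
r = log_relative_risk(cases, controls, grid)

# Monte Carlo randomization: permute case/control labels to approximate the
# null distribution of the maximum log relative risk over the grid
pooled = np.hstack([cases, controls])
n_cases = cases.shape[1]
null_max = []
for _ in range(20):  # a small number of permutations, for illustration only
    idx = rng.permutation(pooled.shape[1])
    r0 = log_relative_risk(pooled[:, idx[:n_cases]], pooled[:, idx[n_cases:]], grid)
    null_max.append(r0.max())
threshold = np.quantile(null_max, 0.95)
elevated = r > threshold  # grid points flagged as elevated risk
```

In practice one would use far more permutations; the contour of `elevated` traced over the grid is the tolerance contour being discussed.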

5.
Scores for the analysis of distinct failure time data using ranks are modified to deal with tied failure times or grouped data. Approximate techniques for inference for regression parameters are considered for ‘proportional odds’ and ‘proportional hazards’ survival data involving tied failures or grouped data. An illustration is given involving a two sample problem.

6.
Summary: In this article, we propose a positive stable shared frailty Cox model for clustered failure time data in which the frailty distribution varies with cluster-level covariates. The proposed model accounts for covariate-dependent intracluster correlation and permits both conditional and marginal inferences. We obtain marginal inference directly from a marginal model, then use a stratified Cox-type pseudo-partial likelihood approach to estimate the regression coefficient for the frailty parameter. The proposed estimators are consistent and asymptotically normal, and a consistent estimator of the covariance matrix is provided. Simulation studies show that the proposed estimation procedure is appropriate for practical use with a realistic number of clusters. Finally, we present an application of the proposed method to kidney transplantation data from the Scientific Registry of Transplant Recipients.

7.
Multivariate binary discrimination by the kernel method
J. Aitchison, C. G. G. Aitken. Biometrika 1976, 63(3): 413-420
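The kernel in the cited paper weights each binary coordinate by λ on a match and 1 − λ on a mismatch (1/2 ≤ λ ≤ 1), giving a density estimate for multivariate binary data. A minimal sketch of a discriminant rule built on it, with entirely hypothetical training data:

```python
import numpy as np

def aa_kernel_density(x, data, lam=0.8):
    """Aitchison-Aitken kernel density estimate at a binary vector x.

    Each of the d coordinates contributes lam if it matches a training
    point and (1 - lam) if it does not, with 0.5 <= lam <= 1.
    """
    d = data.shape[1]
    mismatches = (data != x).sum(axis=1)  # Hamming distance to each training point
    return np.mean(lam ** (d - mismatches) * (1 - lam) ** mismatches)

def classify(x, class0, class1, lam=0.8):
    """Assign x to the class with the larger kernel density estimate."""
    return int(aa_kernel_density(x, class1, lam) > aa_kernel_density(x, class0, lam))

rng = np.random.default_rng(1)
# Hypothetical training data: class 0 is mostly zeros, class 1 mostly ones
class0 = (rng.random((50, 5)) < 0.2).astype(int)
class1 = (rng.random((50, 5)) < 0.8).astype(int)

pred_zeros = classify(np.zeros(5, dtype=int), class0, class1)  # -> 0
pred_ones = classify(np.ones(5, dtype=int), class0, class1)    # -> 1
```

In the paper λ is chosen from the data; here it is fixed at an illustrative value.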

8.
The paper deals with discrete-time regression models for analyzing multistate, multiepisode failure time data. The covariate process may include fixed and external as well as internal time-dependent covariates, and the effects of the covariates may differ among kinds of failure and among successive episodes. A dynamic form of the logistic regression model is investigated, and maximum likelihood estimation of the regression coefficients is discussed. In the last section we give an application of the model to the analysis of survival time after breast cancer surgery.
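In a single-state simplification of the discrete-time logistic hazard model above, each subject is expanded into one record per period at risk and a logistic regression is fit to the per-period failure indicators. A numpy sketch with simulated data (the covariate, hazard values, and sample size are hypothetical, and the multistate/multiepisode structure of the paper is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: one binary covariate z; discrete-time hazard
# h(t | z) = logistic(-2 + 1*z), constant over periods
n, t_max = 400, 8
z = rng.integers(0, 2, n)
hazard = 1.0 / (1.0 + np.exp(-(-2.0 + 1.0 * z)))
T = rng.geometric(hazard)          # failure period under a constant hazard
event = T <= t_max                 # observed failure vs censoring at t_max
T = np.minimum(T, t_max)

# Person-period expansion: one record per subject per period at risk,
# with a binary response indicating failure in that period
rows, resp = [], []
for i in range(n):
    for t in range(1, T[i] + 1):
        rows.append([1.0, float(z[i])])
        resp.append(1.0 if (event[i] and t == T[i]) else 0.0)
X, y = np.array(rows), np.array(resp)

# Maximum likelihood for the logistic model by Newton-Raphson
beta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)
    beta += np.linalg.solve((X * w[:, None]).T @ X, X.T @ (y - p))
# beta should roughly recover the true (-2, 1)
```

Time-dependent covariates fit naturally here: each person-period row simply carries the covariate value current in that period.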

10.
Quantiles and their estimation underlie numerous statistical problems. The paper presents adaptive recursive estimation methods for this parameter, investigates their specific properties, and demonstrates applications in computer-assisted analysis of biological signals, including under real-time constraints.
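A standard recursive quantile estimator in the stochastic-approximation (Robbins-Monro) style illustrates why such methods suit real-time signal analysis: each new sample updates the estimate in O(1) time and memory. The step constant and simulated signal are illustrative assumptions, not the paper's specific scheme:

```python
import numpy as np

def recursive_quantile(stream, tau, c=2.0, q0=0.0):
    """Robbins-Monro style recursive estimate of the tau-quantile.

    After the n-th observation x, the estimate moves up by (c/n)*tau
    if x > q and down by (c/n)*(1 - tau) otherwise, so no past samples
    need to be stored - suitable for streaming biological signals.
    """
    q = q0
    for n, x in enumerate(stream, start=1):
        q += (c / n) * (tau - (x <= q))
        yield q

rng = np.random.default_rng(3)
signal = rng.normal(0.0, 1.0, 50_000)  # hypothetical stationary signal
estimates = list(recursive_quantile(signal, tau=0.5))
final = estimates[-1]  # should approach the true median, 0
```

Adaptive variants replace the fixed step constant `c` with a data-driven sequence to improve the convergence rate.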

11.
Summary: As biological studies become more expensive to conduct, statistical methods that take advantage of existing auxiliary information about an expensive exposure variable are desirable in practice. Such methods should improve study efficiency and increase statistical power for a given number of assays. In this article, we consider an inference procedure for multivariate failure time data with auxiliary covariate information. We propose an estimated pseudo-partial likelihood estimator under the marginal hazard model framework and develop the asymptotic properties of the proposed estimator. We conduct simulation studies to evaluate the performance of the proposed method in practical situations and demonstrate it with a data set from the Studies of Left Ventricular Dysfunction (SOLVD Investigators, 1991, New England Journal of Medicine 325, 293-302).

12.
Reducing variability of crossvalidation for smoothing-parameter choice
One of the attractions of crossvalidation, as a tool for smoothing-parameter choice, is its applicability to a wide variety of estimator types and contexts. However, its detractors comment adversely on the relatively high variance of crossvalidatory smoothing parameters, noting that this compromises the performance of the estimators in which those parameters are used. We show that the variability can be reduced simply, significantly and reliably by employing bootstrap aggregation or bagging. We establish that in theory, when bagging is implemented using an adaptively chosen resample size, the variability of crossvalidation can be reduced by an order of magnitude. However, it is arguably more attractive to use a simpler approach, based for example on half-sample bagging, which can reduce variability by approximately 50%.
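Half-sample bagging of a crossvalidatory bandwidth can be sketched as follows, assuming a 1-D Gaussian kernel density estimate and least-squares crossvalidation over a fixed bandwidth grid. The grid, resample count, and n^(-1/5) rescaling are illustrative choices, not the paper's adaptive procedure:

```python
import numpy as np

def _phi(u, s):
    """Gaussian kernel with scale s."""
    return np.exp(-0.5 * (u / s) ** 2) / (s * np.sqrt(2 * np.pi))

def lscv_bandwidth(x, grid):
    """Least-squares crossvalidation bandwidth for a 1-D Gaussian KDE."""
    n = len(x)
    d = x[:, None] - x[None, :]
    scores = []
    for h in grid:
        int_f2 = _phi(d, h * np.sqrt(2)).sum() / n**2       # integral of fhat^2
        k = _phi(d, h)
        loo = (k.sum(axis=1) - _phi(0.0, h)) / (n - 1)      # leave-one-out fhat(x_i)
        scores.append(int_f2 - 2 * loo.mean())
    return grid[int(np.argmin(scores))]

def bagged_bandwidth(x, grid, n_bags=10, frac=0.5, rng=None):
    """Half-sample bagging: choose the CV bandwidth on each half-sample,
    rescale for sample size (h scales like n^(-1/5)), and average."""
    if rng is None:
        rng = np.random.default_rng(0)
    m = int(frac * len(x))
    hs = [lscv_bandwidth(rng.choice(x, m, replace=False), grid) * (m / len(x)) ** 0.2
          for _ in range(n_bags)]
    return float(np.mean(hs))

rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 200)         # hypothetical sample
grid = np.linspace(0.05, 1.0, 20)     # candidate bandwidths
h_cv = lscv_bandwidth(x, grid)        # plain crossvalidation
h_bag = bagged_bandwidth(x, grid)     # bagged crossvalidation
```

Averaging over resamples is what smooths out the notoriously jumpy LSCV criterion; the paper's adaptive resample size refines this further.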

13.
Summary: Quantile regression, which models the conditional quantiles of the response variable given covariates, usually assumes a linear model. This kind of linearity is often unrealistic in practice; one situation where linear quantile regression is not appropriate is when the response variable is piecewise linear but still continuous in the covariates. To analyze such data, we propose a bent line quantile regression model. We derive its parameter estimates, prove that they are asymptotically valid given the existence of a change-point, and discuss several methods for testing the existence of a change-point in bent line quantile regression, together with a power comparison by simulation. An example of maximal running speeds of land mammals illustrates an application of bent line quantile regression in which this model is theoretically justified and its parameters are of direct biological interest.
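One simple way to fit such a bent line is to profile over candidate change-points: for each candidate t, fit a linear quantile regression with a hinge term (x - t)+ via its linear-programming formulation, then keep the t with the smallest check loss. This grid-profiling sketch and its simulated data are illustrative assumptions; the paper's estimation and testing procedures are more refined:

```python
import numpy as np
from scipy.optimize import linprog

def quantile_fit(X, y, tau):
    """Linear quantile regression via its LP form: minimize the check loss
    sum(tau*u_pos + (1-tau)*u_neg) subject to y = X @ beta + u_pos - u_neg."""
    n, p = X.shape
    c = np.r_[np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)]
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p], res.fun

def bent_line_qr(x, y, tau=0.5, candidates=None):
    """Profile the check loss over candidate change-points t, fitting
    y ~ b0 + b1*x + b2*(x - t)+ at each, and keep the best t."""
    if candidates is None:
        candidates = np.quantile(x, np.linspace(0.1, 0.9, 17))
    best = None
    for t in candidates:
        X = np.column_stack([np.ones_like(x), x, np.clip(x - t, 0.0, None)])
        beta, loss = quantile_fit(X, y, tau)
        if best is None or loss < best[2]:
            best = (t, beta, loss)
    return best  # (change-point, coefficients, check loss)

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 150)
# Hypothetical piecewise-linear median: slope 2 before x = 5, slope 0.5 after
y = 1 + 2 * x - 1.5 * np.clip(x - 5, 0.0, None) + rng.normal(0, 0.5, 150)
t_hat, beta_hat, _ = bent_line_qr(x, y)
```

The fitted slope after the change-point is `beta_hat[1] + beta_hat[2]`, which keeps the line continuous at t by construction.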

16.
Yingcun Xia. Biometrika 2009, 96(1): 133-148
Lack-of-fit checking for parametric and semiparametric models is essential in reducing misspecification. The efficiency of most existing model-checking methods drops rapidly as the dimension of the covariates increases. We propose to check a model by projecting the fitted residuals along a direction that adapts to the systematic departure of the residuals from the desired pattern. Consistency of the method is proved for parametric and semiparametric regression models. A bootstrap implementation is also discussed. Simulation comparisons with several existing methods suggest that the proposed methods are more efficient than the existing ones when the dimension increases. Air pollution data from Chicago are used to illustrate the procedure.

17.
Small area estimation with M-quantile models was proposed by Chambers and Tzavidis (2006). The key target of this approach is to obtain reliable and outlier-robust estimates while avoiding the need for strong parametric assumptions. This approach, however, does not allow for the use of unit-level survey weights, calling into question the design consistency of the estimators unless the sampling design is self-weighting within small areas. In this paper, we adopt a model-assisted approach and construct design-consistent small area estimators based on the M-quantile small area model. Analytic and bootstrap estimators of the design-based variance are discussed. The proposed estimators are empirically evaluated in the presence of complex sampling designs.

18.
Due to the advent of high-throughput genomic technology, it has become possible to monitor cellular activities on a genomewide basis. With these new methods, scientists can begin to address important biological questions. One such question involves the identification of replication origins, which are regions in the chromosomes where DNA replication is initiated. One hypothesis is that their locations are nonrandom throughout the genome. In this article, we analyze data from a recent yeast study in which candidate replication origins were profiled using cDNA microarrays to test this hypothesis. We find no evidence for such clustering.
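As an illustration of the kind of randomness test involved, origin positions along a chromosome can be compared with complete spatial randomness: positions uniform along the chromosome, gaps between successive origins approximately exponential, with clustering showing up as an excess of short gaps. This sketch uses simulated (hypothetical) positions, not the yeast data or the paper's actual test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

L = 1_000_000                               # hypothetical 1 Mb chromosome
origins = np.sort(rng.uniform(0, L, 60))    # hypothetical origin positions

# Under complete randomness, positions are uniform on (0, L) ...
uniform_test = stats.kstest(origins / L, "uniform")

# ... and gaps between successive origins are approximately exponential;
# clustering would inflate the number of short gaps
gaps = np.diff(origins)
gap_test = stats.kstest(gaps / gaps.mean(), "expon")
```

Small Kolmogorov-Smirnov statistics (large p-values) are consistent with no clustering, matching the paper's negative finding.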

19.
We propose a simple and general resampling strategy to estimate variances for parameter estimators derived from nonsmooth estimating functions. This approach applies to a wide variety of semiparametric and nonparametric problems in biostatistics. It does not require solving estimating equations and is thus much faster than the existing resampling procedures. Its usefulness is illustrated with heteroscedastic quantile regression and censored data rank regression. Numerical results based on simulated and real data are provided.

20.
X. Su, J. Fan. Biometrics 2004, 60(1): 93-99
A method of constructing trees for correlated failure times is put forward. It adopts the backfitting idea of classification and regression trees (CART; Breiman et al., 1984, Classification and Regression Trees). The tree method is developed from the maximized likelihoods associated with the gamma frailty model, and standard likelihood-related techniques are incorporated. The proposed method is assessed through simulations conducted under a variety of model configurations and illustrated using data from the chronic granulomatous disease (CGD) study.
