Similar Articles
20 similar articles found (search time: 62 ms)
1.
Statistical procedures and methodology for assessment of interventions or treatments based on medical data often involve complexities due to incompleteness of the available data, arising from dropout or the inability to complete follow-up until the endpoint of interest. In this article we propose a nonparametric regression model based on censored data for investigating the simultaneous effects of two or more factors. Specifically, we assess the effect of a treatment (dose) and a covariate (e.g., age categories) on the mean survival time of subjects assigned to combinations of the levels of these factors. The proposed method allows for varying levels of censorship in the outcome among different groups of subjects at different levels of the independent variables (factors). We derive the asymptotic distribution of the estimators of the parameters in our model, which then allows for statistical inference. Finally, through a simulation study we assess the effect of the censoring rates on the standard error of these types of estimators. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

2.
Recurrent events data are commonly encountered in medical studies. In many applications, only the number of events during the follow-up period rather than the recurrent event times is available. Two important challenges arise in such studies: (a) a substantial portion of subjects may not experience the event, and (b) we may not observe the event count for the entire study period due to informative dropout. To address the first challenge, we assume that the underlying population consists of two subpopulations: a subpopulation nonsusceptible to the event of interest and a subpopulation susceptible to the event of interest. In the susceptible subpopulation, the event count is assumed to follow a Poisson distribution given the follow-up time and the subject-specific characteristics. We then introduce a frailty to account for informative dropout. The proposed semiparametric frailty models consist of three submodels: (a) a logistic regression model for the probability that a subject belongs to the nonsusceptible subpopulation; (b) a nonhomogeneous Poisson process model with an unspecified baseline rate function; and (c) a Cox model for the informative dropout time. We develop likelihood-based estimation and inference procedures. The maximum likelihood estimators are shown to be consistent. Additionally, the proposed estimators of the finite-dimensional parameters are asymptotically normal and the covariance matrix attains the semiparametric efficiency bound. Simulation studies demonstrate that the proposed methodologies perform well in practical situations. We apply the proposed methods to a clinical trial on patients with myelodysplastic syndromes.
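Marginally (ignoring the frailty and the dropout submodel), the two-subpopulation structure implies a zero-inflated Poisson likelihood for the observed event count. The sketch below is our own constant-rate simplification for illustration, not the authors' full semiparametric model; the function name and parameterization are hypothetical:

```python
import math

def zip_log_lik(counts, follow_up, pi, rate):
    """Log-likelihood of a zero-inflated Poisson model: with probability
    pi a subject is nonsusceptible (count is necessarily 0); otherwise
    the count is Poisson(rate * follow-up time)."""
    ll = 0.0
    for n, t in zip(counts, follow_up):
        mu = rate * t
        if n == 0:
            # a zero can come from either subpopulation
            ll += math.log(pi + (1.0 - pi) * math.exp(-mu))
        else:
            # positive counts arise only from the susceptible subpopulation
            ll += (math.log(1.0 - pi) - mu + n * math.log(mu)
                   - math.lgamma(n + 1))
    return ll
```

Maximizing this over `pi` and `rate` (e.g., by a grid search or a general-purpose optimizer) gives the parametric analogue of the mixture component of the model.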

3.
Semiparametric smoothing methods are commonly used to model longitudinal data, where the interest is in improving efficiency for the regression coefficients. This paper is concerned with estimation in semiparametric varying-coefficient models (SVCMs) for longitudinal data. By the orthogonal projection method, local linear technique, quasi-score estimation, and quasi-maximum likelihood estimation, we propose a two-stage orthogonality-based method to estimate the parameter vector, the coefficient function vector, and the covariance function. The developed procedures can be implemented separately and the resulting estimators do not affect each other. Under some mild conditions, asymptotic properties of the resulting estimators are established explicitly. In particular, the asymptotic behavior of the estimator of the coefficient function vector at the boundaries is examined. Further, the finite sample performance of the proposed procedures is assessed by Monte Carlo simulation experiments. Finally, the proposed methodology is illustrated with an analysis of an acquired immune deficiency syndrome (AIDS) dataset.

4.
In follow-up studies, the disease event time can be subject to left truncation and right censoring. Furthermore, medical advancements have made it possible for patients to be cured of certain types of diseases. In this article, we consider a semiparametric mixture cure model for the regression analysis of left-truncated and right-censored data. The model combines a logistic regression for the probability of event occurrence with the class of transformation models for the time of occurrence. We investigate two techniques for estimating model parameters. The first approach is based on martingale estimating equations (EEs). The second approach is based on the conditional likelihood function given the truncation variables. The asymptotic properties of both proposed estimators are established. Simulation studies indicate that the conditional maximum-likelihood estimator (cMLE) performs well while the estimator based on EEs is very unstable even though it is shown to be consistent. This is a special and intriguing phenomenon for the EE approach under the cure model. We provide insights into this issue and find that the EE approach can be improved significantly by assigning appropriate weights to the censored observations in the EEs. This finding is useful in overcoming the instability of the EE approach in some more complicated situations, where the likelihood approach is not feasible. We illustrate the proposed estimation procedures by analyzing the age at onset of the occiput-wall distance event for patients with ankylosing spondylitis.

5.
Clinical trials with Poisson distributed count data as the primary outcome are common in various medical areas, such as relapse counts in multiple sclerosis trials or the number of attacks in trials for the treatment of migraine. In this article, we present approximate sample size formulae for testing noninferiority using asymptotic tests which are based on restricted or unrestricted maximum likelihood estimators of the Poisson rates. The Poisson outcomes are allowed to be observed for unequal follow-up schemes, and both the situation that the noninferiority margin is expressed in terms of the difference and the situation that it is expressed as a ratio are considered. The exact type I error rates and powers of these tests are evaluated and the accuracy of the approximate sample size formulae is examined. The test statistic using the restricted maximum likelihood estimators (for the difference test problem) and the test statistic that is based on the logarithmic transformation and employs the maximum likelihood estimators (for the ratio test problem) show favorable type I error control and can be recommended for practical application. The approximate sample size formulae show high accuracy even for small sample sizes and provide power values identical or close to the desired ones. The methods are illustrated by a clinical trial example from anesthesia.
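A rough companion to the idea above: a generic Wald-type approximation for the per-group sample size under a rate-difference margin. This is a simplified unrestricted-variance sketch with hypothetical argument names, not the authors' exact restricted-MLE formulae:

```python
import math
from statistics import NormalDist

def poisson_ni_sample_size(rate_exp, rate_ctrl, margin, follow_up,
                           alpha=0.025, power=0.8):
    """Approximate per-group sample size for testing noninferiority of
    Poisson rates on the difference scale,
    H0: rate_exp - rate_ctrl <= -margin  vs.  H1: > -margin,
    using an unrestricted Wald-type normal approximation."""
    z_a = NormalDist().inv_cdf(1.0 - alpha)
    z_b = NormalDist().inv_cdf(power)
    # variance of the estimated rate difference is roughly
    # (rate_exp + rate_ctrl) / (n * follow_up) per group
    var_term = (rate_exp + rate_ctrl) / follow_up
    effect = rate_exp - rate_ctrl + margin
    return math.ceil((z_a + z_b) ** 2 * var_term / effect ** 2)
```

For example, with equal true rates of 2 events per unit of follow-up time, a margin of 0.5, one-sided alpha 0.025, and 80% power, the sketch returns 126 subjects per group; doubling the follow-up roughly halves this.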

6.
The proportion ratio (PR) of responses between an experimental treatment and a control treatment is one of the most commonly used indices to measure the relative treatment effect in a randomized clinical trial. We develop asymptotic and permutation-based procedures for testing equality of treatment effects as well as derive confidence intervals of PRs for multivariate binary matched-pair data under a mixed-effects exponential risk model. To evaluate and compare the performance of these test procedures and interval estimators, we employ Monte Carlo simulation. When the number of matched pairs is large, we find that all test procedures presented here can perform well with respect to Type I error. When the number of matched pairs is small, the permutation-based test procedures developed in this paper are of use. Furthermore, using test procedures (or interval estimators) based on a weighted linear average estimator of treatment effects can improve power (or gain precision) when the treatment effects on all response variables of interest are known to fall in the same direction. Finally, we apply the data taken from a crossover clinical trial that monitored several adverse events of an antidepressive drug to illustrate the practical use of the test procedures and interval estimators considered here.

7.
Malka Gorfine, Li Hsu. Biometrics 2011, 67(2): 415–426
In this work, we provide a new class of frailty-based competing risks models for clustered failure time data. This class is based on expanding the competing risks model of Prentice et al. (1978, Biometrics 34, 541–554) to incorporate frailty variates, with the use of cause-specific proportional hazards frailty models for all the causes. Parametric and nonparametric maximum likelihood estimators are proposed. The main advantages of the proposed class of models, in contrast to the existing models, are: (1) the inclusion of covariates; (2) the flexible structure of the dependency among the various types of failure times within a cluster; and (3) the unspecified within-subject dependency structure. The proposed estimation procedures produce the most efficient parametric and semiparametric estimators and are easy to implement. Simulation studies show that the proposed methods perform very well in practical situations.

8.
In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate, but is recorded only at the times of the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between visits where the status has changed, and is thus interval-censored. One may view the interval-censored covariate of the secondary event status as a missing time-varying covariate, yet the missingness is partial since partial information is provided throughout the follow-up period. The current practice of using the latest observed status produces biased estimators, and the existing missing covariate techniques cannot accommodate the special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study.

9.
The ability to accurately estimate the sample size required by a stepped-wedge (SW) cluster randomized trial (CRT) routinely depends upon the specification of several nuisance parameters. If these parameters are misspecified, the trial could be overpowered, leading to increased cost, or underpowered, enhancing the likelihood of a false negative. We address this issue here for cross-sectional SW-CRTs, analyzed with a particular linear mixed model, by proposing methods for blinded and unblinded sample size reestimation (SSRE). First, blinded estimators for the variance parameters of a SW-CRT analyzed using the Hussey and Hughes model are derived. Following this, procedures for blinded and unblinded SSRE after any time period in a SW-CRT are detailed. The performance of these procedures is then examined and contrasted using two example trial design scenarios. We find that if the two key variance parameters were underspecified by 50%, the SSRE procedures were able to increase power over the conventional SW-CRT design by up to 41%, resulting in an empirical power above the desired level. Thus, though there are practical issues to consider, the performance of the procedures means researchers should consider incorporating SSRE into future SW-CRTs.

10.
Count data sets are traditionally analyzed using the ordinary Poisson distribution. However, such a model has limited applicability, as it can be somewhat restrictive in handling specific data structures. In this case, the need arises for alternative models that accommodate, for example, (a) zero-modification (inflation or deflation of the frequency of zeros), (b) overdispersion, and (c) individual heterogeneity arising from clustering or repeated (correlated) measurements made on the same subject. Cases (a)–(b) and (b)–(c) are often treated together in the statistical literature with several practical applications, but models supporting all at once are less common. Hence, this paper's primary goal is to jointly address these issues by deriving a mixed-effects regression model based on the hurdle version of the Poisson–Lindley distribution. In this framework, the zero-modification is incorporated by assuming that a binary probability model determines which outcomes are zero-valued, and a zero-truncated process is responsible for generating positive observations. Approximate posterior inferences for the model parameters were obtained from a fully Bayesian approach based on the Adaptive Metropolis algorithm. Intensive Monte Carlo simulation studies were performed to assess the empirical properties of the Bayesian estimators. The proposed model was considered for the analysis of a real data set, and its competitiveness regarding some well-established mixed-effects models for count data was evaluated. A sensitivity analysis to detect observations that may impact parameter estimates was performed based on standard divergence measures. The Bayesian p-value and the randomized quantile residuals were considered for model diagnostics.
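The hurdle structure can be illustrated with a small sampler. The sketch below uses a plain zero-truncated Poisson for the positive part rather than the Poisson–Lindley distribution of the paper, purely to show the two-part mechanism; the function names are ours:

```python
import math
import random

def poisson_draw(rate, rng):
    """Knuth's multiplicative Poisson sampler (adequate for small rates)."""
    limit = math.exp(-rate)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def sample_hurdle_poisson(p_zero, rate, rng):
    """Hurdle draw: a Bernoulli gate decides whether the outcome is zero;
    otherwise a zero-truncated Poisson(rate) variate is drawn by rejection."""
    if rng.random() < p_zero:
        return 0
    while True:
        k = poisson_draw(rate, rng)
        if k > 0:  # reject zeros so the positive part is zero-truncated
            return k
```

Because zeros can only come from the gate, the zero frequency is governed by `p_zero` alone, which is exactly the zero-modification the abstract describes.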

11.
Small area estimation with M-quantile models was proposed by Chambers and Tzavidis (2006). The key target of this approach to small area estimation is to obtain reliable and outlier-robust estimates while avoiding the need for strong parametric assumptions. This approach, however, does not allow for the use of unit-level survey weights, making the design consistency of the estimators questionable unless the sampling design is self-weighting within small areas. In this paper, we adopt a model-assisted approach and construct design-consistent small area estimators that are based on the M-quantile small area model. Analytic and bootstrap estimators of the design-based variance are discussed. The proposed estimators are empirically evaluated in the presence of complex sampling designs.

12.
Currently, among multiple comparison procedures for dependent groups, a bootstrap-t with a 20% trimmed mean performs relatively well in terms of both Type I error probabilities and power. However, trimmed means suffer from two general concerns described in the paper. Robust M-estimators address these concerns, but as yet no method has been found that gives good control over the probability of a Type I error when sample sizes are small. The paper suggests using instead a modified one-step M-estimator that retains the advantages of both trimmed means and robust M-estimators. Yet another concern is that the more successful methods for trimmed means can be too conservative in terms of Type I errors. Two methods for performing all pairwise multiple comparisons are considered. In simulations, both methods avoid a familywise error (FWE) rate larger than the nominal level. The method based on comparing measures of location associated with the marginal distributions can have an actual FWE that is well below the nominal level when variables are highly correlated. However, the method based on difference scores performs reasonably well with very small sample sizes, and it generally performs better than any of the methods studied in Wilcox (1997b).

13.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. The problem considered often occurs in, for example, biological and medical studies such as medical follow-up studies and clinical trials. For the problem, we present and study several nonparametric test procedures that include methods based on both absolute and squared survival differences as well as simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based tests and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies are performed to evaluate and compare the proposed methods with existing methods and suggest that the proposed tests work well for nonmonotone alternatives as well as monotone alternatives. An illustrative example is presented.

14.
Quantiles, especially the medians, of survival times are often used as summary statistics to compare the survival experiences between different groups. Quantiles are robust against outliers and preferred over the mean. Multivariate failure time data often arise in biomedical research. For example, in clinical trials, each patient in the study may experience multiple events which may be of the same type or distinct types, while in family studies of genetic diseases or litter-matched mice studies, failure times for subjects in the same cluster may be correlated. In this article, we propose nonparametric procedures for the estimation of quantiles with multivariate failure time data. We show that the proposed estimators asymptotically follow a multivariate normal distribution. The asymptotic variance-covariance matrix of the estimated quantiles is estimated based on kernel smoothing and bootstrap techniques. Simulation results show that the proposed estimators perform well in finite samples. The methods are illustrated with the burn-wound infection data and the Diabetic Retinopathy Study (DRS) data.
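For a single (univariate) sample, the quantile estimator described above reduces to inverting the Kaplan-Meier curve. A self-contained sketch, assuming right-censored data and ignoring the multivariate covariance machinery of the paper; the function name is ours:

```python
def km_quantile(times, events, q=0.5):
    """Kaplan-Meier estimate of the q-th survival quantile: the smallest
    observed event time t with S(t) <= 1 - q.  `events` flags 1 for an
    observed event and 0 for a right-censored observation."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t and e == 1)  # events at t
        m = sum(1 for tt, _ in data if tt == t)             # leaving risk set
        if d > 0:
            surv *= 1.0 - d / n_at_risk
            if surv <= 1.0 - q:
                return t
        n_at_risk -= m
        i += m
    return None  # quantile not reached within follow-up
```

With heavy censoring the curve may never fall below 1 - q, in which case the quantile is undefined and the sketch returns `None`; the paper's interval estimates quantify this uncertainty.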

15.
The aim of this study is to determine how stakeholder engagement can be adapted for the conduct of COVID-19-related clinical trials in sub-Saharan Africa. Nine essential stakeholder engagement practices were reviewed: formative research; stakeholder engagement plan; communications and issues management plan; protocol development; informed consent process; standard of prevention for vaccine research and standard of care for treatment research; policies on trial-related physical, psychological, financial, and/or social harms; trial accrual, follow-up, exit, trial closure, and results dissemination; and post-trial access to trial products or procedures. The norms, values, and practices of collectivist societies in sub-Saharan Africa and the low research literacy pose challenges to the conduct of clinical trials. Civil-society organizations, members of community advisory boards and ethics committees, young persons, COVID-19 survivors, researchers, government, and the private sector are assets for the implementation and translation of COVID-19-related clinical trials. Adapting ethics guidelines to the socio-cultural context of the region can facilitate achieving the aim of stakeholder engagement.

16.
In many clinical trials, the primary endpoint is time to an event of interest, for example, time to cardiac attack or tumor progression, and the statistical power of these trials is primarily driven by the number of events observed during the trials. In such trials, the number of events observed is impacted not only by the number of subjects enrolled but also by other factors including the event rate and the follow-up duration. Consequently, it is important for investigators to be able to accurately monitor and predict patient accrual and event times so as to predict the times of interim and final analyses and enable efficient allocation of research resources, which have long been recognized as important aspects of trial design and conduct. The existing methods for prediction of event times all assume that patient accrual follows a Poisson process with a constant Poisson rate over time; however, it is fairly common in real-life clinical trials that the Poisson rate changes over time. In this paper, we propose a Bayesian joint modeling approach for monitoring and prediction of accrual and event times in clinical trials. We employ a nonhomogeneous Poisson process to model patient accrual and a parametric or nonparametric model for the event and loss-to-follow-up processes. Compared to existing methods, our proposed methods are more flexible and robust in that we model accrual and event/loss-to-follow-up times jointly and allow the underlying accrual rates to change over time. We evaluate the performance of the proposed methods through simulation studies and illustrate the methods using data from a real oncology trial.
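Patient accrual under a time-varying rate can be simulated by Lewis-Shedler thinning of a bounding homogeneous process. A sketch under the assumption that the intensity is bounded above by `rate_max`; the function names are illustrative, not from the paper:

```python
import random

def simulate_accrual(rate_fn, rate_max, horizon, rng):
    """Accrual times on (0, horizon] from a nonhomogeneous Poisson process
    with intensity rate_fn(t) <= rate_max, via Lewis-Shedler thinning."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)  # candidate arrival from bounding HPP
        if t > horizon:
            return times
        if rng.random() <= rate_fn(t) / rate_max:
            times.append(t)  # accept with probability rate_fn(t) / rate_max
```

For instance, `rate_fn = lambda t: min(t, 5.0)` mimics a ramp-up phase in which accrual grows linearly before plateauing at 5 patients per month, a pattern the constant-rate Poisson assumption cannot capture.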

17.
Recently, meta-analysis has been widely utilized to combine information across multiple studies to evaluate a common effect. Integrating data from similar studies is particularly useful in genomic studies where the individual study sample sizes are not large relative to the number of parameters of interest. In this article, we are interested in developing robust prognostic rules for the prediction of t-year survival based on multiple studies. We propose to construct a composite score for prediction by fitting a stratified semiparametric transformation model that allows the studies to have related but not identical outcomes. To evaluate the accuracy of the resulting score, we provide point and interval estimators for the commonly used accuracy measures including the time-specific receiver operating characteristic curves, and positive and negative predictive values. We apply the proposed procedures to develop prognostic rules for the 5-year survival of breast cancer patients based on five breast cancer genomic studies.

18.
Joint modeling of recurrent events and a terminal event has been studied extensively in the past decade. However, most of the previous works assumed constant regression coefficients. This paper proposes a joint model with time-varying coefficients in both event components. The proposed model not only accommodates the correlation between the two types of events, but also characterizes the potential time-varying covariate effects. It is especially useful for evaluating the effects of long-term risk factors that may vary over time. A Gaussian frailty is used to model the correlation between event times. The nonparametric time-varying coefficients are modeled using cubic splines with penalty terms. A simulation study shows that the proposed estimators perform well. The model is used to analyze the readmission rate and mortality jointly for stroke patients admitted to Veterans Administration (VA) Hospitals.

19.
The meta-analysis of diagnostic accuracy studies is often of interest in screening programs for many diseases. The typical summary statistics for studies chosen for a diagnostic accuracy meta-analysis are two dimensional: sensitivities and specificities. The common statistical analysis approach for the meta-analysis of diagnostic studies is based on the bivariate generalized linear mixed model (BGLMM), which has study-specific interpretations. In this article, we present a population-averaged (PA) model using generalized estimating equations (GEE) for making inference on the mean specificity and sensitivity of a diagnostic test in the population represented by the meta-analytic studies. We also derive the marginalized counterparts of the regression parameters from the BGLMM. We illustrate the proposed PA approach through two dataset examples and compare performance of estimators of the marginal regression parameters from the PA model with those of the marginalized regression parameters from the BGLMM through Monte Carlo simulation studies. Overall, both the marginalized BGLMM and GEE with sandwich standard errors maintained nominal 95% confidence interval coverage levels for mean specificity and mean sensitivity in meta-analyses of 25 or more studies, even under misspecification of the covariance structure of the bivariate positive test counts for diseased and nondiseased subjects.
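As a crude baseline for the marginal quantities targeted by the PA model, per-study logit sensitivities can be pooled by inverse-variance weighting. This naive sketch (with a 0.5 continuity correction) is only a benchmark, not the BGLMM or GEE estimators compared in the article; the function name is ours:

```python
import math

def pooled_logit_sensitivity(tp, fn):
    """Inverse-variance pooling of per-study logit sensitivities from
    true-positive (tp) and false-negative (fn) counts, with a 0.5
    continuity correction, back-transformed to the probability scale."""
    num = den = 0.0
    for t, f in zip(tp, fn):
        t, f = t + 0.5, f + 0.5
        logit = math.log(t / f)
        weight = 1.0 / (1.0 / t + 1.0 / f)  # inverse of var(logit) approx.
        num += weight * logit
        den += weight
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # inverse-logit back-transform
```

Unlike the GEE approach, this ignores the within-study correlation between sensitivity and specificity, which is precisely the limitation the bivariate models address.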

20.
The problem of combining information from separate trials is a key consideration when performing a meta-analysis or planning a multicentre trial. Although there is a considerable journal literature on meta-analysis based on individual patient data (IPD), i.e. a one-step IPD meta-analysis, versus analysis based on summary data, i.e. a two-step IPD meta-analysis, recent articles in the medical literature indicate that there is still confusion and uncertainty as to the validity of an analysis based on aggregate data. In this study, we address one of the central statistical issues by considering the estimation of a linear function of the mean, based on linear models for summary data and for IPD. The summary data from a trial is assumed to comprise the best linear unbiased estimator, or maximum likelihood estimator of the parameter, along with its covariance matrix. The setup, which allows for the presence of random effects and covariates in the model, is quite general and includes many of the commonly employed models, for example, linear models with fixed treatment effects and fixed or random trial effects. For this general model, we derive a condition under which the one-step and two-step IPD meta-analysis estimators coincide, extending earlier work considerably. The implications of this result for the specific models mentioned above are illustrated in detail, both theoretically and in terms of two real data sets, and the roles of balance and heterogeneity are highlighted. Our analysis also shows that when covariates are present, which is typically the case, the two estimators coincide only under extra simplifying assumptions, which are somewhat unrealistic in practice.
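The coincidence of the one-step and two-step estimators under balance can be checked numerically in the simplest common-effect setting (no covariates, a common residual variance, balanced arms within each trial). The function names and toy data below are ours, for illustration only:

```python
def two_step_estimate(estimates, variances):
    """Two-step IPD meta-analysis: inverse-variance weighted average of
    per-trial treatment-effect estimates."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

def one_step_estimate(trial_data):
    """One-step analogue for a common-effect model without covariates:
    difference of the pooled treatment and control means across all
    patients.  trial_data maps trial id -> (treatment outcomes, controls)."""
    treat = [y for t, _ in trial_data.values() for y in t]
    ctrl = [y for _, c in trial_data.values() for y in c]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
```

With arm sizes n_i balanced within each trial and a common residual variance, the per-trial variance is proportional to 2/n_i, so the two-step weights reduce to n_i and the weighted average reproduces the pooled one-step mean difference, which is the balance condition the abstract alludes to.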


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)