Similar literature
Found 20 similar records (search time: 31 ms)
1.
The intraclass correlation is commonly used with clustered data. It is often estimated based on fitting a model to hierarchical data and it leads, in turn, to several concepts such as reliability, heritability, inter‐rater agreement, etc. For data where linear models can be used, such measures can be defined as ratios of variance components. Matters are more difficult for non‐Gaussian outcomes. The focus here is on count and time‐to‐event outcomes where so‐called combined models are used, extending generalized linear mixed models, to describe the data. These models combine normal and gamma random effects to allow for both correlation due to data hierarchies as well as for overdispersion. Furthermore, because the models admit closed‐form expressions for the means, variances, higher moments, and even the joint marginal distribution, it is demonstrated that closed forms of intraclass correlations exist. The proposed methodology is illustrated using data from agricultural and livestock studies.
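For the linear-model case mentioned above, the intraclass correlation is the between-cluster variance divided by the total variance. A minimal sketch, using the classical one-way ANOVA estimator on simulated balanced clusters (not the paper's combined-model estimator for count or time-to-event outcomes):

```python
import numpy as np

def icc_anova(groups):
    """One-way ANOVA estimator of the intraclass correlation.

    groups: list of 1-D arrays, one per cluster (assumed balanced).
    Returns sigma2_between / (sigma2_between + sigma2_within).
    """
    k = len(groups)
    n = len(groups[0])                      # balanced cluster size
    grand = np.mean(np.concatenate(groups))
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    s2_between = max((msb - msw) / n, 0.0)  # truncate negative estimates at 0
    return s2_between / (s2_between + msw)

rng = np.random.default_rng(0)
# 200 clusters of size 10; true ICC = 1 / (1 + 2) = 1/3
groups = [rng.normal(rng.normal(0, 1), np.sqrt(2), size=10) for _ in range(200)]
print(round(icc_anova(groups), 2))
```

With enough clusters the estimate should land close to the true ratio of variance components, here 1/3.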

2.
Recurrent event data arise in longitudinal follow‐up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram‐positive and non‐Gram‐positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter‐recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two‐stage estimation procedure for estimating unknown parameters and establish the consistency of the two‐stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study.

4.
In clinical trials with time‐to‐event outcomes, it is of interest to predict when a prespecified number of events can be reached. Interim analysis is conducted to estimate the underlying survival function. When another correlated time‐to‐event endpoint is available, both outcome variables can be used to improve estimation efficiency. In this paper, we propose to use the convolution of two time‐to‐event variables to estimate the survival function of interest. Propositions and examples are provided based on exponential models that accommodate possible change points. We further propose a new estimating equation for the expected time that exploits the relationship of the two endpoints. Simulations and the analysis of real data show that the proposed methods with bivariate information yield significant improvement in prediction over the univariate method.
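As a toy illustration of the convolution idea, consider two independent exponential endpoints (a simpler setting than the paper's change-point exponential models): the survival function of their sum has a closed form obtained by convolving the two densities, which a quick Monte Carlo run can confirm.

```python
import numpy as np

def surv_sum_exp(t, lam1, lam2):
    """Survival function of X + Y for independent X ~ Exp(lam1),
    Y ~ Exp(lam2) with lam1 != lam2, via convolution of the densities:
    S(t) = (lam2 * exp(-lam1 t) - lam1 * exp(-lam2 t)) / (lam2 - lam1)."""
    return (lam2 * np.exp(-lam1 * t) - lam1 * np.exp(-lam2 * t)) / (lam2 - lam1)

# Monte Carlo check of the closed form
rng = np.random.default_rng(1)
x = rng.exponential(1 / 0.5, 100_000)   # lam1 = 0.5 (numpy takes the mean)
y = rng.exponential(1 / 1.5, 100_000)   # lam2 = 1.5
t = 2.0
print(surv_sum_exp(t, 0.5, 1.5), np.mean(x + y > t))
```

The analytic value and the empirical proportion should agree to roughly two decimal places at this sample size.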

5.
A common problem with observational datasets is that not all events of interest may be detected. For example, observing animals in the wild can be difficult when animals move, hide, or cannot be closely approached. We consider time series of events recorded in conditions where events are occasionally missed by observers or observational devices. These time series are not restricted to behavioral protocols, but can be any cyclic or recurring process where discrete outcomes are observed. Undetected events cause biased inferences on the process of interest, and statistical analyses are needed that can identify and correct the compromised detection processes. Missed observations in time series lead to observed time intervals between events at multiples of the true inter‐event time, which conveys information on their detection probability. We derive the theoretical probability density function for observed intervals between events that includes a probability of missed detection. Methodology and software tools are provided for analysis of event data with potential observation bias and its removal. The methodology was applied to simulation data and a case study of defecation rate estimation in geese, which is commonly used to estimate their digestive throughput and energetic uptake, or to calculate goose usage of a feeding site from dropping density. Simulations indicate that at a moderate chance to miss arrival events (p = 0.3), uncorrected arrival intervals were biased upward by up to a factor of 3, while parameter values corrected for missed observations were within 1% of their true simulated value. A field case study shows that not accounting for missed observations leads to substantial underestimates of the true defecation rate in geese, and spurious rate differences between sites, which are introduced by differences in observational conditions.
These results show that the derived methodology can be used to effectively remove observational biases in time‐ordered event data.
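The core bias mechanism above can be sketched in a few lines: if each event is missed independently with probability p, an observed interval is the sum of a geometric number of true intervals, so the naive event rate is biased downward by the factor (1 - p). A minimal simulation, with p treated as known (the paper instead estimates it from the multiples structure of the interval distribution):

```python
import numpy as np

rng = np.random.default_rng(2)
p_miss = 0.3        # chance an event goes undetected (as in the paper's simulations)
true_rate = 1.0     # one event per time unit

# simulate a Poisson event stream and thin it with probability p_miss
times = np.cumsum(rng.exponential(1 / true_rate, 50_000))
detected = times[rng.random(times.size) > p_miss]
intervals = np.diff(detected)

naive_rate = 1 / intervals.mean()            # biased by the factor (1 - p_miss)
corrected_rate = naive_rate / (1 - p_miss)   # undo the thinning bias
print(naive_rate, corrected_rate)
```

The naive rate comes out near 0.7 while the corrected rate recovers the true value of 1.0, matching the within-1% accuracy the simulations report.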

6.
When analyzing clinical trials with a stratified population, homogeneity of treatment effects is a common assumption in survival analysis. However, in the context of recent developments in clinical trial design, which aim to test multiple targeted therapies in corresponding subpopulations simultaneously, the assumption that there is no treatment‐by‐stratum interaction seems inappropriate. It becomes an issue if the expected sample size of the strata makes it unfeasible to analyze the trial arms individually. Alternatively, one might choose as primary aim to prove efficacy of the overall (targeted) treatment strategy. When testing for the overall treatment effect, a violation of the no‐interaction assumption renders it necessary to deviate from standard methods that rely on this assumption. We investigate the performance of different methods for sample size calculation and data analysis under heterogeneous treatment effects. The commonly used sample size formula by Schoenfeld is compared to another formula by Lachin and Foulkes, and to an extension of Schoenfeld's formula allowing for stratification. Beyond the widely used (stratified) Cox model, we explore the lognormal shared frailty model, and a two‐step analysis approach as potential alternatives that attempt to adjust for interstrata heterogeneity. We carry out a simulation study for a trial with three strata and violations of the no‐interaction assumption. The extension of Schoenfeld's formula to heterogeneous strata effects provides the most reliable sample size with respect to desired versus actual power. The two‐step analysis and frailty model prove to be more robust against loss of power caused by heterogeneous treatment effects than the stratified Cox model and should be preferred in such situations.
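For reference, Schoenfeld's formula mentioned above gives the number of events required by a two-sided log-rank test under proportional hazards; the sketch below covers only the homogeneous (unstratified) case, not the paper's stratified extension.

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, alloc=0.5):
    """Schoenfeld's formula: events needed to detect hazard ratio `hr`
    with a two-sided log-rank test; `alloc` is the allocation proportion."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z ** 2 / (alloc * (1 - alloc) * math.log(hr) ** 2)

# e.g. detecting a hazard ratio of 0.7 with 80% power at the 5% level
print(round(schoenfeld_events(0.7)))   # ≈ 247 events
```

Note the formula counts events, not patients; the patient sample size then follows from the anticipated event probability, which is where the Lachin and Foulkes approach differs.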

11.
The use of control charts for monitoring schemes in a medical context should consider adjustments to incorporate the specific risk for each individual. Some authors propose the use of a risk‐adjusted survival time cumulative sum (RAST CUSUM) control chart to monitor a time‐to‐event outcome, possibly right censored, using conventional survival models, which do not contemplate the possibility of cure of a patient. We propose to extend this approach with a risk‐adjusted CUSUM chart based on a cure rate model. We consider a regression model in which the covariates affect the cure fraction. The CUSUM scores are obtained for Weibull and log‐logistic promotion time models to monitor a scale parameter for nonimmune individuals. A simulation study was conducted to evaluate and compare the performance of the proposed chart (RACUF CUSUM) with RAST CUSUM, based on optimal control limits and average run length in different situations. As a result, we note that the RAST CUSUM chart is inappropriate when applied to data with a cure rate, while the proposed RACUF CUSUM chart seems to have similar performance if applied to data without a cure rate. The proposed chart is illustrated with simulated data and with a real data set of patients with heart failure treated at the Heart Institute (InCor), at the University of São Paulo, Brazil.
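The generic mechanics of a one-sided CUSUM chart can be sketched as follows. In the paper's setting the increments W_t are risk-adjusted log-likelihood-ratio scores derived from Weibull or log-logistic promotion time cure models; the scores below are illustrative numbers only.

```python
def cusum(scores, h):
    """Upper one-sided CUSUM: accumulate log-likelihood-ratio scores W_t,
    reset the statistic at zero, and signal once it crosses the limit h."""
    s, path = 0.0, []
    for w in scores:
        s = max(0.0, s + w)
        path.append(s)
        if s > h:
            return path, len(path)   # 1-based signal time
    return path, None                # no signal within the data

path, signal = cusum([0.5, -0.2, 0.8, 1.2, 0.4], h=2.0)
print(signal)   # signals at observation 4
```

Choosing the control limit h to achieve a target in-control average run length is the calibration step the simulation study addresses.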

Interval‐censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. In some settings, chronic disease processes may resolve, and individuals will cease to be at risk of events at the time of disease resolution. We develop an expectation‐maximization algorithm for fitting a dynamic mover‐stayer model to interval‐censored recurrent event data under a Markov model with a piecewise‐constant baseline rate function given a latent process. The model is motivated by settings in which the event times and the resolution time of the disease process are unobserved. The likelihood and algorithm are shown to yield estimators with small empirical bias in simulation studies. Data are analyzed on the cumulative number of damaged joints in patients with psoriatic arthritis where individuals experience disease remission.

15.
Joint modeling of various longitudinal sequences has received quite a bit of attention in recent times. This paper proposes a so‐called marginalized joint model for longitudinal continuous and repeated time‐to‐event outcomes on the one hand and a marginalized joint model for bivariate repeated time‐to‐event outcomes on the other. The model has several appealing features. It flexibly allows for association among measurements of the same outcome at different occasions as well as among measurements on different outcomes recorded at the same time. The model also accommodates overdispersion. The time‐to‐event outcomes are allowed to be censored. While the model builds upon the generalized linear mixed model framework, it is such that model parameters enjoy a direct marginal interpretation. All of these features have been considered before, but here we bring them together in a unified, flexible framework. The model framework's properties are scrutinized using a simulation study. The models are applied to data from a chronic heart failure study and to a so‐called comet assay, encountered in preclinical research. Perhaps surprisingly, the models can be fitted relatively easily using standard statistical software.

16.
To optimize resources, randomized clinical trials with multiple arms can be an attractive option to simultaneously test various treatment regimens in pharmaceutical drug development. The motivation for this work was the successful conduct and positive final outcome of a three‐arm randomized clinical trial primarily assessing whether obinutuzumab plus chlorambucil in patients with chronic lymphocytic leukemia and coexisting conditions is superior to chlorambucil alone based on a time‐to‐event endpoint. The inference strategy of this trial was based on a closed testing procedure. We compare this strategy to three potential alternatives to run a three‐arm clinical trial with a time‐to‐event endpoint. The primary goal is to quantify the differences between these strategies in terms of the time it takes until the first analysis and thus potential approval of a new drug, the number of required events, and power. Operational aspects of implementing the various strategies are discussed. In conclusion, using a closed testing procedure results in the shortest time to the first analysis with a minimal loss in power. Therefore, closed testing procedures should be part of the statistician's standard clinical trials toolbox when planning multiarm clinical trials.
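A closed testing procedure for two elementary hypotheses works as sketched below: an elementary hypothesis is rejected only if it and the intersection hypothesis are both rejected at level alpha, which controls the familywise error rate. This is a generic Bonferroni-based sketch, not necessarily the intersection test the trial actually used.

```python
def closed_test(p1, p2, p12=None, alpha=0.05):
    """Closed testing for two elementary hypotheses H1, H2.

    Reject Hi at level alpha iff both the intersection H12 and Hi are
    rejected. If no p-value for H12 is supplied, test it by Bonferroni.
    """
    p12 = p12 if p12 is not None else min(2 * min(p1, p2), 1.0)
    reject_12 = p12 <= alpha
    return {"H1": reject_12 and p1 <= alpha,
            "H2": reject_12 and p2 <= alpha}

print(closed_test(0.01, 0.20))   # H1 rejected, H2 not
```

With three arms (three pairwise hypotheses) the closure grows to seven intersection hypotheses, but the reject-all-supersets logic is the same.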

18.
The impact of crossing (‘stacking’) genetically modified (GM) events on maize‐grain biochemical composition was compared with the impact of generating nonGM hybrids. The compositional similarity of seven GM stacks containing event DAS‐Ø15Ø7‐1, and their matched nonGM near‐isogenic hybrids (iso‐hybrids) was compared with the compositional similarity of concurrently grown nonGM hybrids and these same iso‐hybrids. Scatter plots were used to visualize comparisons among hybrids and a coefficient of identity (per cent of variation explained by line of identity) was calculated to quantify the relationships within analyte profiles. The composition of GM breeding stacks was more similar to the composition of iso‐hybrids than was the composition of nonGM hybrids. NonGM breeding more strongly influenced crop composition than did transgenesis or stacking of GM events. These findings call into question the value of uniquely requiring composition studies for GM crops, especially for breeding stacks composed of GM events previously found to be compositionally normal.
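The coefficient of identity described above can be formalized in an R²-like way, taking residuals around the 1:1 line instead of a fitted regression line. This is one plausible reading of the abstract and is hypothetical; the authors' exact definition may differ.

```python
import numpy as np

def coef_identity(x, y):
    """Per cent of variation in y explained by the line of identity y = x:
    an R^2-style measure with the 1:1 line in place of a fitted line."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    ss_identity = np.sum((y - x) ** 2)          # residuals around y = x
    ss_total = np.sum((y - y.mean()) ** 2)      # total variation in y
    return 100 * (1 - ss_identity / ss_total)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(round(coef_identity(x, x), 1))            # perfect agreement -> 100.0
```

Unlike an ordinary R², this measure penalizes any systematic offset between the two analyte profiles, which is exactly what a compositional-similarity comparison needs.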

19.
Deviations from typical environmental conditions can provide insight into how organisms may respond to future weather extremes predicted by climate modeling. During an episodic and multimonth heat wave event (i.e., ambient temperature up to 43.4°C), we studied the thermal ecology of a ground‐dwelling bird species in Western Oklahoma, USA. Specifically, we measured black bulb temperature (Tbb) and vegetation parameters at northern bobwhite (Colinus virginianus; hereafter bobwhite) adult and brood locations as well as at stratified random points in the study area. On the hottest days (i.e., ≥39°C), adults and broods obtained thermal refuge using tall woody cover that remained on average up to 16.51°C cooler than random sites on the landscape which reached >57°C. We also found that refuge sites used by bobwhites moderated thermal conditions by more than twofold compared to stratified random sites on the landscape but that Tbb commonly exceeded thermal stress thresholds for bobwhites (39°C) for several hours of the day within thermal refuges. The serendipitous high heat conditions captured in our study represent extreme heat for our study region as well as thermal stress for our study species, and subsequently allowed us to assess ground‐dwelling bird responses to temperatures that are predicted to become more common in the future. Our findings confirm the critical importance of tall woody cover for moderating temperatures and functioning as important islands of thermal refuge for ground‐dwelling birds, especially during extreme heat. However, the potential for extreme heat loads within thermal refuges that we observed (albeit much less extreme than the landscape) indicates that the functionality of tall woody cover to mitigate heat extremes may be increasingly limited in the future, thereby reinforcing predictions that climate change represents a clear and present danger for these species.
