Similar documents
1.
An estimator of the hazard rate function from discrete failure time data is obtained by semiparametric smoothing of the (nonsmooth) maximum likelihood estimator, which is achieved by repeated multiplication of a Markov chain transition-type matrix. This matrix is constructed so as to have a given standard discrete parametric hazard rate model, termed the vehicle model, as its stationary hazard rate. As with the discrete density estimation case, the proposed estimator gives improved performance when the vehicle model is a good one and otherwise provides a nonparametric method comparable to the only purely nonparametric smoother discussed in the literature. The proposed semiparametric smoothing approach is then extended to hazard models with covariates and is illustrated by applications to simulated and real data sets.
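The shrink-toward-a-vehicle-model idea can be sketched in a toy form. This is an illustrative analogue, not the authors' exact matrix construction: `smooth_hazard`, the tridiagonal averaging matrix, and the mixing weight `eps` are all assumptions; the vehicle hazard is the fixed point of the iterated update.

```python
import numpy as np

def smooth_hazard(h_mle, h_vehicle, n_steps=10, eps=0.3):
    """Toy analogue of semiparametric hazard smoothing: repeatedly
    apply a row-stochastic local-averaging (transition-type) matrix
    and shrink toward a parametric 'vehicle' hazard, which is the
    fixed point of the update."""
    m = len(h_mle)
    A = np.zeros((m, m))                          # row-stochastic local averaging
    for i in range(m):
        nbrs = [j for j in (i - 1, i, i + 1) if 0 <= j < m]
        A[i, nbrs] = 1.0 / len(nbrs)
    h = np.asarray(h_mle, float)
    v = np.asarray(h_vehicle, float)
    for _ in range(n_steps):
        h = (1 - eps) * (A @ h) + eps * v         # smooth, then pull toward vehicle
    return h
```

With few steps the raw hazard is only lightly smoothed; as the number of steps grows the estimate converges to the vehicle model, mirroring the bias-variance trade-off described above.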

2.
Ross EA  Moore D 《Biometrics》1999,55(3):813-819
We have developed methods for modeling discrete or grouped time, right-censored survival data collected from correlated groups or clusters. We assume that the marginal hazard of failure for individual items within a cluster is specified by a linear log odds survival model and the dependence structure is based on a gamma frailty model. The dependence can be modeled as a function of cluster-level covariates. Likelihood equations for estimating the model parameters are provided. Generalized estimating equations for the marginal hazard regression parameters and pseudolikelihood methods for estimating the dependence parameters are also described. Data from two clinical trials are used for illustration purposes.

3.
Kozumi H 《Biometrics》2000,56(4):1002-1006
This paper considers discrete survival data from a Bayesian point of view. A sequence of baseline hazard functions, which plays an important role in the discrete hazard function, is modeled with a hidden Markov chain. It is explained how the resulting model is implemented via Markov chain Monte Carlo methods. The model is illustrated by an application to real data.

4.
D P Byar  N Mantel 《Biometrics》1975,31(4):943-947
Interrelationships among three response-time models which incorporate covariate information are explored. The most general of these models is the logistic-exponential, in which the log odds of the probability of responding in a fixed interval is assumed to be a linear function of the covariates; this model includes a parameter W for the width of the discrete time intervals in which responses occur. As W tends to 0, this model is equivalent to a continuous-time exponential model in which the log hazard is linear in the covariates. As W tends to infinity, it is equivalent to a continuous-time exponential model in which the hazard itself is a linear function of the covariates. This second model was fitted to the data used in an earlier publication describing the logistic-exponential model, and very close agreement of the estimates of the regression coefficients is demonstrated.
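The small-W limit can be checked numerically: with continuous-time hazard exp(βx), the log odds of responding within an interval of width W differ between two covariate values by an amount that approaches the log-hazard difference as W shrinks. The covariate values below are illustrative.

```python
import math

def log_odds_of_response(beta_x, W):
    """Log odds of responding within an interval of width W when the
    continuous-time hazard is exp(beta_x) (toy illustration)."""
    lam = math.exp(beta_x)
    p = 1.0 - math.exp(-lam * W)   # P(response in interval)
    return math.log(p / (1.0 - p))

# As W -> 0, the log-odds difference between two covariate values
# approaches the log-hazard difference (here 1.0 - 0.5 = 0.5).
for W in (1.0, 0.1, 0.001):
    print(W, log_odds_of_response(1.0, W) - log_odds_of_response(0.5, W))
```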

5.
The conventional line transect approach of estimating effective search width from the perpendicular distance distribution is inappropriate in certain types of surveys, e.g., when an unknown fraction of the animals on the track line is detected, the animals can be observed only at discrete points in time, there are errors in positional measurements, and covariate heterogeneity exists in detectability. For such situations a hazard probability framework for independent observer surveys is developed. The likelihood of the data, including observed positions of both initial and subsequent observations of animals, is established under the assumption of no measurement errors. To account for measurement errors and possibly other complexities, this likelihood is modified by a function estimated from extensive simulations. This general method of simulated likelihood is explained and the methodology applied to data from a double-platform survey of minke whales in the northeastern Atlantic in 1995.

6.
This paper discusses discrete time proportional hazard models and suggests a new class of flexible hazard functions. Explicitly modeling the discreteness of the data is important, since standard continuous models are biased; allowing for flexibility in the hazard estimation is desirable, since strong parametric restrictions are likely to be similarly misleading. Simulations compare continuous and discrete models when data are generated by grouping, and demonstrate that simple approximations recover the underlying hazards well and outperform nonparametric maximum likelihood estimates in terms of mean squared error.
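A minimal illustration of the grouping issue: with a constant true hazard and unit grouping intervals, the discrete per-interval hazard understates the continuous rate, but the complementary-log-log back-transform (the standard discrete-time proportional-hazards link) recovers it. The rate and sample size below are assumptions; the paper's flexible hazard class is richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 0.3
t = rng.exponential(1 / lam, 10_000)   # continuous failure times
k = np.floor(t).astype(int)            # observed only as unit-interval index

p_hat = np.mean(k == 0)                # discrete hazard of the first interval
naive = p_hat                          # treating p itself as the rate is biased low
lam_hat = -np.log(1 - p_hat)           # cloglog inversion: p = 1 - exp(-lam)
```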

7.
Bacchetti P  Quale C 《Biometrics》2002,58(2):443-447
We describe a method for extending smooth nonparametric modeling methods to time-to-event data where the event may be known only to lie within a window of time. Maximum penalized likelihood is used to fit a discrete proportional hazards model that also models the baseline hazard, and left-truncation and time-varying covariates are accommodated. The implementation follows generalized additive modeling conventions, allowing both parametric and smooth terms and specifying the amount of smoothness in terms of the effective degrees of freedom. We illustrate the method on a well-known interval-censored data set on time of human immunodeficiency virus infection in a multicenter study of hemophiliacs. The ability to examine time-varying covariates, not available with previous methods, allows detection and modeling of nonproportional hazards and use of a time-varying covariate that fits the data better and is more plausible than a fixed alternative.

8.
Prior investigations have examined steady-state flow in surface flow treatment wetlands, with mixing modeled as advection-dominated, and reaction calculated using flow-weighted averages over collections of stream tubes with different velocities. This work extends these concepts to non-steady flow conditions and temporally varying inlet concentrations. The essential construct that makes the approach feasible is definition of a set of reference (steady) state conditions under which the residence time distribution (RTD) and stream-tube specific rate constants are defined. Residence time in any stream tube under non-steady flow is treated as a linear function of its reference-condition residence time, and the overall wetland retention time under both mean and varying flow regimes. Outlet concentration is found by convolution of the reaction term with a varying inlet concentration function. For real-world flow and concentration data collected at discrete points in time, integration for outlet concentration is approximated using linear interpolation to generate inlet concentrations and velocities at intermediate points in time. The approach is examined using data from the literature. Vegetation density and depth distributions are seen as central in determining mixing and treatment performance.

9.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimations. Semi‐parametric choices allow for more flexible patterns but they can suffer from overfitting and instability. Regularization methods through prior distributions with correlated structures usually give reasonable answers to these types of situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B‐spline function. For those “semi‐parametric” proposals, different prior scenarios ranging from prior independence to particular correlated structures are discussed in a real study with microvirulence data and in an extensive simulation scenario that includes different data sample and time axis partition sizes in order to capture risk variations. The posterior distribution of the parameters was approximated using Markov chain Monte Carlo methods. Model selection was performed in accordance with the deviance information criteria and the log pseudo‐marginal likelihood. The results obtained reveal that, in general, Cox models present great robustness in covariate effects and survival estimates independent of the baseline hazard specification. In relation to the “semi‐parametric” baseline hazard specification, the B‐splines hazard function is less dependent on the regularization process than the piecewise specification because it demands a smaller time axis partition to estimate a similar behavior of the risk.

10.
Many time‐to‐event studies are complicated by the presence of competing risks and by nesting of individuals within a cluster, such as patients in the same center in a multicenter study. Several methods have been proposed for modeling the cumulative incidence function with independent observations. However, when subjects are clustered, one needs to account for the presence of a cluster effect either through frailty modeling of the hazard or subdistribution hazard, or by adjusting for the within‐cluster correlation in a marginal model. We propose a method for modeling the marginal cumulative incidence function directly. We compute leave‐one‐out pseudo‐observations from the cumulative incidence function at several time points. These are used in a generalized estimating equation to model the marginal cumulative incidence curve, and obtain consistent estimates of the model parameters. A sandwich variance estimator is derived to adjust for the within‐cluster correlation. The method is easy to implement using standard software once the pseudovalues are obtained, and is a generalization of several existing models. Simulation studies show that the method works well to adjust the SE for the within‐cluster correlation. We illustrate the method on a dataset looking at outcomes after bone marrow transplantation.
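The leave-one-out pseudo-observation construction can be sketched in the simplest uncensored case, where the cumulative incidence estimate is just an empirical proportion. Real applications substitute the Aalen-Johansen estimator; `pseudo_values` and its arguments are illustrative names.

```python
import numpy as np

def pseudo_values(times, causes, t0, cause=1):
    """Leave-one-out pseudo-observations of the cumulative incidence
    of `cause` at time t0.  Uncensored toy case: the CIF estimate is
    the empirical proportion with that cause's event by t0."""
    times, causes = np.asarray(times), np.asarray(causes)
    n = len(times)
    full = np.mean((times <= t0) & (causes == cause))
    pv = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        loo = np.mean((times[keep] <= t0) & (causes[keep] == cause))
        pv[i] = n * full - (n - 1) * loo   # pseudo-value for subject i
    return pv
```

The pseudo-values then replace the partially observed outcomes in a generalized estimating equation, with a sandwich variance accounting for cluster membership.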

11.
The purpose of many wildlife population studies is to estimate density, movement, or demographic parameters. Linking these parameters to covariates, such as habitat features, provides additional ecological insight and can be used to make predictions for management purposes. Line‐transect surveys, combined with distance sampling methods, are often used to estimate density at discrete points in time, whereas capture–recapture methods are used to estimate movement and other demographic parameters. Recently, open population spatial capture–recapture models have been developed, which simultaneously estimate density and demographic parameters, but have been made available only for data collected from a fixed array of detectors and have not incorporated the effects of habitat covariates. We developed a spatial capture–recapture model that can be applied to line‐transect survey data by modeling detection probability in a manner analogous to distance sampling. We extend this model to a) estimate demographic parameters using an open population framework and b) model variation in density and space use as a function of habitat covariates. The model is illustrated using simulated data and aerial line‐transect survey data for North Atlantic right whales in the southeastern United States, which also demonstrates the ability to integrate data from multiple survey platforms and accommodate differences between strata or demographic groups. When individuals detected from line‐transect surveys can be uniquely identified, our model can be used to simultaneously make inference on factors that influence spatial and temporal variation in density, movement, and population dynamics.

12.
Du P  Jiang Y  Wang Y 《Biometrics》2011,67(4):1330-1339
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data.

13.
A comparative study of four time-series EVI function-fitting methods based on a sample area in the Qinling Mountains
刘亚南  肖飞  杜耘 《生态学报》2016,36(15):4672-4679
Function-based curve fitting is an important method for reconstructing vegetation index time series and has been widely applied in monitoring forest area dynamics, crop yield estimation, extraction of phenological information from remote sensing, and ecosystem carbon cycle research. Based on multi-year MODIS EVI data and the accompanying quality-control data for a sample area in the Qinling Mountains, this study examines and improves methods for optimizing noise points during EVI time-series reconstruction and for evaluating fidelity to the original high-quality observations. On this basis, three commonly used fitting methods are compared: the asymmetric Gaussian (AG), double Logistic (DL), and single Logistic (SL) methods. Building on the SL method, the model form is adjusted and the meaning of the parameter d is redefined, yielding an extremum-optimized single Logistic method (MSL), which is compared with the other three. The results show that AG and DL differ little overall in optimizing noise points and retaining the original high-quality data, although AG fits some pixels better; MSL and SL are markedly more effective than AG and DL; and in mountainous areas with complex terrain and climate and noisier vegetation index series, MSL shows the best applicability.
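As an illustration, the single Logistic (SL) form can be fitted to a synthetic green-up curve with `scipy.optimize.curve_fit`. This uses one common SL parameterization; the values, noise level, and compositing interval are assumptions, and the MSL modification of the parameter d is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_logistic(t, base, amp, m, s):
    """One common SL form: EVI(t) = base + amp / (1 + exp(-(t - m)/s)),
    with midpoint m and transition scale s."""
    return base + amp / (1.0 + np.exp(-(t - m) / s))

t = np.arange(46.0)                            # e.g. 8-day composites over a season
true = single_logistic(t, 0.15, 0.40, 20.0, 3.0)
rng = np.random.default_rng(1)
evi = true + rng.normal(0.0, 0.02, t.size)     # noisy observations
popt, _ = curve_fit(single_logistic, t, evi, p0=[0.1, 0.5, 15.0, 5.0])
```

Noise-point optimization in the reconstruction methods above amounts to reweighting or replacing observations flagged by the quality-control data before a fit like this one.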

14.
A continuous time discrete state cumulative damage process {X(t), t ≥ 0} is considered, based on a non‐homogeneous Poisson hit‐count process and discrete distribution of damage per hit, which can be negative binomial, Neyman type A, Polya‐Aeppli or Lagrangian Poisson. Intensity functions considered for the Poisson process comprise a flexible three‐parameter family. The survival function is S(t) = P(X(t) ≤ L) where L is fixed. Individual variation is accounted for within the construction for the initial damage distribution {P(X(0) = x) | x = 0, 1, …}. This distribution has an essential cut‐off before x = L and the distribution of L − X(0) may be considered a tolerance distribution. A multivariate extension appropriate for the randomized complete block design is developed by constructing dependence in the initial damage distributions. Our multivariate model is applied (via maximum likelihood) to litter‐matched tumorigenesis data for rats. The litter effect accounts for 5.9 percent of the variance of the individual effect. Cumulative damage hazard functions are compared to nonparametric hazard functions and to hazard functions obtained from the PVF‐Weibull frailty model. The cumulative damage model has greater dimensionality for interpretation compared to other models, owing principally to the intensity function part of the model.

15.
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models.

16.
A number of imprinted genes have been observed in plants, animals and humans. They not only control growth and developmental traits, but may also be responsible for survival traits. Based on the Cox proportional hazards (PH) model, we constructed a general parametric model for dissecting genomic imprinting, in which a baseline hazard function is selectable for fitting the effects of imprinted quantitative trait loci (iQTL) genotypes on the survival curve. The expectation–maximisation (EM) algorithm is derived for solving the maximum likelihood estimates of iQTL parameters. The imprinting patterns of the detected iQTL are statistically tested under a series of null hypotheses. The Bayesian information criterion (BIC) model selection criterion is employed to choose an optimal baseline hazard function with maximum likelihood and parsimonious parameterisation. We applied the proposed approach to analyse the published data in an F2 population of mice and concluded that, among five commonly used survival distributions, the log-logistic distribution is the optimal baseline hazard function for the survival time of hyperoxic acute lung injury (HALI). Under this optimal model, five QTL were detected, among which four are imprinted in different imprinting patterns.

17.
18.
Cook RJ 《Biometrics》1999,55(3):915-920
Many chronic medical conditions can be meaningfully characterized in terms of a two-state stochastic process. Here we consider the problem in which subjects make transitions between two such states in continuous time but are only observed at discrete, irregularly spaced time points that are possibly unique to each subject. Data arising from such an observation scheme are called panel data, and methods for related analyses are typically based on Markov assumptions. The purpose of this article is to present a conditionally Markov model that accommodates subject-to-subject variation in the model parameters by the introduction of random effects. We focus on a particular random effects formulation that generates a closed-form expression for the marginal likelihood. The methodology is illustrated by application to a data set from a parasitic field infection survey.
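For a two-state continuous-time Markov chain, the transition probabilities over an arbitrary observation gap have a standard closed form; a panel-data likelihood multiplies these probabilities across each subject's irregular gaps. The rates `a` and `b` below are illustrative.

```python
import math

def two_state_probs(a, b, t):
    """Transition probability matrix P(t) for a two-state CTMC with
    rate a for 1->2 and rate b for 2->1 (standard closed form)."""
    s = a + b
    e = math.exp(-s * t)
    return [[(b + a * e) / s, (a - a * e) / s],
            [(b - b * e) / s, (a + b * e) / s]]
```

At t = 0 this is the identity; as t grows, both rows converge to the stationary distribution (b/(a+b), a/(a+b)), so widely spaced observations carry little information about the rates individually.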

19.
A method for fitting parametric models to apparently complex hazard rates in survival data is suggested. Hazard complexity may indicate competing causes of failure. A competing risks model is constructed on the assumption that a failure time can be considered as the first passage time of possibly several latent, stochastic processes competing in reaching a barrier. An additional assumption of independence between the hidden processes leads directly to a composite hazard function as the sum of the cause-specific hazards. We show how this composite hazard model based on Wiener processes can serve as a flexible tool for modelling complex hazards by varying the number of processes and their starting conditions. An example with real data is presented. Parameter estimation and model assessment are based on Markov chain Monte Carlo methods.

20.
The aim of the paper is to develop a procedure for estimating an analytical form of the hazard function for cancer patients. Although a deterministic approach based on cancer cell population dynamics yields the analytical expression, it depends on several parameters which must be estimated. On the other hand, a kernel estimate is an effective nonparametric method for estimating hazard functions; it provides a pointwise estimate of the hazard function. Our procedure consists of two steps: in the first step we find the kernel estimate of the hazard function, and in the second step the parameters in the deterministic model are obtained by the least squares method. A simulation study with different types of censoring is carried out, and the developed procedure is applied to real data.
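The two-step procedure (a kernel hazard estimate, then a least-squares fit of a parametric form) can be sketched for an uncensored toy case. The Gompertz form, bandwidth, grid, and sample sizes below are all assumptions; the paper's deterministic cancer-dynamics model is more elaborate than this.

```python
import numpy as np

def kernel_hazard(times, grid, bw):
    """Pointwise kernel hazard estimate h(t) = f_hat(t) / S_hat(t)
    (Gaussian kernel density over empirical survival; uncensored toy case)."""
    times = np.asarray(times, float)
    u = (grid[:, None] - times[None, :]) / bw
    f = np.exp(-0.5 * u**2).mean(axis=1) / (bw * np.sqrt(2 * np.pi))
    S = (times[None, :] > grid[:, None]).mean(axis=1)
    return f / np.maximum(S, 1e-3)

# Step 1: nonparametric (kernel) estimate on a grid.
rng = np.random.default_rng(0)
times = rng.exponential(2.0, 5000)        # true hazard is constant 0.5
grid = np.linspace(0.5, 2.5, 40)
h = kernel_hazard(times, grid, bw=0.25)

# Step 2: least-squares fit of a parametric form, here a Gompertz
# hazard h(t) = a * exp(b * t), linearized by regressing log h on t.
b_hat, log_a = np.polyfit(grid, np.log(h), 1)
a_hat = np.exp(log_a)                      # roughly 0.5, with b_hat near 0
```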

