Similar Literature (20 results)
1.
In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an overestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
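As a rough illustration of the kind of adjustment the study describes, the sketch below integrates a tag-life survival curve over a travel-time distribution and divides the unadjusted survival estimate by the resulting tag-survival probability. The Weibull tag-life model, the travel times, and the 0.10 naive estimate are all invented for illustration and are not the VAMP 2008 values.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical tag-life model fitted to bench-test failure times (hours).
tag_life = weibull_min(c=3.0, scale=500.0)  # shape and scale are assumptions

# Observed (or historical) travel times from release to the downstream
# detection site, in hours -- placeholder values for illustration.
travel_times = np.array([120.0, 180.0, 240.0, 300.0, 410.0, 520.0])

# Probability that a tag is still transmitting when its fish reaches the
# detection site, averaged over the travel-time distribution.
p_tag_alive = tag_life.sf(travel_times).mean()

# Naive (tag-failure-confounded) survival estimate, e.g. from a CJS model.
s_observed = 0.10

# Bias-adjusted fish survival: divide out the tag-survival probability.
s_adjusted = s_observed / p_tag_alive
print(f"P(tag alive at detection) = {p_tag_alive:.3f}")
print(f"Adjusted fish survival    = {s_adjusted:.3f}")
```

If the travel times themselves are truncated by tag failure, `p_tag_alive` comes out too high and the adjustment is too small, which is the failure mode described above.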

2.
The accelerated failure time regression model is most commonly used with right-censored survival data. This report studies the use of a Weibull-based accelerated failure time regression model when left- and interval-censored data are also observed. Two alternative methods of analysis are considered. First, the maximum likelihood estimates (MLEs) for the observed censoring pattern are computed. These are compared with estimates where midpoints are substituted for left- and interval-censored data (midpoint estimator, or MDE). Simulation studies indicate that for relatively large samples there are many instances when the MLE is superior to the MDE. For samples where the hazard rate is flat or nearly so, or where the percentage of interval-censored data is small, the MDE is adequate. An example using Framingham Heart Study data is discussed.
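A minimal sketch of the likelihood in question: each left-, right-, or interval-censored observation contributes the Weibull probability mass over its interval, while exact failures contribute the density. The toy data, the log-scale AFT parameterization, and the use of `scipy.optimize` are assumptions for illustration, not the report's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Toy data: one covariate x, and observation intervals (lower, upper).
# Exact failures have lower == upper, right-censoring has upper = inf,
# left-censoring has lower = 0.  All values are made up.
x = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
intervals = np.array([[2.0, 2.0], [1.5, 4.0], [0.0, 3.0], [5.0, np.inf], [3.0, 3.0]])

def neg_log_lik(params):
    log_shape, b0, b1 = params
    shape = np.exp(log_shape)
    scale = np.exp(b0 + b1 * x)          # AFT: covariates act on the time scale
    dist = weibull_min(c=shape, scale=scale)
    lo, hi = intervals[:, 0], intervals[:, 1]
    exact = lo == hi
    ll = np.where(
        exact,
        dist.logpdf(np.clip(lo, 1e-12, None)),                       # exact failures
        np.log(np.clip(dist.cdf(hi) - dist.cdf(lo), 1e-300, None)),  # censored spans
    )
    return -ll.sum()

fit = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
print("shape:", np.exp(fit.x[0]), "intercept:", fit.x[1], "slope:", fit.x[2])
```

The midpoint estimator (MDE) discussed in the report would instead collapse each finite interval to its midpoint and treat it as an exact failure time before fitting.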

3.
In certain life experiments it may happen that the failure rate changes after some initial period of time. This means that the failure rate during a certain interval of time and that thereafter will differ significantly. This suggests a failure model in which one failure rate is assumed to be operative during an initial period of time, say [0, T], and another failure rate is operative thereafter, that is, after time T. In this paper, Bayesian point estimates of these failure rates are obtained when the underlying probability law is exponential and T is known.
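Under an exponential model with a known change point T, the likelihood factors into the two time segments, so with independent gamma priors and squared-error loss the Bayes estimates of the two failure rates have closed forms. The sketch below assumes Gamma(a, b) priors and made-up life-test data; the paper's own prior choices may differ.

```python
import numpy as np

# Hypothetical life-test data: failure (or censoring) times and event flags.
times  = np.array([0.4, 1.1, 2.5, 3.0, 3.0, 0.9, 2.2])   # 3.0 = censored at study end
events = np.array([1,   1,   1,   0,   0,   1,   1  ])
T = 1.5
a1, b1 = 1.0, 1.0       # assumed prior for the early rate lambda_1
a2, b2 = 1.0, 1.0       # assumed prior for the late  rate lambda_2

# Exposure (time at risk) and failure counts in [0, T] and after T.
exp1 = np.minimum(times, T).sum()
exp2 = np.clip(times - T, 0.0, None).sum()
d1 = int(((times <= T) & (events == 1)).sum())
d2 = int(((times >  T) & (events == 1)).sum())

# Gamma-exponential conjugacy: posterior means under squared-error loss.
lambda1_hat = (a1 + d1) / (b1 + exp1)
lambda2_hat = (a2 + d2) / (b2 + exp2)
print(lambda1_hat, lambda2_hat)
```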

4.
Sun L, Kim YJ, Sun J. Biometrics 2004, 60(3):637-643
Doubly censored failure time data arise when the survival time of interest is the elapsed time between two related events and observations on occurrences of both events could be censored. Regression analysis of doubly censored data has recently attracted considerable attention and for this a few methods have been proposed (Kim et al., 1993, Biometrics 49, 13-22; Sun et al., 1999, Biometrics 55, 909-914; Pan, 2001, Biometrics 57, 1245-1250). However, all of these methods are based on the proportional hazards model, and it is well known that the proportional hazards model does not always fit failure time data well. This article investigates regression analysis of such data using the additive hazards model and an estimating equation approach is proposed for inference about the regression parameters of interest. The proposed method can be easily implemented and the properties of the proposed estimates of regression parameters are established. The method is applied to a set of doubly censored data from an AIDS cohort study.

5.
Sun J, Liao Q, Pagano M. Biometrics 1999, 55(3):909-914
In many epidemiological studies, the survival time of interest is the elapsed time between two related events, the originating event and the failure event, and the times of the occurrences of both events are right or interval censored. We discuss the regression analysis of such studies and a simple estimating equation approach is proposed under the proportional hazards model. The method can easily be implemented and does not involve any iteration among unknown parameters, as full likelihood approaches proposed in the literature do. The asymptotic properties of the proposed regression coefficient estimates are derived and an AIDS cohort study is analyzed to illustrate the proposed approach.

6.
7.
Naskar M, Das K, Ibrahim JG. Biometrics 2005, 61(3):729-737
A very general class of multivariate life distributions is considered for analyzing failure time clustered data that are subject to censoring and multiple modes of failure. Conditional on cluster-specific quantities, the joint distribution of the failure time and event indicator can be expressed as a mixture of the distribution of time to failure due to a certain type (or specific cause), and the failure type distribution. We assume here the marginal probabilities of various failure types are logistic functions of some covariates. The cluster-specific quantities are subject to some unknown distribution that causes frailty. The unknown frailty distribution is modeled nonparametrically using a Dirichlet process. In such a semiparametric setup, a hybrid method of estimation is proposed based on the i.i.d. Weighted Chinese Restaurant algorithm that helps us generate observations from the predictive distribution of the frailty. The Monte Carlo ECM algorithm plays a vital role for obtaining the estimates of the parameters that assess the extent of the effects of the causal factors for failures of a certain type. A simulation study is conducted to study the consistency of our methodology. The proposed methodology is used to analyze a real data set on HIV infection of a cohort of female prostitutes in Senegal.

8.
Once hunted to the brink of extinction, humpback whales (Megaptera novaeangliae) in the North Atlantic have recently been increasing in numbers. However, uncertain information on past abundance makes it difficult to assess the extent of the recovery in this species. While estimates of pre-exploitation abundance based upon catch data suggest the population might be approaching pre-whaling numbers, estimates based on mtDNA genetic diversity suggest they are still only a fraction of their past abundance levels. The difference between the two estimates could be accounted for by inaccuracies in the catch record, by uncertainties surrounding the genetic estimate, or by differences in the timescale to which the two estimates apply. Here we report an estimate of long-term population size based on nuclear gene diversity. We increase the reliability of our genetic estimate by increasing the number of loci, incorporating uncertainty in each parameter and increasing sampling across the geographic range. We report an estimate of long-term population size in the North Atlantic humpback of ~112,000 individuals (95 % CI 45,000–235,000). This value is 2–3 fold higher than estimates based upon catch data. This persistent difference between estimates parallels difficulties encountered by population models in explaining the historical crash of North Atlantic humpback whales. The remaining discrepancy between genetic and catch-record values, and the failure of population models, highlights a need for continued evaluation of whale population growth and shifts over time, and continued caution about changing the conservation status of this population.

9.
Oleson JJ, He CZ. Biometrics 2004, 60(1):50-59
Sampling units that do not answer a survey may dramatically affect the estimation results of interest. The response may even be conditional on the outcome of interest in the survey. If estimates are found using only those who responded, the estimates may be biased; this is known as nonresponse bias. We are interested in finding estimates of success rates from a survey. We begin by looking at two current Bayesian approaches to treating nonresponse in a hierarchical model. However, these approaches do not consider possible spatial correlations between domains for either success rate or response rate. We build a Bayesian hierarchical spatial model to explicitly estimate the success rate, response rate given success, and response rate given failure. The success rates in the domains of the survey are allowed to be spatially correlated. We also allow spatial dependence between domains in both response rate given success and response rate given failure. Spatial dependence is induced by a common latent spatial structure between the two conditional response rates. We use the 1998 Missouri Turkey Hunting Survey to illustrate this methodology. We find significant spatial correlation in the success rates, and incorporating nonrespondents has an impact on the success rate estimates.

10.
The methods of Manly (1973), Manly (1975) and Manly (1977) for estimating survival rates and relative survival rates from recapture data have been compared by computer simulation. In the simulations, batches of two types of animal were “released” at one point in “time” and recapture samples were taken at “daily” intervals from then on. The various methods of estimation were then used to estimate the daily survival rates of type 1 and type 2 animals, and also the survival rate of the type 2 animals relative to the type 1 animals. Simulation experiments were designed to examine (a) the bias in estimates, (b) the relative precision of different methods of estimation, (c) the validity of confidence intervals for true parameter values, and (d) the effect on estimates of the failure of certain assumptions.

11.
This work is motivated by clinical trials in chronic heart failure, where treatment has effects both on morbidity (assessed as recurrent non‐fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, more often clinical trial results are presented by treatment effect estimates that have been derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen–Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes, when these are actually linked by shared or dependent frailty terms. First we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to, or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated by the results of the CHARM‐Preserved trial (where CHARM is the ‘Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity’ programme).

12.
Patients who have undergone renal transplantation are monitored longitudinally at irregular time intervals over 10 years or more. This yields a set of biochemical and physiological markers containing valuable information to anticipate a failure of the graft. A general linear, generalized linear, or nonlinear mixed model is used to describe the longitudinal profile of each marker. To account for the correlation between markers, the univariate mixed models are combined into a multivariate mixed model (MMM) by specifying a joint distribution for the random effects. Due to the high number of markers, a pairwise modeling strategy, where all possible pairs of bivariate mixed models are fitted, is used to obtain parameter estimates for the MMM. These estimates are used in a Bayes rule to obtain, at each point in time, the prognosis for long-term success of the transplant. It is shown that allowing the markers to be correlated can improve this prognosis.

13.
This study compared spontaneous baroreflex sensitivity (BRS) estimates obtained from an identical set of data by 11 European centers using different methods and procedures. Noninvasive blood pressure (BP) and ECG recordings were obtained in 21 subjects, including 2 subjects with established baroreflex failure. Twenty-one estimates of BRS were obtained, covering the two main techniques of BRS estimation, spectral analysis (11 procedures) and the sequence method (7 procedures), as well as one trigonometric regressive spectral analysis method (TRS), one exogenous model with autoregressive input method (X-AR), and one Z method. With subjects in a supine position, BRS estimates obtained with calculations of the alpha-coefficient or the gain of the transfer function in either the low-frequency or the high-frequency band, TRS, and sequence methods gave strongly related results. Conversely, weighted gain, X-AR, and Z exhibited lower agreement with all the other techniques. In addition, the use of mean BP instead of systolic BP in the sequence method decreased the relationships with the other estimates. Some procedures were unable to provide results when BRS estimates were expected to be very low in data sets (in patients with established baroreflex failure). The failure to provide BRS values was due to algorithmic parameters being set too strictly. The discrepancies between procedures show that the choice of parameters and data handling should be considered before BRS estimation. These data are available on the web site (http://www.cbi.polimi.it/glossary/eurobavar.html) to allow the comparison of new techniques with this set of results.
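For reference, one of the two main families of techniques compared here, the sequence method, can be sketched in a few lines: locate runs of at least three consecutive beats in which systolic BP and RR interval change in the same direction, regress RR on SBP within each run, and average the slopes. The beat-to-beat thresholds (1 mmHg, 5 ms) and the zero-lag pairing used below are common choices, not those of any specific EuroBaVar procedure.

```python
import numpy as np

def sequence_brs(sbp, rri, min_len=3, dp=1.0, drr=5.0):
    """Spontaneous BRS (ms/mmHg) by the sequence method: mean slope of
    RR-on-SBP regressions over runs where both signals move together."""
    slopes = []
    i, n = 0, len(sbp)
    while i < n - 1:
        direction = np.sign(sbp[i + 1] - sbp[i])
        j = i
        # Extend the run while SBP and RR keep changing in the same direction
        # by more than the beat-to-beat thresholds dp (mmHg) and drr (ms).
        while (j < n - 1
               and np.sign(sbp[j + 1] - sbp[j]) == direction
               and abs(sbp[j + 1] - sbp[j]) >= dp
               and np.sign(rri[j + 1] - rri[j]) == direction
               and abs(rri[j + 1] - rri[j]) >= drr):
            j += 1
        if j - i + 1 >= min_len:
            slopes.append(np.polyfit(sbp[i:j + 1], rri[i:j + 1], 1)[0])
        i = max(j, i + 1)
    return np.mean(slopes) if slopes else np.nan

# Toy beat-to-beat series (SBP in mmHg, RR intervals in ms).
sbp = np.array([118, 120, 123, 126, 124, 121, 118, 119, 122], float)
rri = np.array([840, 850, 862, 875, 868, 855, 842, 845, 858], float)
print(sequence_brs(sbp, rri))
```

Procedures differ mainly in these thresholds and in the SBP–RR lag, which is one source of the parameter- and data-handling discrepancies reported above.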

14.
Semiparametric models for cumulative incidence functions
Bryant J, Dignam JJ. Biometrics 2004, 60(1):182-190
In analyses of time-to-failure data with competing risks, cumulative incidence functions may be used to estimate the time-dependent cumulative probability of failure due to specific causes. These functions are commonly estimated using nonparametric methods, but in cases where events due to the cause of primary interest are infrequent relative to other modes of failure, nonparametric methods may result in rather imprecise estimates for the corresponding subdistribution. In such cases, it may be possible to model the cause-specific hazard of primary interest parametrically, while accounting for the other modes of failure using nonparametric estimators. The cumulative incidence estimators so obtained are simple to compute and are considerably more efficient than the usual nonparametric estimator, particularly with regard to interpolation of cumulative incidence at early or intermediate time points within the range of data used to fit the function. More surprisingly, they are often nearly as efficient as fully parametric estimators. We illustrate the utility of this approach in the analysis of patients treated for early stage breast cancer.
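The "usual nonparametric estimator" referred to here is the Aalen–Johansen-type cumulative incidence estimator, which accumulates, at each event time, the all-cause survival just before that time multiplied by the cause-specific hazard increment. A minimal sketch with invented, untied event times:

```python
import numpy as np

# Toy competing-risks data: event times and causes (0 = censored,
# 1 = cause of primary interest, 2 = other cause).  Values are illustrative.
times  = np.array([2.0, 3.0, 3.5, 4.5, 5.0, 6.0, 7.5, 8.0])
causes = np.array([1,   2,   0,   1,   2,   0,   1,   0  ])

order = np.argsort(times)
times, causes = times[order], causes[order]

n_at_risk = len(times)
surv = 1.0            # Kaplan-Meier estimate of overall (all-cause) survival
cif1 = 0.0            # cumulative incidence of cause 1
for t, c in zip(times, causes):
    if c == 1:
        cif1 += surv / n_at_risk        # S(t-) * dN_1(t) / Y(t)
    if c != 0:
        surv *= 1.0 - 1.0 / n_at_risk   # all-cause Kaplan-Meier step
    n_at_risk -= 1

print("CIF for cause 1 at the last event time:", cif1)
```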

15.
Yin G, Cai J. Biometrics 2005, 61(1):151-161
As an alternative to the mean regression model, the quantile regression model has been studied extensively with independent failure time data. However, due to natural or artificial clustering, it is common to encounter multivariate failure time data in biomedical research where the intracluster correlation needs to be accounted for appropriately. For right-censored correlated survival data, we investigate the quantile regression model and adapt an estimating equation approach for parameter estimation under the working independence assumption, as well as a weighted version for enhancing the efficiency. We show that the parameter estimates are consistent and asymptotically follow normal distributions. The variance estimation using asymptotic approximation involves nonparametric functional density estimation. We employ the bootstrap and perturbation resampling methods for the estimation of the variance-covariance matrix. We examine the proposed method for finite sample sizes through simulation studies, and illustrate it with data from a clinical trial on otitis media.

16.
Species abundance data are critical for testing ecological theory, but obtaining accurate empirical estimates for many taxa is challenging. Proxies for species abundance can help researchers circumvent time and cost constraints that are prohibitive for long‐term sampling. Under simple demographic models, genetic diversity is expected to correlate with census size, such that genome‐wide heterozygosity may provide a surrogate measure of species abundance. We tested whether nucleotide diversity is correlated with long‐term estimates of abundance, occupancy and degree of ecological specialization in a diverse lizard community from arid Australia. Using targeted sequence capture, we obtained estimates of genomic diversity from 30 species of lizards, recovering an average of 5,066 loci covering 3.6 Mb of DNA sequence per individual. We compared measures of individual heterozygosity to a metric of habitat specialization to investigate whether ecological preference exerts a measurable effect on genetic diversity. We find that heterozygosity is significantly correlated with species abundance and occupancy, but not habitat specialization. Demonstrating the power of genomic sampling, the correlation between heterozygosity and abundance/occupancy emerged from considering just one or two individuals per species. However, genetic diversity does no better at predicting abundance than a single day of traditional sampling in this community. We conclude that genetic diversity is a useful proxy for regional‐scale species abundance and occupancy, but a large amount of unexplained variation in heterozygosity suggests additional constraints or a failure of ecological sampling to adequately capture variation in true population size.

17.
Parker CB, Delong ER. Biometrics 2000, 56(4):996-1001
Changes in maximum likelihood parameter estimates due to deletion of individual observations are useful statistics, both for regression diagnostics and for computing robust estimates of covariance. For many likelihoods, including those in the exponential family, these delete-one statistics can be approximated analytically from a one-step Newton-Raphson iteration on the full maximum likelihood solution. But for general conditional likelihoods and the related Cox partial likelihood, the one-step method does not reduce to an analytic solution. For these likelihoods, an alternative analytic approximation that relies on an appropriately augmented design matrix has been proposed. In this paper, we extend the augmentation approach to explicitly deal with discrete failure-time models. In these models, an individual subject may contribute information at several time points, thereby appearing in multiple risk sets before eventually experiencing a failure or being censored. Our extension also allows the covariates to be time dependent. The new augmentation requires no additional computational resources while improving results.

18.
Matsuura M, Eguchi S. Biometrics 2005, 61(2):559-566
In a failure time analysis, we sometimes observe additional study subjects who enter during the study period. These late entries are treated as left-truncated data in the statistical literature. However, with real data, there is a substantial possibility that the delayed entries may have extremely different hazards compared to the other standard subjects. We focus on a situation in which such entry bias might arise in the analysis of survival data. The purpose of the present article is to develop an appropriate methodology for making inference about data including late entries. We construct a model that includes parameters for the effect of delayed entry bias having no specification for the distribution of entry time. We also discuss likelihood inference based on this model and derive the asymptotic behavior of estimates. A simulation study is conducted for a finite sample size in order to compare the analysis results using our method with those using the standard method, where independence between entry time and failure time is assumed. We apply this method to mortality analysis among atomic bomb survivors defined in a geographical study region.

19.
Pan W. Biometrics 2000, 56(1):199-203
We propose a general semiparametric method based on multiple imputation for Cox regression with interval-censored data. The method consists of iterating the following two steps. First, from finite-interval-censored (but not right-censored) data, exact failure times are imputed using Tanner and Wei's poor man's or asymptotic normal data augmentation scheme based on the current estimates of the regression coefficient and the baseline survival curve. Second, a standard statistical procedure for right-censored data, such as the Cox partial likelihood method, is applied to imputed data to update the estimates. Through simulation, we demonstrate that the resulting estimate of the regression coefficient and its associated standard error provide a promising alternative to the nonparametric maximum likelihood estimate. Our proposal is easily implemented by taking advantage of existing computer programs for right-censored data.
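A stripped-down sketch of the impute-then-refit loop, using lifelines' `CoxPHFitter` for the right-censored step. For brevity, a uniform draw within each finite interval stands in for Tanner and Wei's data-augmentation schemes, so this is a simplification of the proposed method rather than a reimplementation; the data and the choice of lifelines are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)

# Toy interval-censored data: the event is known to lie in (left, right];
# right = inf marks right-censoring.  One covariate x.
df = pd.DataFrame({
    "left":  [1.0, 2.0, 0.5, 3.0,    1.5, 2.5,    0.8, 4.0, 1.2,    2.2],
    "right": [2.0, 4.0, 1.5, np.inf, 3.0, np.inf, 2.0, 6.0, np.inf, 3.5],
    "x":     [0,   1,   0,   1,      1,   0,      0,   1,   1,      0],
})

betas = []
for m in range(20):                       # number of imputations / iterations
    imp = df.copy()
    finite = np.isfinite(imp["right"])
    # Impute an exact failure time inside each finite interval (uniform draw
    # here; the paper uses Tanner and Wei's augmentation schemes instead).
    imp.loc[finite, "time"] = rng.uniform(imp.loc[finite, "left"],
                                          imp.loc[finite, "right"])
    imp.loc[~finite, "time"] = imp.loc[~finite, "left"]
    imp["event"] = finite.astype(int)

    # Standard right-censored Cox fit on the imputed data set.
    cph = CoxPHFitter()
    cph.fit(imp[["time", "event", "x"]], duration_col="time", event_col="event")
    betas.append(cph.params_["x"])

print("pooled log hazard ratio for x:", np.mean(betas))
```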

20.
Measurement error and estimates of population extinction risk
It is common to estimate the extinction probability for a vulnerable population using methods that are based on the mean and variance of the long‐term population growth rate. The numerical values of these two parameters are estimated from time series of population censuses. However, the proportion of a population that is registered at each census is typically not constant but will vary among years because of stochastic factors such as weather conditions at the time of sampling. Here, we analyse how such sampling errors influence estimates of extinction risk and find sampling errors to produce two opposite effects. Measurement errors lead to an exaggerated overall variance, but also introduce negative autocorrelations in the time series (which means that estimates of annual growth rates tend to alternate in size). If time series data are treated properly, these two effects exactly counterbalance. We advocate routinely incorporating a measure of among-year correlations in estimating population extinction risk.
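The two opposite effects can be seen in a short simulation: adding lognormal observation error to a census series inflates the variance of the annual log growth rates by roughly twice the error variance while pushing their lag-1 autocovariance toward minus the error variance. All parameter values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a true trajectory (stochastic exponential growth) and an observed
# series with lognormal sampling error -- all parameters are assumptions.
years, mu, sigma, obs_sd = 40, 0.02, 0.10, 0.20
log_n_true = np.cumsum(rng.normal(mu, sigma, years)) + np.log(500.0)
log_n_obs = log_n_true + rng.normal(0.0, obs_sd, years)

r_true = np.diff(log_n_true)          # true annual log growth rates
r_obs = np.diff(log_n_obs)            # observed annual log growth rates

def lag1_corr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("var(true r):", r_true.var(ddof=1), " var(obs r):", r_obs.var(ddof=1))
print("lag-1 autocorrelation of observed r:", lag1_corr(r_obs))
# Observation error adds about 2*obs_sd**2 to the variance of r and drives the
# lag-1 autocovariance toward -obs_sd**2; ignoring the error exaggerates
# extinction risk, while treating it properly lets the two biases cancel.
```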
