Similar Documents
20 similar documents retrieved.
1.
It is not uncommon to encounter a randomized clinical trial (RCT) in which there are confounders that need to be controlled for and patients who do not comply with their assigned treatments. In this paper, we concentrate our attention on interval estimation of the proportion ratio (PR) of probabilities of response between two treatments in a stratified noncompliance RCT. We develop and consider five asymptotic interval estimators for the PR: the interval estimator using the weighted least-squares (WLS) estimator, the interval estimator using the Mantel-Haenszel type of weight, the interval estimator derived from Fieller's theorem with the corresponding WLS optimal weight, the interval estimator derived from Fieller's theorem with the randomization-based optimal weight, and the interval estimator based on a stratified two-sample proportion test with the optimal weight suggested elsewhere. To evaluate and compare the finite-sample performance of these estimators, we apply Monte Carlo simulation to calculate the coverage probability and average length in a variety of situations. We discuss the limitations and usefulness of each of these interval estimators, and provide a general guideline on which estimators may be used in various situations.
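The Fieller-type intervals in entry 1 invert a quadratic inequality in the ratio. A minimal single-stratum sketch, assuming independent binomial arms and a normal approximation (the function name and 95% z-value are ours, not the paper's; the paper's stratified weighted versions are more involved):

```python
import math

def fieller_ci_ratio(x1, n1, x2, n2, z=1.959964):
    """Approximate CI for the proportion ratio p1/p2 via Fieller's theorem.

    Treats p-hat-1 and p-hat-2 as independent normals; the interval is a
    bounded interval only when the denominator proportion is estimated
    significantly away from zero (leading coefficient a > 0)."""
    p1, p2 = x1 / n1, x2 / n2
    v1 = p1 * (1 - p1) / n1
    v2 = p2 * (1 - p2) / n2
    # (p1 - t*p2)^2 <= z^2 (v1 + t^2 v2)  rearranged as a*t^2 + b*t + c <= 0
    a = p2 ** 2 - z ** 2 * v2
    b = -2 * p1 * p2
    c = p1 ** 2 - z ** 2 * v1
    disc = b ** 2 - 4 * a * c
    if a <= 0 or disc < 0:
        raise ValueError("Fieller interval is unbounded for these data")
    lo = (-b - math.sqrt(disc)) / (2 * a)
    hi = (-b + math.sqrt(disc)) / (2 * a)
    return lo, hi
```

For 30/100 versus 20/100 responders this brackets the observed ratio 1.5 with a positive lower limit.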

2.
A cause-specific cumulative incidence function (CIF) is the probability of failure from a specific cause as a function of time. In randomized trials, a difference of cause-specific CIFs (treatment minus control) represents a treatment effect. Cause-specific CIF in each intervention arm can be estimated based on the usual non-parametric Aalen–Johansen estimator which generalizes the Kaplan–Meier estimator of CIF in the presence of competing risks. Under random censoring, asymptotically valid Wald-type confidence intervals (CIs) for a difference of cause-specific CIFs at a specific time point can be constructed using one of the published variance estimators. Unfortunately, these intervals can suffer from substantial under-coverage when the outcome of interest is a rare event, as may be the case for example in the analysis of uncommon adverse events. We propose two new approximate interval estimators for a difference of cause-specific CIFs estimated in the presence of competing risks and random censoring. Theoretical analysis and simulations indicate that the new interval estimators are superior to the Wald CIs in the sense of avoiding substantial under-coverage with rare events, while being equivalent to the Wald CIs asymptotically. In the absence of censoring, one of the two proposed interval estimators reduces to the well-known Agresti–Caffo CI for a difference of two binomial parameters. The new methods can be easily implemented with any software package producing point and variance estimates for the Aalen–Johansen estimator, as illustrated in a real data example.
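Entry 2 notes that, without censoring, one proposed interval reduces to the Agresti–Caffo CI for a difference of two binomial proportions. That interval simply adds one success and one failure to each arm before forming the usual Wald interval; a sketch (naming and the 95% z-value are ours):

```python
import math

def agresti_caffo_ci(x1, n1, x2, n2, z=1.959964):
    """Agresti-Caffo CI for p1 - p2: add one success and one failure
    to each arm, then apply the Wald formula to the adjusted counts."""
    p1 = (x1 + 1) / (n1 + 2)
    p2 = (x2 + 1) / (n2 + 2)
    se = math.sqrt(p1 * (1 - p1) / (n1 + 2) + p2 * (1 - p2) / (n2 + 2))
    d = p1 - p2
    return d - z * se, d + z * se
```

The adjustment is what rescues coverage with rare events (e.g. 1/50 vs 0/50), where the unadjusted Wald interval can degenerate.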

3.
The relative risk (RR) is one of the most frequently used indices to measure the strength of association between a disease and a risk factor in etiological studies or the efficacy of an experimental treatment in clinical trials. In this paper, we concentrate attention on interval estimation of RR for sparse data, in which we have only a few patients per stratum, but a moderate or large number of strata. We consider five asymptotic interval estimators for RR, including a weighted least-squares (WLS) interval estimator with an ad hoc adjustment procedure for sparse data, an interval estimator proposed elsewhere for rare events, an interval estimator based on the Mantel-Haenszel (MH) estimator with a logarithmic transformation, an interval estimator calculated from a quadratic equation, and an interval estimator derived from the ratio estimator with a logarithmic transformation. On the basis of Monte Carlo simulations, we evaluate and compare the performance of these five interval estimators in a variety of situations. We note that, except for the cases in which the underlying common RR across strata is around 1, using the WLS interval estimator with the adjustment procedure for sparse data can be misleading. We note further that using the interval estimator suggested elsewhere for rare events tends to be conservative and hence leads to loss of efficiency. We find that the other three interval estimators can consistently perform well even when the mean number of patients for a given treatment is approximately 3 patients per stratum and the number of strata is as small as 20. Finally, we use a mortality data set comparing two chemotherapy treatments in patients with multiple myeloma to illustrate the use of the estimators discussed in this paper.
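Entry 3's third estimator combines the Mantel-Haenszel RR with a logarithmic transformation. A sketch using the dually consistent Greenland–Robins log-scale variance, which is one standard choice for sparse strata (the paper may use a different variance; the data layout below is our convention):

```python
import math

def mh_rr_ci(strata, z=1.959964):
    """Mantel-Haenszel common risk ratio across strata with a
    Greenland-Robins-type variance on the log scale.

    Each stratum is (a, n1, c, n0): exposed cases / exposed total,
    unexposed cases / unexposed total."""
    num = den = var_num = 0.0
    for a, n1, c, n0 in strata:
        N = n1 + n0
        num += a * n0 / N
        den += c * n1 / N
        var_num += (n1 * n0 * (a + c) - a * c * N) / N ** 2
    rr = num / den
    se = math.sqrt(var_num / (num * den))   # SE of log(rr)
    return rr, (rr * math.exp(-z * se), rr * math.exp(z * se))
```

Because the weights sum over strata before dividing, the estimator stays stable with few patients per stratum, which is exactly the sparse-data setting of this abstract.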

4.
Guan Y. Biometrics 2011, 67(3):926-936
We introduce novel regression-extrapolation-based methods to correct the often large bias in subsampling variance estimation as well as hypothesis testing for spatial point and marked point processes. For variance estimation, our proposed estimators are linear combinations of the usual subsampling variance estimator based on subblock sizes in a continuous interval. We show that they can achieve better rates in mean squared error than the usual subsampling variance estimator. In particular, for n×n observation windows, the optimal rate of n^{-2} can be achieved if the data have a finite dependence range. For hypothesis testing, we apply the proposed regression extrapolation directly to the test statistics based on different subblock sizes, and therefore avoid the need to conduct bias correction for each element in the covariance matrix used to set up the test statistics. We assess the numerical performance of the proposed methods through simulation, and apply them to analyze a tropical forest data set.

5.
G. Asteris, S. Sarkar. Genetics 1996, 142(1):313-326
Bayesian procedures are developed for estimating mutation rates from fluctuation experiments. Three Bayesian point estimators are compared with four traditional ones using the results of 10,000 simulated experiments. The Bayesian estimators were found to be at least as efficient as the best of the previously known estimators. The best Bayesian estimator is one that uses 1/m^2 as the prior probability density function and a quadratic loss function. The advantage of using these estimators is most pronounced when the number of fluctuation test tubes is small. Bayesian estimation allows the incorporation of prior knowledge about the estimated parameter, in which case the resulting estimators are the most efficient. It enables the straightforward construction of confidence intervals for the estimated parameter. The increase of efficiency with prior information and the narrowing of the confidence intervals with additional experimental results are investigated. The results of the simulations show that any potential inaccuracy of estimation arising from lumping together all cultures with more than n mutants (the jackpots) almost disappears at n = 70 (provided that the number of mutations in a culture is low). These methods are applied to a set of experimental data to illustrate their use.

6.
In this article, we provide a method of estimation for the treatment effect in the adaptive design for censored survival data with or without adjusting for risk factors other than the treatment indicator. Within the semiparametric Cox proportional hazards model, we propose a bias-adjusted parameter estimator for the treatment coefficient and its asymptotic confidence interval at the end of the trial. The method for obtaining an asymptotic confidence interval and point estimator is based on a general distribution property of the final test statistic from the weighted linear rank statistics at the interims with or without considering the nuisance covariates. The computation of the estimates is straightforward. Extensive simulation studies show that the asymptotic confidence intervals have reasonable nominal probability of coverage, and the proposed point estimators are nearly unbiased with practical sample sizes.

7.
We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators.

8.
J. O'Quigley. Biometrics 1992, 48(3):853-862
The problem of point and interval estimation following a Phase I trial, carried out according to the scheme outlined by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48), is investigated. A reparametrization of the model suggested in this earlier work can be seen to be advantageous in some circumstances. Maximum likelihood estimators, Bayesian estimators, and one-step estimators are considered. The continual reassessment method imposes restrictions on the sample space such that it is not possible for confidence intervals to achieve exact coverage properties, however large a sample is taken. Nonetheless, our simulations, based on a small finite sample of 20, not atypical in studies of this type, indicate that the calculated intervals are useful in most practical cases and achieve coverage very close to nominal levels in a very wide range of situations. The relative merits of the different estimators and their associated confidence intervals, viewed from a frequentist perspective, are discussed.

9.
The estimation of an evoked potential from an ensemble of observations is considered from the statistical point of view. In comparing different estimators it appears that the common mean square loss function is inappropriate in evoked potential studies. A modified loss function is proposed which is insensitive to overall scaling factors in the estimate. Using that loss function the properties of different estimators are explored and lines are indicated along which the estimation can be improved. Simulation examples illustrate the theoretical concepts. It is concluded that it is usually possible to improve on averaging, although comparison of the results of more advanced techniques to those of averaging always remains essential to avoid possibly erroneous interpretations.

10.
This paper discusses interval estimation of the simple difference (SD) between the proportions of the primary infection and the secondary infection, given the primary infection, by developing three asymptotic interval estimators using Wald's test statistic, the likelihood-ratio test, and the basic principle of Fieller's theorem. This paper further evaluates and compares the performance of these interval estimators with respect to the coverage probability and the expected length of the resulting confidence intervals. This paper finds that the asymptotic confidence interval using the likelihood ratio test consistently performs well in all situations considered here. When the underlying SD is within 0.10 and the total number of subjects is not large (say, 50), this paper further finds that the interval estimators using Fieller's theorem would be preferable to the estimator using the Wald's test statistic if the primary infection probability were moderate (say, 0.30), but the latter is preferable to the former if this probability were large (say, 0.80). When the total number of subjects is large (say, ≥200), all the three interval estimators perform well in almost all situations considered in this paper. In these cases, for simplicity, we may apply either of the two interval estimators using Wald's test statistic or Fieller's theorem without losing much accuracy and efficiency as compared with the interval estimator using the asymptotic likelihood ratio test.
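A minimal sketch of the Wald-type interval for entry 10's simple difference, with p-hat-1 = x/n for the primary infection and p-hat-2 = y/x for the secondary infection among the x primary cases. The two estimates are uncorrelated (conditioning on x), but approximating Var(p-hat-2) by p2(1-p2)/x is our simplification, not necessarily the paper's exact variance:

```python
import math

def wald_ci_sd(x, n, y, z=1.959964):
    """Wald-type CI for SD = p1 - p2, where p1 is the primary-infection
    proportion (x out of n) and p2 the secondary-infection proportion
    given primary (y out of x)."""
    p1, p2 = x / n, y / x
    # conditioning on x makes the two estimates uncorrelated, so the
    # variances simply add
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / x)
    d = p1 - p2
    return d - z * se, d + z * se
```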

11.
Robert M. Dorazio. Biometrics 2012, 68(4):1303-1312
Several models have been developed to predict the geographic distribution of a species by combining measurements of covariates of occurrence at locations where the species is known to be present with measurements of the same covariates at other locations where species occurrence status (presence or absence) is unknown. In the absence of species detection errors, spatial point-process models and binary-regression models for case-augmented surveys provide consistent estimators of a species' geographic distribution without prior knowledge of species prevalence. In addition, these regression models can be modified to produce estimators of species abundance that are asymptotically equivalent to those of the spatial point-process models. However, if species presence locations are subject to detection errors, neither class of models provides a consistent estimator of covariate effects unless the covariates of species abundance are distinct and independently distributed from the covariates of species detection probability. These analytical results are illustrated using simulation studies of data sets that contain a wide range of presence-only sample sizes. Analyses of presence-only data of three avian species observed in a survey of landbirds in western Montana and northern Idaho are compared with site-occupancy analyses of detections and nondetections of these species.

12.
Linear mixed models are frequently used to obtain model-based estimators in small area estimation (SAE) problems. Such models, however, are not suitable when the target variable exhibits a point mass at zero, a highly skewed distribution of the nonzero values and a strong spatial structure. In this paper, a SAE approach for dealing with such variables is suggested. We propose a two-part random effects SAE model that includes a correlation structure on the area random effects that appears in the two parts and incorporates a bivariate smooth function of the geographical coordinates of units. To account for the skewness of the distribution of the positive values of the response variable, a Gamma model is adopted. To fit the model, to get small area estimates and to evaluate their precision, a hierarchical Bayesian approach is used. The study is motivated by a real SAE problem. We focus on estimation of the per-farm average grape wine production in Tuscany, at subregional level, using the Farm Structure Survey data. Results from this real data application and those obtained by a model-based simulation experiment show a satisfactory performance of the suggested SAE approach.

13.
The use of autoregressive modelling has acquired great importance in time series analysis, and in principle it may also be applicable in the spectral analysis of point processes, with similar advantages over the nonparametric approach. Most of the methods used for autoregressive spectral analysis require positive semidefinite estimates for the covariance function, while current methods for the estimation of the covariance density function of a point process given a realization over the interval [0,T] do not guarantee a positive semidefinite estimate. This paper discusses methods for the estimation of the covariance density and conditional intensity function of point processes and presents alternative, computationally efficient estimation algorithms that always lead to positive semidefinite estimates and are therefore adequate for autoregressive spectral analysis. Autoregressive spectral modelling of point processes from Yule-Walker type equations and Levinson recursion combined with the minimum AIC or CAT principle is illustrated with neurobiological data.
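Entry 13 fits AR spectra from Yule-Walker type equations via the Levinson recursion. A textbook sketch of that recursion for an ordinary autocovariance sequence (the point-process covariance-density version in the paper needs the positive semidefinite estimates discussed there):

```python
def levinson_durbin(r):
    """Solve the Yule-Walker equations by the Levinson recursion.

    r holds autocovariances r[0..p]; returns the AR coefficients
    a[1..p] (in x_t = a[1] x_{t-1} + ... + a[p] x_{t-p} + noise)
    and the innovation variance."""
    p = len(r) - 1
    a = [0.0] * (p + 1)          # a[0] is unused
    e = r[0]                     # current prediction-error variance
    for k in range(1, p + 1):
        acc = r[k] - sum(a[j] * r[k - j] for j in range(1, k))
        kappa = acc / e          # reflection coefficient at order k
        a_new = a[:]
        a_new[k] = kappa
        for j in range(1, k):
            a_new[j] = a[j] - kappa * a[k - j]
        a = a_new
        e *= 1.0 - kappa * kappa
    return a[1:], e
```

The recursion only behaves (e stays positive) when the input sequence is positive semidefinite, which is precisely why the abstract insists on such estimates.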

14.
When the sample size is not large or when the underlying disease is rare, to assure collection of an appropriate number of cases and to control the relative error of estimation, one may employ inverse sampling, in which one continues sampling subjects until one obtains exactly the desired number of cases. This paper focuses discussion on interval estimation of the simple difference between two proportions under independent inverse sampling. This paper develops three asymptotic interval estimators on the basis of the maximum likelihood estimator (MLE), the uniformly minimum variance unbiased estimator (UMVUE), and the asymptotic likelihood ratio test (ALRT). To compare the performance of these three estimators, this paper calculates the coverage probability and the expected length of the resulting confidence intervals on the basis of the exact distribution. This paper finds that when the underlying proportions of cases in both comparison populations are small or moderate (≤0.20), all three asymptotic interval estimators developed here perform reasonably well even for the pre-determined number of cases as small as 5. When the pre-determined number of cases is moderate or large (≥50), all three estimators are essentially equivalent in all the situations considered here. Because application of the two interval estimators derived from the MLE and the UMVUE does not involve any numerical iterative procedure needed in the ALRT, for simplicity we may use these two estimators without losing efficiency.

15.
S. Greenland. Biometrics 1989, 45(1):183-191
Mickey and Elashoff (1985, Biometrics 41, 623-635) gave an extension of Mantel-Haenszel estimation to log-linear models for 2 x J x K tables. Their extension yields two generalizations of the Mantel-Haenszel odds ratio estimator to K 2 x J tables. This paper provides variance and covariance estimators for these generalized Mantel-Haenszel estimators that are dually consistent (i.e., consistent in both large strata and sparse data), and presents comparisons of the efficiency of the generalized Mantel-Haenszel estimators.

16.
In simple regression, two serious problems with the ordinary least squares (OLS) estimator are that its efficiency can be relatively poor when the error term is normal but heteroscedastic, and the usual confidence interval for the slope can have highly unsatisfactory probability coverage. When the error term is nonnormal, these problems become exacerbated. Two other concerns are that the OLS estimator has an unbounded influence function and a breakdown point of zero. Wilcox (1996) compared several estimators when there is heteroscedasticity and found two that have relatively good efficiency and simultaneously provide protection against outliers: an M-estimator with Schweppe weights and an estimator proposed by Cohen, Dalal and Tukey (1993). However, the M-estimator can handle only one outlier in the X-domain or among the Y values, and among the methods considered by Wilcox for computing confidence intervals for the slope, none performed well when working with the Cohen-Dalal-Tukey estimator. This note points out that the small-sample efficiency of the Theil-Sen estimator competes well with the estimators considered by Wilcox, and a method for computing a confidence interval was found that performs well in simulations. The Theil-Sen estimator has a reasonably high breakdown point, a bounded influence function, and in some cases its small-sample efficiency offers a substantial advantage over all of the estimators compared in Wilcox (1996).
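The Theil-Sen estimator of entry 16 is the median of all pairwise slopes. A sketch, with Sen's (1968) distribution-free interval based on the null variance of Kendall's tau; the exact index convention for the interval endpoints varies between implementations, so treat ours as approximate:

```python
import math
from itertools import combinations

def theil_sen(x, y, z=1.959964):
    """Theil-Sen slope (median of pairwise slopes) with an approximate
    Sen-type distribution-free confidence interval for the slope."""
    slopes = sorted((y[j] - y[i]) / (x[j] - x[i])
                    for i, j in combinations(range(len(x)), 2)
                    if x[j] != x[i])
    m = len(slopes)
    slope = (slopes[m // 2] if m % 2 else
             0.5 * (slopes[m // 2 - 1] + slopes[m // 2]))
    n = len(x)
    # half-width in slope ranks, from the null variance of Kendall's tau
    c = z * math.sqrt(n * (n - 1) * (2 * n + 5) / 18.0)
    lo_idx = max(int(round((m - c) / 2.0)) - 1, 0)
    hi_idx = min(int(round((m + c) / 2.0)), m - 1)
    return slope, (slopes[lo_idx], slopes[hi_idx])
```

The breakdown point of roughly 29% and the bounded influence function mentioned in the abstract both follow from using a median of slopes rather than a least-squares fit.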

17.
Semiparametric smoothing methods are usually used to model longitudinal data, and the interest is to improve efficiency for regression coefficients. This paper is concerned with the estimation in semiparametric varying-coefficient models (SVCMs) for longitudinal data. By the orthogonal projection method, local linear technique, quasi-score estimation, and quasi-maximum likelihood estimation, we propose a two-stage orthogonality-based method to estimate parameter vector, coefficient function vector, and covariance function. The developed procedures can be implemented separately and the resulting estimators do not affect each other. Under some mild conditions, asymptotic properties of the resulting estimators are established explicitly. In particular, the asymptotic behavior of the estimator of coefficient function vector at the boundaries is examined. Further, the finite sample performance of the proposed procedures is assessed by Monte Carlo simulation experiments. Finally, the proposed methodology is illustrated with an analysis of an acquired immune deficiency syndrome (AIDS) dataset.

18.
Methods for the statistical analysis of stationary spatial point process data are now well established, methods for nonstationary processes less so. One of many sources of nonstationary point process data is a case–control study in environmental epidemiology. In that context, the data consist of a realization of each of two spatial point processes representing the locations, within a specified geographical region, of individual cases of a disease and of controls drawn at random from the population at risk. In this article, we extend work by Baddeley, Møller, and Waagepetersen (2000, Statistica Neerlandica 54, 329–350) concerning estimation of the second-order properties of a nonstationary spatial point process. First, we show how case–control data can be used to overcome the problems encountered when using the same data to estimate both a spatially varying intensity and second-order properties. Second, we propose a semiparametric method for adjusting the estimate of intensity so as to take account of explanatory variables attached to the cases and controls. Our primary focus is estimation, but we also propose a new test for spatial clustering that we show to be competitive with existing tests. We describe an application to an ecological study in which juvenile and surviving adult trees assume the roles of controls and cases.

19.
D. Y. Lin, L. J. Wei, D. L. DeMets. Biometrics 1991, 47(4):1399-1408
This paper considers clinical trials comparing two treatments with dichotomous responses where the data are examined periodically for early evidence of treatment difference. The existing group sequential methods for such trials are based on the large-sample normal approximation to the joint distribution of the estimators of treatment difference over interim analyses. We demonstrate through extensive numerical studies that, for small and even moderate-sized trials, these approximate procedures may lead to tests with supranominal size (mainly when unpooled estimators of variance are used) and confidence intervals with under-nominal coverage probability. We then study exact methods for group sequential testing, repeated interval estimation, and interval estimation following sequential testing. The new procedures can accommodate any treatment allocation rules. An example using real data is provided.

20.
Knapp SJ, Bridges-Jr WC, Yang MH. Genetics 1989, 121(4):891-898
Statistical methods have not been described for comparing estimates of family-mean heritability (H) or expected selection response (R), nor have consistently valid methods been described for estimating R intervals. Nonparametric methods, e.g., delete-one jackknifing, may be used to estimate variances, intervals, and hypothesis test statistics in estimation problems where parametric methods are unsuitable, nonrobust, or undefinable. Our objective was to evaluate normal-approximation jackknife interval estimators for H and R using Monte Carlo simulation. Simulations were done using normally distributed within-family effects and normally, uniformly, and exponentially distributed between-family effects. Realized coverage probabilities for jackknife interval (2) and parametric interval (5) for H were not significantly different from stated probabilities when between-family effects were normally distributed. Coverages for jackknife intervals (3) and (4) for R were not significantly different from stated coverages when between-family effects were normally distributed. Coverages for interval (3) for R were occasionally significantly less than stated when between-family effects were uniformly or exponentially distributed. Coverages for interval (2) for H were occasionally significantly less than stated when between-family effects were exponentially distributed. Thus, intervals (3) and (4) for R and (2) for H were robust. Means of analysis of variance estimates of R were often significantly less than parametric values when the number of families evaluated was 60 or less. Means of analysis of variance estimates of H were consistently significantly less than parametric values. Means of jackknife estimates of H calculated from log transformed point estimates and R calculated from untransformed or log transformed point estimates were not significantly different from parametric values. Thus, jackknife estimators of H and R were unbiased. 
Delete-one jackknifing is a robust, versatile, and effective statistical method when applied to estimation problems involving variance functions. Jackknifing is especially valuable in hypothesis test estimation problems where the objective is comparing estimates from different populations.
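The delete-one jackknife of entry 20 is generic: recompute the statistic with each observation removed, and use the spread of the leave-one-out values to estimate the variance. A minimal sketch for any scalar statistic, with the normal-approximation interval used in the abstract (the 95% z-value is ours):

```python
import math

def jackknife_ci(data, stat, z=1.959964):
    """Delete-one jackknife: point estimate, and a normal-approximation
    CI built from the jackknife variance of the leave-one-out values."""
    n = len(data)
    theta = stat(data)
    leave_one_out = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    mean_loo = sum(leave_one_out) / n
    # jackknife variance: (n-1)/n times the sum of squared deviations
    var = (n - 1) / n * sum((t - mean_loo) ** 2 for t in leave_one_out)
    se = math.sqrt(var)
    return theta, (theta - z * se, theta + z * se)
```

For heritability, the abstract's log-transformed variant applies the same recipe to the log of each leave-one-out point estimate before back-transforming.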


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号