Similar Documents
20 similar documents found.
1.
The nested case–control (NCC) design is a popular sampling method in large epidemiological studies because of its cost-effectiveness for investigating the temporal relationship of diseases with environmental exposures or biological precursors. Thomas' maximum partial likelihood estimator is commonly used to estimate the regression parameters in Cox's model for NCC data. In this article, we consider a situation in which failure/censoring information and some crude covariates are available for the entire cohort in addition to the NCC data, and we propose an improved estimator that is asymptotically more efficient than Thomas' estimator. We adopt a projection approach that, heretofore, has only been employed under random validation sampling, and we show that it can be well adapted to NCC designs, where the sampling scheme is a dynamic process and is not independent across controls. Under certain conditions, consistency and asymptotic normality of the proposed estimator are established, and a consistent variance estimator is also developed. Furthermore, a simplified approximate estimator is proposed for when the disease is rare. Extensive simulations are conducted to evaluate the finite-sample performance of the proposed estimators and to compare their efficiency with Thomas' estimator and other competing estimators. Moreover, sensitivity analyses are conducted to demonstrate the behavior of the proposed estimator when model assumptions are violated, and we find that the biases are reasonably small in realistic situations. We further demonstrate the proposed method with data from studies of Wilms' tumor.
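As background for the setup above, the following is a minimal sketch of the standard Thomas-style partial likelihood for NCC data with a single covariate, in which each case is compared only against the controls sampled from its risk set. It is not the improved projection estimator proposed in the paper, and the data and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical NCC data: each element is (z_case, [z_controls...]) for one
# sampled risk set -- the case's covariate plus the covariates of the
# controls sampled from the risk set at the case's failure time.
sampled_sets = [
    (1.2, [0.3, -0.5]),
    (0.1, [1.0, 0.7]),
    (2.0, [0.4, -1.1]),
]

def neg_log_partial_likelihood(beta):
    """Negative Thomas-style log partial likelihood (single covariate)."""
    ll = 0.0
    for z_case, z_controls in sampled_sets:
        risk = np.exp(beta * np.array([z_case] + list(z_controls)))
        ll += beta * z_case - np.log(risk.sum())
    return -ll

res = minimize_scalar(neg_log_partial_likelihood, bounds=(-5, 5), method="bounded")
print("estimated log hazard ratio:", res.x)
```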

2.
The purpose of the study is to estimate the population size under a homogeneous truncated count model, and under contaminations of that model, via the Horvitz-Thompson approach on the basis of a count capture-recapture experiment. The proposed estimator is based on a mixture of zero-truncated Poisson distributions. The benefits of the proposed model are valid statistical inference for long-tailed or skewed count distributions and the concavity of the likelihood function, for which strong results on the nonparametric maximum likelihood estimator (NPMLE) are available. A simulation study comparing McKendrick's, Mantel-Haenszel's, Zelterman's, Chao's, the maximum likelihood, and the proposed methods shows that under model contaminations the proposed estimator is the best choice, having the smallest bias and smallest mean square error for sufficiently large population sizes; further results show that the proposed estimator also performs well in the homogeneous situation. Two empirical examples, a cholera epidemic in India analyzed under homogeneity and heroin user data from Bangkok in 2002 analyzed under heterogeneity, are fitted with an excellent goodness of fit, and the accompanying confidence interval estimates may also be of considerable interest.
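To make the Horvitz-Thompson idea concrete, here is a minimal sketch that fits a single homogeneous zero-truncated Poisson (not the mixture proposed in the paper) to hypothetical capture-frequency data, inflates the observed count by the estimated probability of being seen at least once, and compares with Chao's lower-bound estimator.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Hypothetical capture counts: freq[k] = number of individuals seen exactly k times.
freq = {1: 120, 2: 45, 3: 12, 4: 3}
n_observed = sum(freq.values())

def neg_ztp_loglik(lam):
    """Negative zero-truncated Poisson log likelihood for the observed frequencies."""
    ll = 0.0
    for k, f_k in freq.items():
        ll += f_k * (poisson.logpmf(k, lam) - np.log1p(-np.exp(-lam)))
    return -ll

lam_hat = minimize_scalar(neg_ztp_loglik, bounds=(1e-6, 10), method="bounded").x

# Horvitz-Thompson style estimate: inflate the observed count by the
# estimated probability of being captured at least once.
N_ht = n_observed / (1.0 - np.exp(-lam_hat))

# Chao's lower-bound estimator, a common benchmark in this literature.
N_chao = n_observed + freq[1] ** 2 / (2.0 * freq[2])

print(lam_hat, N_ht, N_chao)
```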

3.
Y. X. Fu, Genetics, 1994, 138(4): 1375–1386
Mutations resulting in segregating sites of a sample of DNA sequences can be classified by size and type, and the frequencies of mutations of different sizes and types can be inferred from the sample. A framework for estimating the essential parameter θ = 4Nμ from the frequencies of mutations of various sizes and types is developed in this paper, where N is the effective size of a population and μ is the mutation rate per sequence per generation. The framework combines coalescent theory, the general linear model, and Monte Carlo integration, and it leads to two new estimators, θ(ξ) and θ(η), as well as a generalized Watterson's estimator θ(K) and a generalized Tajima's estimator θ(π). The greatest strength of the framework is that it can be used under a variety of population models. The properties of the framework and of the four estimators θ(K), θ(π), θ(ξ), and θ(η) are investigated under three important population models: the neutral Wright-Fisher model, the neutral model with recombination, and the neutral Wright's finite-islands model. Under all these models, θ(ξ) is shown to be the best estimator among the four, even when the recombination rate or migration rate has to be estimated. Under the neutral Wright-Fisher model, the new estimator θ(ξ) is shown to have a variance close to a lower bound on the variances of all unbiased estimators of θ, which suggests that θ(ξ) is a very efficient estimator.
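For orientation, the sketch below computes the two classical estimators generalized in this framework, Watterson's θ(K) (segregating sites over the harmonic number) and Tajima's θ(π) (mean pairwise differences), from a tiny hypothetical alignment. The paper's new estimators θ(ξ) and θ(η), which use the full size/type frequency information, are not reproduced here.

```python
import numpy as np
from itertools import combinations

# Hypothetical aligned sample of DNA sequences (rows = sequences).
seqs = np.array([list(s) for s in ["ACGTACGT",
                                   "ACGTACGA",
                                   "ACCTACGT",
                                   "ACGTTCGT"]])
n = seqs.shape[0]

# Watterson's estimator: number of segregating sites over the harmonic number a_n.
S = sum(len(set(seqs[:, j])) > 1 for j in range(seqs.shape[1]))
a_n = sum(1.0 / i for i in range(1, n))
theta_K = S / a_n

# Tajima's estimator: average number of pairwise differences.
pair_diffs = [np.sum(seqs[i] != seqs[j]) for i, j in combinations(range(n), 2)]
theta_pi = np.mean(pair_diffs)

print(theta_K, theta_pi)
```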

4.
A class of ratio-cum-product-type estimators for double sampling is proposed in the present paper. Its bias and variance to the first order of approximation are obtained. For an appropriate weight ‘a’ and a wide range of α-values, the proposed estimator is found to be more efficient than a set of estimators, namely the simple mean estimator, the usual ratio and product estimators, Srivastava's (1967) estimator, Chakrabarty's estimator, and a product-type estimator, all of which are in fact particular cases of it. At the optimum value of α, the proposed estimator is as efficient as the linear regression estimator in double sampling.

5.
A new modification of Berkson's minimum logit chi-squared estimator in simple linear logistic regression is suggested in order to reduce the first-order bias of the estimator as well as that of the model. Furthermore, unlike currently available estimators, our procedure is quite simple to apply in practice and is valid even in the presence of zero frequencies in the table.
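To illustrate the object being modified, the following is a minimal sketch of the classical (unmodified) Berkson minimum logit chi-squared fit: weighted least squares of empirical logits with weights n·p̂·(1−p̂). The data are hypothetical, and note that the empirical logit is undefined when a cell frequency is zero, which is one of the issues the modification above addresses.

```python
import numpy as np

# Hypothetical grouped binary data: dose x_i, trials n_i, successes r_i.
x = np.array([0.0, 1.0, 2.0, 3.0])
n = np.array([40, 40, 40, 40])
r = np.array([5, 12, 24, 33])

p_hat = r / n
logit = np.log(p_hat / (1 - p_hat))      # empirical logits (fails if r_i is 0 or n_i)
w = n * p_hat * (1 - p_hat)              # minimum logit chi-squared weights

# Weighted least squares fit of logit(p) = alpha + beta * x.
X = np.column_stack([np.ones_like(x), x])
W = np.diag(w)
alpha_hat, beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ logit)
print(alpha_hat, beta_hat)
```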

6.
The kappa index is commonly used for measuring agreement between two observers when the scale is nominal. A modification of Cohen's kappa index was given by Krauth; that estimator is biased, and its large-sample variance was obtained. An alternative estimator is developed here. It is a ratio estimator, and its mean square error is derived. A comparison with Cohen's estimator and Krauth's estimator is given using the examples from Krauth's paper.
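As a reference point for the comparison described above, here is a minimal sketch of the standard Cohen's kappa computation from a cross-tabulation of two observers' ratings (not Krauth's modification or the ratio estimator proposed here); the table is hypothetical.

```python
import numpy as np

# Hypothetical 3x3 agreement table: rows = observer 1, columns = observer 2.
table = np.array([[20,  3,  1],
                  [ 4, 15,  2],
                  [ 1,  2, 12]])
N = table.sum()

p_observed = np.trace(table) / N                              # raw agreement
p_expected = (table.sum(axis=1) / N) @ (table.sum(axis=0) / N)  # chance agreement

kappa = (p_observed - p_expected) / (1 - p_expected)
print(kappa)
```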

7.
Important aspects of population evolution have been investigated using nucleotide sequences. Under the neutral Wright–Fisher model, the scaled mutation rate represents twice the average number of new mutations per generation, and it is one of the key parameters in population genetics. In this study, we present various methods of estimation of this parameter, analytical studies of their asymptotic behavior, and comparisons of the distributional behavior of these estimators through simulations. Because knowledge of the genealogy is needed to compute the maximum likelihood estimator (MLE), an application with real data is also presented, using the jackknife to correct the bias of the MLE that can be induced by estimation of the tree. We prove analytically that Watterson's estimator and the MLE are asymptotically equivalent, with the same rate of convergence to normality. Furthermore, we show that the MLE has a better rate of convergence than Watterson's estimator for values of the parameter greater than one, and that this relationship is reversed when the parameter is less than one.

8.
It is well known that Cornfield's confidence interval for the odds ratio with the continuity correction can mimic the performance of the exact method. Furthermore, because its calculation is much simpler than that of the exact method, Cornfield's confidence interval with the continuity correction is highly recommended by many publications. However, the papers that draw this conclusion do so on the basis of examining the coverage probability exclusively; the efficiency of the resulting confidence intervals is completely ignored. This paper calculates and compares the coverage probability and the average length of Woolf's logit interval estimator, Gart's logit interval estimator based on adding 0.50 to each cell, Cornfield's interval estimator with the continuity correction, and Cornfield's interval estimator without the continuity correction in a variety of situations. This paper notes that Cornfield's interval estimator with the continuity correction is too conservative, while Cornfield's method without the continuity correction improves efficiency without sacrificing the accuracy of the coverage probability. It further notes that when the sample size is small (say, 20 or 30 per group) and the probability of exposure in the control group is small (say, 0.10) or large (say, 0.90), using Cornfield's method without the continuity correction is likely preferable to all the other estimators considered here. When the sample size is large (say, 100 per group) or when the probability of exposure in the control group is moderate (say, 0.50), Gart's logit interval estimator is probably the best.
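The two logit-based intervals compared above are simple to write down; the following sketch computes Woolf's interval and Gart's 0.5-adjusted interval for a hypothetical 2x2 table. Cornfield's intervals require an iterative computation and are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical 2x2 table cell counts: a, b = exposed cases/controls; c, d = unexposed.
a, b, c, d = 12, 8, 5, 15
z = norm.ppf(0.975)   # 95% interval

def logit_interval(a, b, c, d, add=0.0):
    """Woolf's logit interval (add=0) or Gart's adjusted interval (add=0.5)."""
    a, b, c, d = (x + add for x in (a, b, c, d))
    log_or = np.log(a * d / (b * c))
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)
    return np.exp(log_or - z * se), np.exp(log_or + z * se)

print("Woolf:", logit_interval(a, b, c, d))
print("Gart :", logit_interval(a, b, c, d, add=0.5))
```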

9.
The problem of estimating the population mean using auxiliary information has been dealt with extensively in the literature; the ratio, product, linear regression, and ratio-type estimators are well known. A class of ratio-cum-product-type estimators is proposed in this paper. Its bias and variance to the first order of approximation are obtained. For an appropriate weight ‘a’ and a wide range of α-values, the proposed estimator is found to be superior to a set of estimators (the sample mean, the usual ratio and product estimators, Srivastava's (1967) estimator, Chakrabarty's (1979) estimator, and a product-type estimator), which are in fact particular cases of it. At the optimum value of α, the proposed estimator is as efficient as the linear regression estimator.
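For readers unfamiliar with the particular cases named above, the sketch below computes the classical ratio and product estimators of a population mean from a simple random sample with a known auxiliary mean. The general ratio-cum-product weighting of the proposed class is paper-specific and not reproduced; the population is simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite population with a known auxiliary mean X_bar.
N_pop = 1000
x_pop = rng.gamma(4.0, 2.0, N_pop)
y_pop = 3.0 * x_pop + rng.normal(0, 2.0, N_pop)   # y positively related to x
X_bar = x_pop.mean()

# Simple random sample without replacement.
idx = rng.choice(N_pop, size=50, replace=False)
y_bar, x_bar = y_pop[idx].mean(), x_pop[idx].mean()

mean_est    = y_bar                     # sample mean
ratio_est   = y_bar * (X_bar / x_bar)   # classical ratio estimator
product_est = y_bar * (x_bar / X_bar)   # classical product estimator

print(mean_est, ratio_est, product_est, "true mean:", y_pop.mean())
```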

10.
The Brownie tag-recovery model is useful for estimating harvest rates but assumes that all tagged individuals survive to the first hunting season; otherwise, mortality between the time of tagging and the hunting season will cause the Brownie estimator to be negatively biased. Alternatively, fitting animals with radio transmitters can be used to estimate harvest rate accurately but may be more costly. We developed a joint model to estimate harvest and annual survival rates that combines known-fate data from animals fitted with transmitters, used to estimate the probability of surviving the period from capture to the first hunting season, with data from reward-tagged animals in a Brownie tag-recovery model. We evaluated the bias and precision of the joint estimator and how to optimally allocate effort between animals fitted with radio transmitters and inexpensive ear tags or leg bands. Tagging-to-harvest survival rates from >20 individuals with radio transmitters, combined with 50–100 reward tags, resulted in an unbiased and precise estimator of harvest rates. In addition, the joint model can test whether transmitters affect an individual's probability of being harvested. We illustrate application of the model using data from wild turkey, Meleagris gallopavo, to estimate harvest rates, and data from white-tailed deer, Odocoileus virginianus, to evaluate whether the presence of a visible radio transmitter is related to the probability of a deer being harvested. The joint known-fate tag-recovery model eliminates the requirement to capture and mark animals immediately prior to the hunting season in order to obtain accurate and precise estimates of harvest rate. In addition, the joint model can assess whether marking animals with radio transmitters affects an individual's probability of being harvested, whether through hunter selectivity or through changes in a marked animal's behavior.
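The core bias-correction logic can be conveyed with a heavily simplified, moment-based sketch: known-fate data give the tagging-to-season survival probability, which rescales the naive tag-recovery harvest estimate. This is only an illustration of the idea, not the authors' joint likelihood model, and all numbers are hypothetical.

```python
# Hypothetical summaries from one tagging cohort.
n_radio         = 30    # animals fitted with radio transmitters at tagging
radio_survivors = 24    # of those, alive at the start of the hunting season
n_tags          = 100   # animals marked with reward tags at tagging
tag_recoveries  = 18    # reward tags reported during the hunting season

# Naive Brownie-style estimate ignores mortality between tagging and the season,
# so it is biased low whenever some tagged animals die before the season opens.
harvest_naive = tag_recoveries / n_tags

# Known-fate data estimate the tagging-to-season survival probability,
# which is used to correct the denominator (tags still "at risk" of harvest).
phi_hat = radio_survivors / n_radio
harvest_corrected = tag_recoveries / (n_tags * phi_hat)

print(harvest_naive, harvest_corrected)
```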

11.
A ratio-type estimator using two auxiliary variates is proposed, and conditions are obtained for choosing between the proposed estimator and Olkin's (1958) estimator using two auxiliary variates.

12.
The problem of estimating a ratio of population proportions is considered, and a difference-type estimator using auxiliary information is proposed. The bias and mean squared error of the proposed estimator are derived and compared with those of the usual estimator and of a Wynn (1976)-type estimator. An example is included for illustration.

13.
Taylor (1953) proposed a distance function in connection with the logit χ² estimator. For product (associated) multinomial distributions, he showed that minimizing the distance function yields BAN estimators. Aithal (1986) and Rao (1989) considered a modified version of Taylor's distance function and showed that a member of this class leads to a second-order efficient estimator. In this paper we consider Taylor's distance function itself and show that a member of this class also produces a second-order efficient estimator. In addition to the above two, the maximum likelihood (ML) estimator is also second-order efficient. To compare these three second-order efficient estimators, their small-sample variances are estimated through a simulation study. The results indicate that the variance of the ML estimator is the smallest in most cases.

14.
A ratio-type estimator using two auxiliary variates is suggested and is found to be more practicable than Agarwal's (1980) estimator.

15.
The paper considers methods for testing H0: β1 = … = βp = 0, where β1, …, βp are the slope parameters in a linear regression model, with an emphasis on p = 2. It is known that even when the usual error term is normal but heteroscedastic, control over the probability of a Type I error can be poor when the conventional F test is used in conjunction with the least squares estimator. When the error term is nonnormal, the situation gets worse. Another practical problem is that power can be poor under even slight departures from normality. Liu and Singh (1997) describe a general bootstrap method for making inferences about parameters in a multivariate setting that is based on the general notion of data depth. This paper studies the small-sample properties of their method when applied to the problem at hand. It is found that there is a practical advantage to using Tukey's depth rather than the Mahalanobis depth when a particular robust estimator is used. When the ordinary least squares estimator is used, the method improves upon the conventional F test, but practical problems remain when the sample size is less than 60. In simulations, using Tukey's depth with the robust estimator gave the best results, in terms of Type I errors, among the five methods studied.
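To convey the flavor of a depth-based bootstrap test, the sketch below bootstraps OLS slope estimates and computes a p-value as the fraction of bootstrap estimates that are at least as far from the bootstrap center as the null value β = 0, using the Mahalanobis metric (with Mahalanobis depth, "large distance" and "low depth" are equivalent). This is a rough illustration under those assumptions, not the exact Liu and Singh construction, Tukey's depth, or the robust estimator studied in the paper; the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical regression data with p = 2 slopes and heteroscedastic errors.
n = 60
X = rng.normal(size=(n, 2))
y = rng.normal(size=n) * (1 + np.abs(X[:, 0]))   # true slopes are zero

def slopes(X, y):
    """OLS slope estimates (intercept fitted but discarded)."""
    Z = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Z, y, rcond=None)[0][1:]

beta_hat = slopes(X, y)

# Pairs bootstrap of the slope estimates.
B = 2000
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, n)
    boot[b] = slopes(X[idx], y[idx])

# Squared Mahalanobis distance from the observed estimate, in the metric
# of the bootstrap covariance.
S_inv = np.linalg.inv(np.cov(boot, rowvar=False))
def dist2(v):
    d = v - beta_hat
    return d @ S_inv @ d

d_null = dist2(np.zeros(2))                    # how far the null beta = 0 sits
d_boot = np.array([dist2(b) for b in boot])    # bootstrap reference distances
p_value = np.mean(d_boot >= d_null)
print(p_value)
```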

16.
A nonparametric estimator of the joint distribution function F0 of a d-dimensional random vector with interval-censored (IC) data, d ≥ 2, is the generalized maximum likelihood estimator (GMLE). The GMLE of F0 with univariate IC data is uniquely defined at each follow-up time. However, this is no longer true in general with multivariate IC data, as demonstrated by a data set from an eye study. How to estimate the survival function and the covariance matrix of the estimator in such a case is a new practical issue in analyzing IC data. We propose a procedure for such a situation and apply it to the data set from the eye study. Our method always results in a GMLE with a nonsingular sample information matrix. We also give a theoretical justification for the procedure. Extension of our procedure to Cox's regression model is also mentioned.

17.
In some cases model-based and model-assisted inferences can lead to very different estimators. These two paradigms are not so different if we search for an optimal strategy rather than just an optimal estimator, a strategy being a pair composed of a sampling design and an estimator. We show that, under a linear model, the optimal model-assisted strategy consists of a balanced sampling design with inclusion probabilities proportional to the standard deviations of the errors of the model, together with the Horvitz–Thompson estimator. If the heteroscedasticity of the model is ‘fully explainable’ by the auxiliary variables, then this strategy is also optimal in a model-based sense. Moreover, under balanced sampling with inclusion probabilities proportional to the standard deviations of the model, the best linear unbiased estimator and the Horvitz–Thompson estimator are equal. Finally, it is possible to construct a single estimator for both the design and model variance. The inference can thus be valid under the sampling design and under the model.
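The Horvitz–Thompson estimator at the heart of this strategy is simple to sketch. The example below uses unequal inclusion probabilities proportional to hypothetical error standard deviations and, for simplicity, Poisson sampling (independent inclusions) rather than a balanced design such as the cube method; the population is simulated.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical finite population; inclusion probabilities proportional to the
# (assumed known) error standard deviations sigma_i, as in the optimal strategy.
N = 500
sigma = rng.uniform(0.5, 2.0, N)
y = 10.0 + rng.normal(0, sigma)            # heteroscedastic responses
pi = 100 * sigma / sigma.sum()             # expected sample size about 100

# Poisson sampling: each unit included independently with probability pi_i.
sampled = rng.random(N) < pi

# Horvitz-Thompson estimator of the population total: sum of y_i / pi_i.
Y_ht = np.sum(y[sampled] / pi[sampled])
print(Y_ht, "true total:", y.sum())
```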

18.
Planar random figures can be described by series of landmarks. Bookstein's model assumes that the landmarks result from independent individual fluctuations around fixed, fictive landmark centers. The aim of this paper is to estimate the distances between the centers and the variances of the fluctuations around them. Furthermore, two new shape-variable estimators are suggested and compared with an estimator of Bookstein.

19.
A sufficient condition is obtained under which the variance of the Horvitz-Thompson estimator, for Rao's (1965) inclusion-probability-proportional-to-size sampling scheme for selecting two units, is uniformly smaller than that of the Rao, Hartley, and Cochran (1962) estimator.

20.
This paper proposes a modified randomization device for collecting information on sensitive issues. The estimator based on the suggested strategy is found to be unbiased for the population proportion and is better than the Greenberg et al. (1969) estimator.
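For context on the benchmark mentioned above, here is a minimal sketch of the classical unrelated-question randomized response estimator in the case where the prevalence of the innocuous question is known: E(yes) = P·πA + (1−P)·πY, so πA is estimated by moments. The paper's modified device is not reproduced, and all parameters and data below are hypothetical (simulated).

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical design parameters of the unrelated-question device.
P    = 0.7    # probability the respondent answers the sensitive question
pi_Y = 0.5    # known prevalence of "yes" on the innocuous question
pi_A = 0.2    # true sensitive prevalence (used only to simulate responses)
n    = 1000

# Simulate the observed "yes" answers.
gets_sensitive = rng.random(n) < P
answers = np.where(gets_sensitive, rng.random(n) < pi_A, rng.random(n) < pi_Y)

# Moment estimator: E(yes) = P*pi_A + (1-P)*pi_Y.
lam_hat = answers.mean()
pi_A_hat = (lam_hat - (1 - P) * pi_Y) / P
print(pi_A_hat)
```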
