Similar Documents
20 similar documents found.
1.
Krishnamoorthy K, Lu Y. Biometrics 2003, 59(2): 237-247.
This article presents procedures for hypothesis testing and interval estimation of the common mean of several normal populations. The methods are based on the concepts of generalized p-value and generalized confidence limit. The merits of the proposed methods are evaluated numerically and compared with those of the existing methods. Numerical studies show that the new procedures are accurate and perform better than the existing methods when the sample sizes are moderate and the number of populations is four or less. If the number of populations is five or more, then the generalized variable method performs much better than the existing methods regardless of the sample sizes. The generalized variable method and other existing methods are illustrated using two examples.
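A minimal Monte Carlo sketch of the generalized-pivot idea for the common mean, assuming one common formulation (precision weights n_i/σ_i² with the variances replaced by their generalized pivots). The function name and simulated data are illustrative, not the authors' exact procedure.

```python
import numpy as np

def common_mean_gci(samples, alpha=0.05, n_rep=50_000, rng=None):
    """Generalized confidence interval for the common normal mean (sketch)."""
    rng = np.random.default_rng(rng)
    xbar = np.array([np.mean(s) for s in samples])
    s2 = np.array([np.var(s, ddof=1) for s in samples])
    n = np.array([len(s) for s in samples])
    k = len(samples)

    # Generalized pivots for each sigma_i^2: (n_i - 1) s_i^2 / chi2_{n_i-1}
    chi2 = rng.chisquare(df=n - 1, size=(n_rep, k))
    t_sigma2 = (n - 1) * s2 / chi2
    # Generalized pivots for each mu_i: xbar_i - Z_i * sqrt(T_sigma2_i / n_i)
    z = rng.standard_normal((n_rep, k))
    t_mu = xbar - z * np.sqrt(t_sigma2 / n)
    # Precision-weighted combination gives a pivot for the common mean
    w = n / t_sigma2
    t_common = (w * t_mu).sum(axis=1) / w.sum(axis=1)
    return np.quantile(t_common, [alpha / 2, 1 - alpha / 2])

# Example: three populations assumed to share a common mean of 10
rng = np.random.default_rng(1)
data = [rng.normal(10, s, size=m) for s, m in [(1, 12), (2, 15), (3, 10)]]
print(common_mean_gci(data))
```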

2.
Zhou XH, Tu W. Biometrics 2000, 56(4): 1118-1125.
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. For theoretical properties, we show that the two likelihood-based one-sided confidence intervals are only first-order accurate and that the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A simulation study in finite-sample sizes suggests that all three proposed intervals outperform a widely used interval based on the minimum variance unbiased estimator (MVUE), except in the case of one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we apply the proposed methods to a real example assessing diagnostic test charges among older adults with depression.
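The percentile-t construction can be sketched as follows for a zero-inflated log-normal mean θ = p·exp(μ + σ²/2). The studentization below uses a rough delta-method standard error rather than the paper's sufficient-statistic version, so treat it only as an illustration.

```python
import numpy as np

def theta_and_se(y):
    n = len(y)
    nz = y[y > 0]
    p = len(nz) / n
    logs = np.log(nz)
    mu, s2 = logs.mean(), logs.var(ddof=1)
    theta = p * np.exp(mu + s2 / 2)
    # Delta-method variance of log(theta) (rough; assumes len(nz) > 1)
    var_log = (1 - p) / (n * p) + s2 / len(nz) + s2**2 / (2 * (len(nz) - 1))
    return theta, theta * np.sqrt(var_log)

def percentile_t_ci(y, alpha=0.05, n_boot=2000, rng=None):
    rng = np.random.default_rng(rng)
    theta, se = theta_and_se(y)
    t_stats = []
    for _ in range(n_boot):
        yb = rng.choice(y, size=len(y), replace=True)
        if (yb > 0).sum() < 2:          # resample must allow estimation
            continue
        tb, seb = theta_and_se(yb)
        t_stats.append((tb - theta) / seb)
    lo, hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    return theta - hi * se, theta - lo * se   # percentile-t inversion

rng = np.random.default_rng(7)
y = np.where(rng.random(150) < 0.2, 0.0, rng.lognormal(5, 1, 150))
print(percentile_t_ci(y))
```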

3.
Lin DY, Wei LJ, DeMets DL. Biometrics 1991, 47(4): 1399-1408.
This paper considers clinical trials comparing two treatments with dichotomous responses where the data are examined periodically for early evidence of treatment difference. The existing group sequential methods for such trials are based on the large-sample normal approximation to the joint distribution of the estimators of treatment difference over interim analyses. We demonstrate through extensive numerical studies that, for small and even moderate-sized trials, these approximate procedures may lead to tests with supranominal size (mainly when unpooled estimators of variance are used) and confidence intervals with under-nominal coverage probability. We then study exact methods for group sequential testing, repeated interval estimation, and interval estimation following sequential testing. The new procedures can accommodate any treatment allocation rules. An example using real data is provided.

4.
Using visual estimation of species cover in ordinal interval classes may reduce costs in vegetation studies. In phytosociology, species cover within plots is usually estimated according to the well-known Braun-Blanquet scale, and ordinal data from this scale are usually treated using common exploratory analysis tools that are adequate for ratio-scale variables only. This paper addresses whether the visual estimation of ordinal cover data, and the treatment of these data with multivariate procedures tailored for ratio-scale data, would lead to a significant loss of information with respect to the use of more accurate methods of data collection and analysis. To answer these questions, we used three data sets sampled by different authors at different sites in Tuscany (central Italy), in which the species cover was measured with the point quadrat method. For each data set we used a Mantel test to compare the dissimilarity matrices obtained from the original point-quadrat cover data with those obtained from the corresponding ordinal interval classes. The results suggest that the ordinal data represent the plot-to-plot dissimilarity structure of all data sets reasonably well and that, in using such data, there is no need to apply dissimilarity coefficients specifically tailored for ordinal scales.
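A Mantel test of this kind reduces to a permutation correlation between two dissimilarity matrices. The sketch below assumes Bray-Curtis dissimilarities and illustrative Braun-Blanquet-style class cut-points; it is not tied to the Tuscan data sets.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mantel(d1, d2, n_perm=9999, rng=None):
    rng = np.random.default_rng(rng)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    count = 0
    n = d1.shape[0]
    for _ in range(n_perm):
        p = rng.permutation(n)        # permute plots of one matrix
        if np.corrcoef(d1[np.ix_(p, p)][iu], d2[iu])[0, 1] >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(0)
cover = rng.gamma(1.0, 10.0, size=(25, 40))        # plots x species, % cover
classes = np.digitize(cover, [1, 5, 25, 50, 75])   # ordinal interval classes
d_cover = squareform(pdist(cover, metric="braycurtis"))
d_class = squareform(pdist(classes, metric="braycurtis"))
print(mantel(d_cover, d_class))
```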

5.
Ghosh D. Biometrics 2003, 59(3): 721-726.
In tumorigenicity experiments, a complication is that the time to event is generally not observed, so that the time to tumor is subject to interval censoring. One of the goals in these studies is to properly model the effect of dose on risk. Thus, it is important to have goodness-of-fit procedures available for assessing the model fit. While several estimation procedures have been developed for current-status data, relatively little work has been done on model-checking techniques. In this article, we propose numerical and graphical methods for the analysis of current-status data using the additive-risk model, primarily focusing on the situation where the monitoring times are dependent. The finite-sample properties of the proposed methodology are examined through numerical studies. The methods are then illustrated with data from a tumorigenicity experiment.

6.
This paper discusses two-sample nonparametric comparison of survival functions when only interval-censored failure time data are available. The problem considered often occurs in, for example, biological and medical studies such as medical follow-up studies and clinical trials. For the problem, we present and study several nonparametric test procedures, including methods based on absolute and squared survival differences as well as simple survival differences. The presented tests provide alternatives to existing methods, most of which are rank-based tests and not sensitive to nonproportional or nonmonotone alternatives. Simulation studies are performed to evaluate and compare the proposed methods with existing methods; they suggest that the proposed tests work well for nonmonotone as well as monotone alternatives. An illustrative example is presented.

7.
This paper focuses on the development and study of confidence interval procedures for the mean difference between two treatments in the analysis of over-dispersed count data, in order to measure the efficacy of the experimental treatment over the standard treatment in clinical trials. In this study, two simple methods are proposed. One is based on a sandwich estimator of the variance of the regression estimator using the generalized estimating equations (GEEs) approach of Zeger and Liang (1986), and the other is based on an estimator of the variance of a ratio estimator (1977). We also develop three other procedures following the procedures studied by Newcombe (1998) and the procedure studied by Beal (1987). As assessed by Monte Carlo simulations, all the procedures have reasonably good coverage properties. Moreover, the interval procedure based on GEEs outperforms the other interval procedures in the sense that it maintains coverage very close to the nominal level and has the shortest interval length, a satisfactory location property, and a very simple form that can be easily implemented in applied fields. Illustrative applications of these confidence interval procedures in biological studies are also presented.
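For a two-arm comparison with identity link and an independence working correlation, the GEE sandwich variance of the mean difference reduces to the empirical variances of the two arms. The sketch below reproduces that spirit, not the paper's exact interval; the simulated data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def sandwich_diff_ci(y_trt, y_ctl, alpha=0.05):
    diff = y_trt.mean() - y_ctl.mean()
    # Empirical (robust) variance of each arm's mean: no Poisson
    # mean-variance assumption, so over-dispersion is handled automatically
    var = y_trt.var(ddof=1) / len(y_trt) + y_ctl.var(ddof=1) / len(y_ctl)
    z = norm.ppf(1 - alpha / 2)
    return diff - z * np.sqrt(var), diff + z * np.sqrt(var)

rng = np.random.default_rng(3)
y_trt = rng.negative_binomial(2, 2 / 7, size=60)   # over-dispersed counts, mean 5
y_ctl = rng.negative_binomial(2, 2 / 5, size=60)   # mean 3
print(sandwich_diff_ci(y_trt, y_ctl))
```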

8.
Independence of locational fixes, to reduce the effects of autocorrelation, is often deemed a prerequisite for estimation of home range size and utilization when using data derived from telemetric studies. Three methods of estimating times to independence using movement estimates, along with a statistical method of estimating the level of autocorrelation of locational data, were examined for two species of mammal. Attempts to achieve statistically independent data by subsampling resulted in severe redundancy in the data and significant underestimation of range size and rates of movement. Even a sample interval of one fix per week did not guarantee independence and also resulted in underestimation of range size despite range asymptotes being reached. It would appear that the correct strategy for the best possible estimation of range size and use from telemetry would be the repeated use of as short a sampling interval as is possible over an extended period of time. Statistical methods to measure levels of autocorrelation in locational data may be useful for comparing rates of range use between different populations of the same species or between species, as long as the same sample interval is used.
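The subsampling effect described above can be illustrated with a synthetic track: thinning a mean-reverting random walk to longer fix intervals shrinks the minimum convex polygon (MCP) estimate of range area. Everything here is simulated for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(42)
pos = np.zeros((5000, 2))
for t in range(1, 5000):
    # mean-reverting step keeps the simulated animal inside a home range
    pos[t] = 0.95 * pos[t - 1] + rng.normal(0.0, 1.0, size=2)

for step in (1, 5, 25, 125):                 # thin to every `step`-th fix
    sub = pos[::step]
    area = ConvexHull(sub).volume            # .volume is the area in 2-D
    print(f"interval x{step}: {len(sub)} fixes, MCP area = {area:.1f}")
```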

9.
The risk difference is an intelligible measure for comparing disease incidence in two exposure or treatment groups. Despite its convenience in interpretation, it is less prevalent in epidemiological and clinical areas, where regression models are required in order to adjust for confounding. One major barrier to its popularity is that standard linear binomial or Poisson regression models can produce estimated probabilities outside the range of (0,1), resulting in possible convergence issues. For estimating adjusted risk differences, we propose a general framework covering various constraint approaches based on binomial and Poisson regression models. The proposed methods span the areas of ordinary least squares, maximum likelihood estimation, and Bayesian inference. Compared to existing approaches, our methods prevent estimates and confidence intervals of predicted probabilities from falling outside the valid range. Through extensive simulation studies, we demonstrate that the proposed methods solve the issue of estimates or confidence limits of predicted probabilities falling outside (0,1), while offering performance comparable to the alternatives in terms of bias, variability, and coverage rates in point and interval estimation of the risk difference. An application study is performed using data from the Prospective Registry Evaluating Myocardial Infarction: Event and Recovery (PREMIER) study.
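One way to realize such a constraint is to maximize the binomial log-likelihood of a linear-probability model subject to the fitted probabilities staying inside (ε, 1−ε). The sketch below uses scipy's trust-constr solver as a stand-in for whichever method the paper adopts; the data and function name are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def constrained_risk_diff(X, y, eps=1e-6):
    """X: design matrix with intercept in column 0 and treatment in column 1."""
    def negloglik(beta):
        p = np.clip(X @ beta, eps, 1 - eps)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    con = LinearConstraint(X, eps, 1 - eps)   # keep X @ beta inside (0, 1)
    beta0 = np.zeros(X.shape[1])
    beta0[0] = y.mean()                       # feasible starting point
    res = minimize(negloglik, beta0, method="trust-constr", constraints=[con])
    return res.x                              # res.x[1] is the adjusted risk difference

rng = np.random.default_rng(5)
n = 400
trt = rng.integers(0, 2, n)
age = rng.uniform(0, 1, n)
y = rng.binomial(1, 0.15 + 0.10 * trt + 0.20 * age)
X = np.column_stack([np.ones(n), trt, age])
print(constrained_risk_diff(X, y))
```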

10.
Multistate models can be successfully used for describing complex event history data, for example, describing stages in the disease progression of a patient. The so-called "illness-death" model plays a central role in the theory and practice of these models. Many time-to-event datasets from medical studies with multiple end points can be reduced to this generic structure. In these models one important goal is the modeling of transition rates, but biomedical researchers are also interested in reporting interpretable results in a simple and summarized manner. These include estimates of predictive probabilities, such as the transition probabilities, occupation probabilities, cumulative incidence functions, and the sojourn time distributions. We will give a review of some of the available methods for estimating such quantities in the progressive illness-death model conditionally (or not) on covariate measures. For some of these quantities estimators based on subsampling are employed. Subsampling, also referred to as landmarking, leads to small sample sizes and usually to heavily censored data, leading to estimators with higher variability. To overcome this issue, estimators based on a preliminary estimation (presmoothing) of the probability of censoring may be used. Among these, the presmoothed estimators for the cumulative incidences are new. We also introduce feasible estimation methods for the cumulative incidence function conditionally on covariate measures. The proposed methods are illustrated using real data. A comparative simulation study of several estimation approaches is performed, and existing software in the form of R packages is discussed.

11.
When the underlying responses are discrete, the interval estimation of the intraclass correlation derived from the normality assumption is not strictly valid. This paper focuses on interval estimation of the intraclass correlation under the negative binomial distribution, which has been commonly applied in epidemiological and consumer purchasing behaviour studies. The paper develops two simple asymptotic interval estimation procedures in closed form for the intraclass correlation. To evaluate the performance of these procedures, a Monte Carlo simulation is carried out for a variety of situations. An example concerning consumer purchasing behaviour is also included to illustrate the use of the two proposed interval estimation procedures.
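Under the usual gamma-Poisson representation of the negative binomial (counts sharing a Gamma frailty with shape k and mean μ), the intraclass correlation is ρ = μ/(μ + k). The rough moment-based sketch below gives only a point estimate, whereas the paper derives closed-form interval procedures.

```python
import numpy as np

def nb_icc(counts):
    """Moment estimate of rho = mu / (mu + k) from marginal NB counts."""
    mu = counts.mean()
    v = counts.var(ddof=1)
    if v <= mu:                # no over-dispersion detected
        return 0.0
    k = mu**2 / (v - mu)       # moment estimate of the gamma shape
    return mu / (mu + k)

rng = np.random.default_rng(11)
lam = rng.gamma(shape=2.0, scale=1.5, size=500)   # k = 2, mean mu = 3
y = rng.poisson(lam)
print(nb_icc(y))               # should be near mu / (mu + k) = 0.6
```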

12.
Outcome misclassification occurs frequently in binary-outcome studies and can result in biased estimation of quantities such as the incidence, prevalence, cause-specific hazards, cumulative incidence functions, and so forth. A number of remedies have been proposed to address the potential misclassification of the outcomes in such data. The majority of these remedies lie in the estimation of misclassification probabilities, which are in turn used to adjust analyses for outcome misclassification. A number of authors advocate using a gold-standard procedure on a sample internal to the study to learn about the extent of the misclassification. With this type of internal validation, the problem of quantifying the misclassification also becomes a missing data problem because, by design, the true outcomes are ascertained only on a subset of the entire study sample. Although the process of estimating misclassification probabilities appears conceptually simple, the estimation methods proposed so far have several methodological and practical shortcomings. Most methods rely on the missing outcome data being missing completely at random (MCAR), a rather stringent assumption that is unlikely to hold in practice. Some of the existing methods also tend to be computationally intensive. To address these issues, we propose a computationally efficient, easy-to-implement, pseudo-likelihood estimator of the misclassification probabilities under a missing at random (MAR) assumption, in studies with an available internal-validation sample. We present the estimator through the lens of studies with competing-risks outcomes, though the estimator extends beyond this setting. We describe the consistency and asymptotic distributional properties of the resulting estimator and derive a closed-form estimator of its variance. The finite-sample performance of this estimator is evaluated via simulations. Using data from a real-world study with competing-risks outcomes, we illustrate how the proposed method can be used to estimate misclassification probabilities. We also show how the estimated misclassification probabilities can be used in an external study to adjust for possible misclassification bias when modeling cumulative incidence functions.
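The MAR idea can be illustrated with inverse-probability weighting, a simpler cousin of the paper's pseudo-likelihood: model selection into the validation sample given the observed data, then weight the validated subjects when estimating the misclassification rates. All variable names below are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_misclass(y_star, x, validated, y_true):
    """Estimate P(true outcome != j | observed outcome = j) under MAR.
    y_true is used only where validated == 1 (weights are zero elsewhere)."""
    Z = np.column_stack([y_star, x])
    pi = LogisticRegression().fit(Z, validated).predict_proba(Z)[:, 1]
    w = validated / np.clip(pi, 1e-3, None)   # inverse-probability weights
    rates = {}
    for j in (0, 1):
        sel = y_star == j
        rates[j] = np.sum(w[sel] * (y_true[sel] != j)) / np.sum(w[sel])
    return rates

rng = np.random.default_rng(8)
n = 2000
x = rng.normal(size=n)
y_true = rng.binomial(1, 0.4, n)
flip = rng.random(n) < 0.1                          # 10% misclassification
y_star = np.where(flip, 1 - y_true, y_true)
p_val = 1 / (1 + np.exp(-(-1.5 + 0.8 * y_star + 0.5 * x)))  # MAR selection
validated = rng.binomial(1, p_val)
print(ipw_misclass(y_star, x, validated, y_true))
```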

13.
In studies that require long-term and/or costly follow-up of participants to evaluate a treatment, there is often interest in identifying and using a surrogate marker to evaluate the treatment effect. While several statistical methods have been proposed to evaluate potential surrogate markers, available methods generally do not account for or address the potential for a surrogate to vary in utility or strength by patient characteristics. Previous work examining surrogate markers has indicated that there may be such heterogeneity, that is, that a surrogate marker may be useful (with respect to capturing the treatment effect on the primary outcome) for some subgroups, but not for others. This heterogeneity is important to understand, particularly if the surrogate is to be used in a future trial to replace the primary outcome. In this paper, we propose an approach and estimation procedures to measure the surrogate strength as a function of a baseline covariate W and thus examine potential heterogeneity in the utility of the surrogate marker with respect to W. Within a potential outcome framework, we quantify the surrogate strength/utility using the proportion of treatment effect on the primary outcome that is explained by the treatment effect on the surrogate. We propose testing procedures to test for evidence of heterogeneity, examine finite sample performance of these methods via simulation, and illustrate the methods using AIDS clinical trial data.
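As a rough illustration of such heterogeneity, the sketch below computes a Freedman-style proportion of treatment effect explained (PTE) separately within strata of a baseline covariate W; this is a proxy for, not the authors' potential-outcomes version of, the PTE, and all names and simulated data are illustrative.

```python
import numpy as np

def pte_by_stratum(y, s, trt, w, w_cut):
    """PTE = 1 - (surrogate-residualised effect)/(crude effect), per stratum.
    Assumes a nonzero crude effect in each stratum."""
    out = {}
    for label, mask in [("W low", w <= w_cut), ("W high", w > w_cut)]:
        yt, yc = y[mask & (trt == 1)], y[mask & (trt == 0)]
        delta = yt.mean() - yc.mean()                     # crude effect
        # residualise y on the surrogate, then recompute the effect
        b = np.cov(y[mask], s[mask])[0, 1] / s[mask].var(ddof=1)
        r = y[mask] - b * s[mask]
        delta_res = r[trt[mask] == 1].mean() - r[trt[mask] == 0].mean()
        out[label] = 1 - delta_res / delta
    return out

rng = np.random.default_rng(6)
n = 4000
w = rng.random(n)
trt = rng.integers(0, 2, n)
s = trt * (w > 0.5) * 2.0 + rng.normal(0, 1, n)   # surrogate works if W high
y = s + trt * (w <= 0.5) * 2.0 + rng.normal(0, 1, n)
print(pte_by_stratum(y, s, trt, w, 0.5))   # PTE near 0 for W low, 1 for W high
```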

14.
In estimating mutation rates using the Luria-Delbrück experimental protocol, it is often assumed that all mutant cells survive the plating procedure to form visible mutant colonies. This assumption of perfect plating efficiency may not hold in certain circumstances, but none of the existing estimation methods that adjust for plating efficiency is strictly based on the likelihood principle. To ameliorate this situation, we propose likelihood-based algorithms for computing point and interval estimates of mutation rates.
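Under perfect plating, the likelihood can be computed from the classical Lea-Coulson probabilities via the Ma-Sandri-Sarkar recursion, p_0 = e^{-m} and p_n = (m/n) Σ_{i<n} p_i/[(n−i)(n−i+1)]. The sketch below implements that baseline MLE; the paper's contribution is the analogous likelihood when the plating efficiency is below one, which modifies this pmf. The mutant counts are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ld_pmf(m, n_max):
    """P(0..n_max mutants) under the Luria-Delbruck model, mean m mutations."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        i = np.arange(n)                       # previous terms p_0..p_{n-1}
        p[n] = (m / n) * np.sum(p[i] / ((n - i) * (n - i + 1.0)))
    return p

def ld_mle(counts):
    counts = np.asarray(counts)
    def negloglik(m):
        p = ld_pmf(m, counts.max())
        return -np.sum(np.log(p[counts]))
    res = minimize_scalar(negloglik, bounds=(1e-6, 50), method="bounded")
    return res.x

# Example: mutant counts from parallel cultures (illustrative numbers)
counts = [0, 0, 1, 3, 0, 2, 15, 1, 0, 4, 1, 0, 7, 2, 1]
print(ld_mle(counts))   # estimated expected number of mutations per culture
```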

15.
Li R, Nie L. Biometrics 2008, 64(3): 904-911.
Motivated by an analysis of a real data set in ecology, we consider a class of partially nonlinear models in which both a nonparametric component and a parametric component are present. We develop two new estimation procedures to estimate the parameters in the parametric component. Consistency and asymptotic normality of the resulting estimators are established. We further propose an estimation procedure and a generalized F-test procedure for the nonparametric component in the partially nonlinear models. Asymptotic properties of the newly proposed estimation procedure and the test statistic are derived. The finite-sample performance of the proposed inference procedures is assessed by Monte Carlo simulation studies. An application in ecology is used to illustrate the proposed methods.

16.
Tang NS, Tang ML. Biometrics 2002, 58(4): 972-980.
In this article, we consider small-sample statistical inference for the rate ratio (RR) in a correlated 2 x 2 table with a structural zero in one of the off-diagonal cells. The existing Wald test statistic and logarithmic-transformation test statistic are adopted for this purpose. Hypothesis testing and confidence interval construction based on large-sample theory are reviewed first. We then propose reliable small-sample exact unconditional procedures for hypothesis testing and confidence interval construction. We present empirical results demonstrating the better confidence interval performance of our proposed exact unconditional procedures over the traditional large-sample procedures in small-sample designs. Unlike the findings given in Lui (1998, Biometrics 54, 706-711), our empirical studies show that the existing asymptotic procedures may not attain a prespecified confidence level even in moderate sample-size designs (e.g., n = 50). Our exact unconditional procedures, on the other hand, do not suffer from this problem. Hence, the asymptotic procedures should be applied with caution. We propose two approximate unconditional confidence interval construction methods that outperform the existing asymptotic ones in terms of coverage probability and expected interval width. Also, we empirically demonstrate that the approximate unconditional tests are more powerful than their associated exact unconditional tests. A real data set from a two-step tuberculosis testing study is used to illustrate the methodologies.

17.
This paper discusses two-sample comparison in the case of interval-censored failure time data. For this problem, one common approach is to employ nonparametric test procedures, which usually give p-values but not a direct or exact quantitative measure of the survival or treatment difference of interest. In particular, these procedures cannot provide a hazard ratio estimate, which is commonly used to measure the difference between two treatments or samples. For interval-censored data, a few nonparametric test procedures have been developed, but no procedure for hazard ratio estimation appears to exist. Correspondingly, we present two procedures for nonparametric estimation of the hazard ratio of two samples in interval-censored data situations. They are generalizations of the corresponding procedures for right-censored failure time data. An extensive simulation study is conducted to evaluate the performance of the two procedures and indicates that they work reasonably well in practice. For illustration, they are applied to a set of interval-censored data arising from a breast cancer study.

18.
MOTIVATION: High-dimensional data such as microarrays have created new challenges for traditional statistical methods. One such example is class prediction with high-dimension, low-sample-size data. Due to the small sample size, the sample mean estimates are usually unreliable. As a consequence, the performance of class prediction methods using the sample mean may also be unsatisfactory. To obtain more accurate parameter estimates, statistical methods such as regularization through shrinkage are often desired. RESULTS: In this article, we investigate the family of shrinkage estimators for the mean value under the quadratic loss function. The optimal shrinkage parameter is proposed for the scenario in which the sample size is fixed and the dimension is large. We then construct a shrinkage-based diagonal discriminant rule by replacing the sample mean with the proposed shrinkage mean. Finally, we demonstrate via simulation studies and real data analysis that the proposed shrinkage-based rule outperforms its original competitor in a wide range of settings.
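The rule can be sketched as follows: shrink each class's sample mean toward the pooled mean and plug the result into a diagonal (independence) discriminant score. The shrinkage intensity is fixed here for illustration, whereas the paper derives an optimal choice under quadratic loss; the function name and simulated data are illustrative.

```python
import numpy as np

def shrunken_diagonal_lda(X_train, y_train, X_test, lam=0.5):
    classes = np.unique(y_train)
    grand = X_train.mean(axis=0)
    var = X_train.var(axis=0, ddof=1) + 1e-8       # shared diagonal variances
    scores = []
    for c in classes:
        mu = X_train[y_train == c].mean(axis=0)
        mu_shrunk = (1 - lam) * mu + lam * grand   # shrink toward grand mean
        scores.append(((X_test - mu_shrunk) ** 2 / var).sum(axis=1))
    return classes[np.argmin(scores, axis=0)]

rng = np.random.default_rng(2)
p, n = 500, 20                      # high dimension, low sample size
mu1 = np.zeros(p)
mu2 = np.r_[np.full(20, 1.0), np.zeros(p - 20)]
X = np.vstack([rng.normal(mu1, 1, (n, p)), rng.normal(mu2, 1, (n, p))])
y = np.r_[np.zeros(n), np.ones(n)]
X_new = rng.normal(mu2, 1, (10, p))
print(shrunken_diagonal_lda(X, y, X_new))   # should mostly predict class 1
```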

19.
Ghosh D, Taylor JM, Sargent DJ. Biometrics 2012, 68(1): 226-232.
There has been great recent interest in the medical and statistical literature in the assessment and validation of surrogate endpoints as proxies for clinical endpoints in medical studies. More recently, authors have focused on using meta-analytical methods for quantification of surrogacy. In this article, we extend existing procedures for analysis based on the accelerated failure time model to this setting. An advantage of this approach relative to the proportional hazards model is that it allows for analysis in the semicompeting-risks setting, where we model the region in which the surrogate endpoint occurs before the true endpoint. Several estimation methods and attendant inferential procedures are presented. In addition, between- and within-trial methods for evaluating surrogacy are developed; a novel principal components procedure is developed for quantifying trial-level surrogacy. The methods are illustrated by application to data from several studies in colorectal cancer.

20.
Parast L, Cai T, Tian L. Biometrics 2019, 75(4): 1253-1263.
The development of methods to identify, validate, and use surrogate markers to test for a treatment effect has been an area of intense research interest, given the potential for valid surrogate markers to reduce the required costs and follow-up times of future studies. Several quantities and procedures have been proposed to assess the utility of a surrogate marker. However, few methods have been proposed to address how one might use the surrogate marker information to test for a treatment effect at an earlier time point, especially in settings where the primary outcome and the surrogate marker are subject to censoring. In this paper, we propose a novel test statistic to test for a treatment effect using surrogate marker information measured prior to the end of the study in a time-to-event outcome setting. We propose a robust nonparametric estimation procedure and corresponding inference procedures. In addition, we evaluate the power for the design of a future study based on surrogate marker information. We illustrate the proposed procedure and the relative power of the proposed test compared to a test performed at the end of the study using simulation studies and an application to data from the Diabetes Prevention Program.
