Similar documents (20 results found)
1.
In randomized trials, the treatment influences not only endpoints but also other variables measured after randomization which, when used as covariates to adjust for the observed imbalance, become pseudo‐covariates. There is a logical circularity in adjusting for a pseudo‐covariate because the variability in the endpoint that is attributed not to the treatment but rather to the pseudo‐covariate may actually represent an effect of the treatment modulated by the pseudo‐covariate. This potential bias is well known, but we offer new insight into how it can lead to reversals in the direction of the apparent treatment effect by way of stage migration. We then discuss a related problem that is not generally appreciated, specifically how the absence of allocation concealment can lead to this reversal of the direction of the apparent treatment effect even when adjustment is for a true covariate measured prior to randomization.
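To make the reversal mechanism concrete, here is a minimal Python simulation (a toy setup with invented effect sizes, not the paper's stage-migration example): treatment is beneficial overall, but because it also shifts a post-randomization variable, adjusting for that pseudo-covariate flips the sign of the estimated effect.

```python
# Toy illustration of pseudo-covariate adjustment reversing an apparent treatment effect.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
treat = rng.integers(0, 2, n)                        # randomized assignment
stage = 2.0 * treat + rng.normal(size=n)             # post-randomization variable, affected by treatment
y = -1.0 * treat + 3.0 * stage + rng.normal(size=n)  # total treatment effect = -1 + 3*2 = +5 (beneficial)

# Unadjusted comparison recovers the (positive) total treatment effect.
unadjusted = y[treat == 1].mean() - y[treat == 0].mean()

# Adjusting for the pseudo-covariate isolates only the (negative) direct path.
X = np.column_stack([np.ones(n), treat, stage])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"unadjusted effect: {unadjusted:+.2f}, adjusted for pseudo-covariate: {adjusted:+.2f}")
```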

2.
Inference after two‐stage single‐arm designs with binary endpoint is challenging due to the nonunique ordering of the sampling space in multistage designs. We illustrate the problem of specifying test‐compatible confidence intervals for designs with nonconstant second‐stage sample size and present two approaches that guarantee confidence intervals consistent with the test decision. Firstly, we extend the well‐known Clopper–Pearson approach of inverting a family of two‐sided hypothesis tests from the group‐sequential case to designs with fully adaptive sample size. Test compatibility is achieved by using a sample space ordering that is derived from a test‐compatible estimator. The resulting confidence intervals tend to be conservative but assure the nominal coverage probability. In order to assess the possibility of further improving these confidence intervals, we pursue a direct optimization approach minimizing the mean width of the confidence intervals. While the latter approach produces more stable coverage probabilities, it is also slightly anti‐conservative and yields only negligible improvements in mean width. We conclude that the Clopper–Pearson‐type confidence intervals based on a test‐compatible estimator are the best choice if the nominal coverage probability is not to be undershot and compatibility of test decision and confidence interval is to be preserved.
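For orientation, the single-stage Clopper–Pearson interval that the authors extend inverts two one-sided binomial tests; a standard beta-quantile implementation is sketched below. The two-stage, test-compatible version in the paper requires the design's sample-space ordering and is not reproduced here.

```python
# Classical single-stage Clopper–Pearson interval via beta quantiles.
from scipy.stats import beta

def clopper_pearson(x: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact two-sided confidence interval for a binomial proportion, x successes in n trials."""
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

print(clopper_pearson(7, 24))  # e.g., 7 responders among 24 patients
```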

3.
Understanding the way stimulus properties are encoded in the nerve cell responses of sensory organs is one of the fundamental scientific questions in neurosciences. Different neuronal coding hypotheses can be compared by use of an inverse procedure called stimulus reconstruction. Here, based on different attributes of experimentally recorded neuronal responses, the values of certain stimulus properties are estimated by statistical classification methods. Comparison of stimulus reconstruction results then allows conclusions to be drawn about the relative importance of covariate features. Since many stimulus properties have a natural order and can therefore be considered as ordinal, we introduce a bivariate ordinal probit model to obtain classifications for the combination of light intensity and velocity of a visual dot pattern based on different covariates extracted from recorded spike trains. For parameter estimation, we develop a Bayesian Gibbs sampler and incorporate penalized splines to model nonlinear effects. We compare the classification performance of different individual cell covariates and simple features of groups of neurons and find that the combination of at least two covariates increases the classification performance significantly. Furthermore, we obtain a nonlinear effect for the first spike latency. The model is compared to a naïve Bayesian stimulus estimation method where it yields comparable misclassification rates for the given dataset. Hence, the bivariate ordinal probit model is shown to be a helpful tool for stimulus reconstruction particularly thanks to its flexibility with respect to the number of covariates as well as their scale and effect type.
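As a rough sketch of the estimation machinery, the following Gibbs sampler implements a univariate ordinal probit with data augmentation in the style of Albert and Chib. It is far simpler than the paper's bivariate model with penalized splines: cutpoints are fixed for brevity, a flat prior is used for the coefficients, and all data are simulated.

```python
# Minimal Albert-Chib Gibbs sampler for a univariate ordinal probit model
# (a much simpler cousin of the paper's bivariate model; cutpoints fixed for brevity).
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, cuts = 500, np.array([-np.inf, 0.0, 1.0, np.inf])     # three ordered categories
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
true_beta = np.array([0.2, 0.8])
y = np.digitize(X @ true_beta + rng.normal(size=n), cuts[1:-1])  # labels 0, 1, 2

XtX_inv = np.linalg.inv(X.T @ X)
beta, draws = np.zeros(2), []
for it in range(2000):
    # 1) Latent utilities z_i ~ N(x_i' beta, 1), truncated to the interval of category y_i.
    mu = X @ beta
    lo, hi = cuts[y] - mu, cuts[y + 1] - mu
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2) beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under a flat prior.
    beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
    if it >= 500:                                          # discard burn-in
        draws.append(beta)

print(np.mean(draws, axis=0))  # posterior mean, close to true_beta
```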

4.
Subgroup analyses are important to medical research because they shed light on the heterogeneity of treatment effects. A treatment–covariate interaction in an individual patient data (IPD) meta‐analysis is the most reliable means to estimate how a subgroup factor modifies a treatment's effectiveness. However, owing to the challenges in collecting participant data, an approach based on aggregate data might be the only option. In these circumstances, it would be useful to assess the relative efficiency and power loss of a subgroup analysis without patient‐level data. We present methods that use aggregate data to estimate the standard error of the treatment–covariate interaction in an IPD meta‐analysis for regression models of a continuous or dichotomous patient outcome. Numerical studies indicate that the estimators have good accuracy. An application to a previously published meta‐regression illustrates the practical utility of the methodology.
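One standard aggregate-data route to such an interaction pools within-trial differences between subgroup treatment effects by inverse-variance weighting; a hedged sketch with invented numbers follows. The paper's estimators of the IPD interaction standard error are more refined than this two-stage approximation.

```python
# Inverse-variance pooling of within-trial subgroup interactions from aggregate data
# (a standard two-stage sketch; all effect estimates and SEs below are hypothetical).
import numpy as np

# Per-trial treatment effects (e.g., mean differences) and SEs in two subgroups.
eff_sub1 = np.array([0.40, 0.55, 0.30]); se_sub1 = np.array([0.10, 0.12, 0.15])
eff_sub0 = np.array([0.20, 0.25, 0.10]); se_sub0 = np.array([0.11, 0.10, 0.14])

inter = eff_sub1 - eff_sub0              # within-trial interaction estimates
var = se_sub1**2 + se_sub0**2            # subgroup estimates are independent within a trial
w = 1.0 / var
pooled = np.sum(w * inter) / np.sum(w)   # fixed-effect pooled interaction
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"interaction = {pooled:.3f} (SE {pooled_se:.3f})")
```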

5.
We propose a parametric regression model for the cumulative incidence functions (CIFs) commonly used for competing risks data. The model adopts a modified logistic model as the baseline CIF and a generalized odds‐rate model for covariate effects, and it explicitly takes into account the constraint that a subject with any given prognostic factors should eventually fail from one of the causes such that the asymptotes of the CIFs should add up to one. This constraint intrinsically holds in a nonparametric analysis without covariates, but is easily overlooked in a semiparametric or parametric regression setting. We hence model the CIF from the primary cause assuming the generalized odds‐rate transformation and the modified logistic function as the baseline CIF. Under the additivity constraint, the covariate effects on the competing cause are modeled by a function of the asymptote of the baseline distribution and the covariate effects on the primary cause. The inference procedure is straightforward by using the standard maximum likelihood theory. We demonstrate desirable finite‐sample performance of our model by simulation studies in comparison with existing methods. Its practical utility is illustrated in an analysis of a breast cancer dataset to assess the treatment effect of tamoxifen, adjusting for age and initial pathological tumor size, on breast cancer recurrence that is subject to dependent censoring by second primary cancers and deaths.

6.
Spatial autocorrelation in biology 1. Methodology
Spatial autocorrelation analysis tests whether the observed value of a nominal, ordinal, or interval variable at one locality is independent of values of the variable at neighbouring localities. The computation of autocorrelation coefficients for nominal, ordinal, and interval data is illustrated, together with appropriate significance tests. The method is extended to include the computation of correlograms for spatial autocorrelation. These show the autocorrelation coefficient as a function of distance between pairs of localities being considered, and summarize the patterns of geographic variation exhibited by the response surface of any given variable.
Autocorrelation analysis is applied to microgeographic variation of allozyme frequencies in the snail Helix aspersa. Differences in variational patterns in two city blocks are interpreted.
The inferences that can be drawn from correlograms are discussed and illustrated with the aid of some artificially generated patterns. Computational formulae, expected values and standard errors are furnished in two appendices.
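The workhorse coefficient for interval data in this methodology is Moran's I; a minimal implementation with binary neighbour weights is sketched below (the localities, weights, and frequencies are invented).

```python
# Moran's I for interval-scale data on a small set of localities (binary neighbour weights).
import numpy as np

def morans_i(x: np.ndarray, w: np.ndarray) -> float:
    """I = (n / sum(w)) * sum_ij w_ij (x_i - xbar)(x_j - xbar) / sum_i (x_i - xbar)^2."""
    n = len(x)
    d = x - x.mean()
    num = (w * np.outer(d, d)).sum()
    return (n / w.sum()) * num / (d @ d)

# Four localities on a line, each adjacent to the next (symmetric 0/1 weights).
w = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
x = np.array([0.30, 0.35, 0.60, 0.65])  # e.g., allozyme frequencies at each locality
print(morans_i(x, w))  # positive here: similar values cluster among neighbours
```

Computing this coefficient over successively wider distance classes of locality pairs yields the correlogram the abstract describes.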

7.
A disease severity index (DSI) is a single number for summarising a large amount of information on disease severity. The DSI has most often been used with data based on a special type of ordinal scale comprising a series of consecutive ranges of defined numeric intervals, generally based on the percent area of symptoms presenting on the specimen(s). Plant pathologists and other professionals use such ordinal scale data in conjunction with a DSI (%) for treatment comparisons. The objective of this work is to explore the effects of both the choice of scale (i.e. scales having equal or unequal classes, or different widths of intervals) and the choice of values to represent scale intervals (i.e. the ordinal grade of the category or the midpoint value of the interval) on the null hypothesis test for the treatment comparison. A two‐stage simulation approach was employed to approximate the real mechanisms governing the disease‐severity sampling design. Subsequently, a meta‐analysis was performed to compare the effects of two treatments, which demonstrated that using quantitative ordinal rating grades or the midpoint conversion for the ranges of disease severity yielded very comparable results with respect to the power of hypothesis testing. However, the principal factor determining the power of the hypothesis test is the nature of the intervals, not the selection of values for ordinal scale intervals (i.e. not the midpoint or ordinal grade). Although using the percent scale is always preferable, the results of this study provide a framework for developing improved research methods where the use of ordinal scales in conjunction with a DSI is either preferred or a necessity for comparing disease severities.
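The two choices of interval values compared in the study can be illustrated directly. The sketch below computes a DSI from hypothetical counts on an invented 0-5 ordinal scale with unequal class widths, once from the ordinal grades and once from the class midpoints.

```python
# DSI (%) from ordinal severity ratings, computed two ways: ordinal grades vs. interval midpoints.
import numpy as np

# Hypothetical 0-5 ordinal scale over percent-severity ranges, with class midpoints.
grades    = np.array([0, 1, 2, 3, 4, 5])
midpoints = np.array([0.0, 1.5, 5.5, 17.5, 37.5, 75.0])  # midpoint (%) of each class interval
counts    = np.array([4, 10, 12, 8, 4, 2])               # specimens per class in one plot

dsi_grade = 100 * (grades * counts).sum() / (counts.sum() * grades.max())
dsi_mid   = (midpoints * counts).sum() / counts.sum()    # already on the percent scale
print(f"DSI from grades: {dsi_grade:.1f}%, from midpoints: {dsi_mid:.1f}%")
```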

8.
In this paper, we focus on measures to evaluate discrimination of prediction models for ordinal outcomes. We review existing extensions of the dichotomous c‐index—which is equivalent to the area under the receiver operating characteristic (ROC) curve—suggest a new measure, and study their relationships. The volume under the ROC surface (VUS) scores sets of cases including one case from each outcome category. VUS considers sets as either correctly or incorrectly ordered by the model. All other existing measures assess pairs of cases. We propose an ordinal c‐index (ORC) that is set‐based but, contrary to VUS, scores sets more gradually by indicating the closeness of the model‐based ordering to the perfect ordering. As a result, the ORC does not decrease rapidly as the number of outcome categories increases. It turns out that the ORC can be rewritten as the average of pairwise c‐indexes. Hence, the ORC has both a set‐ and pair‐based interpretation. There are several relationships between the existing measures, leading to only two types of existing measures: a prevalence‐weighted average of pairwise c‐indexes and the VUS. Our suggested measure ORC positions itself in between as it is set‐based but turns out to equal an unweighted average of pairwise c‐indexes. The measures are demonstrated through a case study on the prediction of six‐month outcome after traumatic brain injury. In conclusion, the set‐based nature and graded scoring system make the ORC an attractive measure with a simple interpretation, together with its prevalence independence, which is a natural property of a discrimination measure.
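The abstract notes that the ORC can be rewritten as the unweighted average of pairwise c-indexes; the sketch below uses that pairwise form on simulated scores (the set-based definition would give the same value).

```python
# ORC as the unweighted mean of pairwise c-indexes (the equivalence noted in the abstract).
import itertools
import numpy as np

def c_index(score_lo: np.ndarray, score_hi: np.ndarray) -> float:
    """P(score in higher category > score in lower) + 0.5 * P(tie), over all cross-pairs."""
    diff = score_hi[:, None] - score_lo[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def orc(scores: np.ndarray, y: np.ndarray) -> float:
    cats = np.unique(y)
    pairs = itertools.combinations(cats, 2)
    return np.mean([c_index(scores[y == lo], scores[y == hi]) for lo, hi in pairs])

rng = np.random.default_rng(3)
y = rng.integers(0, 4, 300)                    # four ordered outcome categories
scores = y + rng.normal(scale=1.0, size=300)   # model score correlated with outcome
print(orc(scores, y))
```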

9.
Population composition is often estimated by double sampling in which the value of a covariate is noted on each of a large number of randomly selected units and the value of the covariate and the exact class to which the unit belongs is noted for a smaller sample. The cross‐classified sample can be used to estimate the classification rates and these, in turn, can be used in conjunction with the estimated distribution of the covariate to obtain an improved estimate of the population composition over that obtained by direct observation of the identity of the individuals in a small sample. There are two approaches to this problem characterized by the way in which the classification rates are defined. The simplest approach uses estimates of the probability P(i | j) that the unit is actually in class i given that the covariate is in class j. The more complicated approach uses estimates of the probability P(j | i) that the covariate falls in class j given that the unit is actually in class i. The latter approach involves estimating more parameters than the former but avoids the necessity for the two samples to be drawn from the same population. We show the two approaches can be combined when there are multiple surveys. For example, one might conduct a disease survey for several years; in each year the accurate and/or error‐prone techniques may be applied to samples. The sensitivities and specificities of the error‐prone test are assumed constant across surveys. Generalizations allow for more than one error‐prone classifier and partial verification (estimation of misclassification rates by application of the accurate technique to fixed subsamples from each error‐prone category). The general approach is illustrated by considering a repeated survey for malaria.
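A minimal sketch of the simpler approach, with invented counts: P(i | j) is estimated from a cross-classified subsample and applied to the covariate distribution observed in the large, covariate-only sample.

```python
# Double-sampling correction of population composition via estimated P(class i | covariate j).
import numpy as np

# Cross-classified subsample: rows = true class i, columns = covariate class j (hypothetical).
cross = np.array([[45,  5],
                  [10, 40]])
P_i_given_j = cross / cross.sum(axis=0, keepdims=True)  # column-wise: P(i | j)

# Large cheap sample: covariate class frequencies only.
n_j = np.array([700, 300])
q_j = n_j / n_j.sum()

p_i = P_i_given_j @ q_j   # corrected composition estimate: p_i = sum_j P(i|j) q_j
print(p_i)                # compare with the naive q_j, which ignores misclassification
```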

10.
Fully Bayesian methods for Cox models specify a model for the baseline hazard function. Parametric approaches generally provide monotone estimations. Semi‐parametric choices allow for more flexible patterns but they can suffer from overfitting and instability. Regularization methods through prior distributions with correlated structures usually give reasonable answers to these types of situations. We discuss Bayesian regularization for Cox survival models defined via flexible baseline hazards specified by a mixture of piecewise constant functions and by a cubic B‐spline function. For those “semi‐parametric” proposals, different prior scenarios ranging from prior independence to particular correlated structures are discussed in a real study with microvirulence data and in an extensive simulation scenario that includes different data sample and time axis partition sizes in order to capture risk variations. The posterior distribution of the parameters was approximated using Markov chain Monte Carlo methods. Model selection was performed in accordance with the deviance information criteria and the log pseudo‐marginal likelihood. The results obtained reveal that, in general, Cox models present great robustness in covariate effects and survival estimates independent of the baseline hazard specification. In relation to the “semi‐parametric” baseline hazard specification, the B‐splines hazard function is less dependent on the regularization process than the piecewise specification because it demands a smaller time axis partition to estimate a similar behavior of the risk.

11.
Recurrent events data are common in experimental and observational studies. It is often of interest to estimate the effect of an intervention on the incidence rate of the recurrent events. The incidence rate difference is a useful measure of intervention effect. A weighted least squares estimator of the incidence rate difference for recurrent events was recently proposed for an additive rate model in which both the baseline incidence rate and the covariate effects were constant over time. In this article, we relax this model assumption and examine the properties of the estimator under the additive and multiplicative rate models assumption in which the baseline incidence rate and covariate effects may vary over time. We show analytically and numerically that the estimator gives an appropriate summary measure of the time‐varying covariate effects. In particular, when the underlying covariate effects are additive and time‐varying, the estimator consistently estimates the weighted average of the covariate effects over time. When the underlying covariate effects are multiplicative and time‐varying, and if there is only one binary covariate indicating the intervention status, the estimator consistently estimates the weighted average of the underlying incidence rate difference between the intervention and control groups over time. We illustrate the method with data from a randomized vaccine trial.
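The crude two-group special case is easy to compute and is sketched below with invented counts; the paper's weighted least squares estimator generalizes this and, as the abstract notes, retains a weighted-average interpretation when effects vary over time. The simple Poisson variance used here ignores within-subject correlation of recurrent events.

```python
# Crude incidence rate difference for recurrent events, with a Poisson-based variance
# (a simple special case; counts and person-times below are hypothetical).
import numpy as np

events_trt, pt_trt = 120, 950.0   # event count and total person-time, intervention arm
events_ctl, pt_ctl = 180, 910.0   # control arm

rate_trt, rate_ctl = events_trt / pt_trt, events_ctl / pt_ctl
ird = rate_trt - rate_ctl
se = np.sqrt(events_trt / pt_trt**2 + events_ctl / pt_ctl**2)  # treats counts as Poisson
print(f"IRD = {ird:.4f} per unit time "
      f"(95% CI {ird - 1.96*se:.4f} to {ird + 1.96*se:.4f})")
```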

12.
SUMMARY: We consider two-armed clinical trials in which the response and/or the covariates are observed on either a binary, ordinal, or continuous scale. A new general nonparametric (NP) approach for covariate adjustment is presented using the notion of a relative effect to describe treatment effects. The relative effect is defined by the probability of observing a higher response in the experimental than in the control arm. The notion is invariant under monotone transformations of the data and is therefore especially suitable for ordinal data. For a normal or binary distributed response the relative effect is the transformed effect size or the difference of response probability, respectively. An unbiased and consistent NP estimator for the relative effect is presented. Further, we suggest a NP procedure for correcting the relative effect for covariate imbalance and random covariate imbalance, yielding a consistent estimator for the adjusted relative effect. Asymptotic theory has been developed to derive test statistics and confidence intervals. The test statistic is based on the joint behavior of the estimated relative effect for the response and the covariates. It is shown that the test statistic can be used to evaluate the treatment effect in the presence of (random) covariate imbalance. Approximations for small sample sizes are considered as well. The sampling behavior of the estimator of the adjusted relative effect is examined. We also compare the probability of a type I error and the power of our approach to standard covariate adjustment methods by means of a simulation study. Finally, our approach is illustrated on three studies involving ordinal responses and covariates.
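The unadjusted relative effect p = P(X < Y) + 0.5 P(X = Y) has a convenient rank-based estimator, equivalent to the normalized Mann–Whitney statistic with mid-ranks; a sketch on invented ordinal data follows (the covariate-adjustment step of the paper is not attempted here).

```python
# Rank-based estimate of the relative effect p = P(X < Y) + 0.5 * P(X = Y),
# where X is a control-arm response and Y an experimental-arm response.
import numpy as np
from scipy.stats import rankdata

def relative_effect(control: np.ndarray, experimental: np.ndarray) -> float:
    r = rankdata(np.concatenate([control, experimental]))  # mid-ranks handle ties (key for ordinal data)
    n0, n1 = len(control), len(experimental)
    return (r[n0:].mean() - (n1 + 1) / 2) / n0

control      = np.array([1, 2, 2, 3, 3, 4])   # ordinal responses, e.g. severity grades
experimental = np.array([2, 3, 3, 4, 4, 4])
print(relative_effect(control, experimental))  # > 0.5: experimental arm tends to respond higher
```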

13.
Kaifeng Lu, Biometrics 2010, 66(3), 891–896
Summary: In randomized clinical trials, measurements are often collected on each subject at a baseline visit and several post‐randomization time points. The longitudinal analysis of covariance in which the postbaseline values form the response vector and the baseline value is treated as a covariate can be used to evaluate the treatment differences at the postbaseline time points. Liang and Zeger (2000, Sankhyā: The Indian Journal of Statistics, Series B 62, 134–148) propose a constrained longitudinal data analysis in which the baseline value is included in the response vector together with the postbaseline values and a constraint of a common baseline mean across treatment groups is imposed on the model as a result of randomization. If the baseline value is subject to missingness, the constrained longitudinal data analysis is shown to be more efficient for estimating the treatment differences at postbaseline time points than the longitudinal analysis of covariance. The efficiency gain increases with the number of subjects missing baseline and the number of subjects missing all postbaseline values, and, for the pre–post design, decreases with the absolute correlation between baseline and postbaseline values.

14.
Directly standardized rates continue to be an integral tool for presenting rates for diseases that are highly dependent on age, such as cancer. Statistically, these rates are modeled as a weighted sum of Poisson random variables. This is a difficult statistical problem, because there are k observed Poisson variables and k unknown means. The gamma confidence interval has been shown through simulations to have at least nominal coverage in all simulated scenarios, but it can be overly conservative. Previous modifications of that method achieve coverage closer to nominal on average, but they do not achieve the nominal coverage bound in all situations. Further, those modifications are not central intervals, and the upper coverage error rate can be substantially more than half the nominal error. Here we apply a mid‐p modification to the gamma confidence interval. Typical mid‐p methods forsake guaranteed coverage to get coverage that is sometimes higher and sometimes lower than the nominal coverage rate, depending on the values of the parameters. The mid‐p gamma interval does not have guaranteed coverage in all situations; however, in the (not rare) situations where the gamma method is overly conservative, the mid‐p gamma interval often has at least nominal coverage. The mid‐p gamma interval is especially appropriate when one wants a central interval, since simulations show that in many situations both the upper and lower coverage error rates are on average less than or equal to half the nominal error rate.
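The gamma interval being modified is, in standard practice, the one of Fay and Feuer (1997); a sketch with hypothetical strata follows (the mid-p modification itself is not reproduced here).

```python
# Gamma interval for a directly standardized rate, in the style of Fay & Feuer (1997).
import numpy as np
from scipy.stats import gamma

def gamma_ci(counts, persontime, std_weights, alpha=0.05):
    w = np.asarray(std_weights) / np.asarray(persontime)  # per-event weights
    y = np.sum(w * counts)                                # directly standardized rate
    v = np.sum(w**2 * counts)                             # its estimated variance
    wmax = w.max()
    lower = gamma.ppf(alpha / 2, y**2 / v, scale=v / y) if y > 0 else 0.0
    upper = gamma.ppf(1 - alpha / 2, (y + wmax)**2 / (v + wmax**2),
                      scale=(v + wmax**2) / (y + wmax))
    return y, (lower, upper)

counts = np.array([5, 20, 40])             # age-specific event counts (hypothetical)
pt     = np.array([1000., 3000., 2000.])   # person-years at risk per age stratum
wts    = np.array([0.3, 0.4, 0.3])         # standard population weights
print(gamma_ci(counts, pt, wts))
```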

15.
P. F. Thall, Biometrics 1988, 44(1), 197–209
In many longitudinal studies it is desired to estimate and test the rate over time of a particular recurrent event. Often only the event counts corresponding to the elapsed time intervals between each subject's successive observation times, and baseline covariate data, are available. The intervals may vary substantially in length and number between subjects, so that the corresponding vectors of counts are not directly comparable. A family of Poisson likelihood regression models incorporating a mixed random multiplicative component in the rate function of each subject is proposed for this longitudinal data structure. A related empirical Bayes estimate of random-effect parameters is also described. These methods are illustrated by an analysis of dyspepsia data from the National Cooperative Gallstone Study.
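The fixed-effects skeleton of such a model is a Poisson regression with a log interval-length offset; the sketch below fits one with statsmodels on simulated data. The paper's mixed multiplicative random component, which induces extra-Poisson variation, is omitted here.

```python
# Poisson rate regression with an offset for interval length
# (fixed-effects skeleton only; the paper adds a multiplicative random subject effect).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
length = rng.uniform(0.5, 3.0, n)      # elapsed time between successive visits
x = rng.integers(0, 2, n)              # baseline covariate, e.g. treatment group
rate = np.exp(0.1 + 0.5 * x)           # events per unit time
counts = rng.poisson(rate * length)    # observed event counts per interval

model = sm.GLM(counts, sm.add_constant(x),
               family=sm.families.Poisson(),
               offset=np.log(length))  # log exposure offset
print(model.fit().summary().tables[1])
```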

16.
In the field of pharmaceutical drug development, there have been extensive discussions on the establishment of statistically significant results that demonstrate the efficacy of a new treatment with multiple co‐primary endpoints. When designing a clinical trial with such multiple co‐primary endpoints, it is critical to determine a sample size that establishes the statistical significance of all the co‐primary endpoints while preserving the desired overall power, because the type II error rate increases with the number of co‐primary endpoints. We consider overall power functions and sample size determinations with multiple co‐primary endpoints that consist of mixed continuous and binary variables, and provide numerical examples to illustrate the behavior of the overall power functions and sample sizes. In formulating the problem, we assume that response variables follow a multivariate normal distribution, where binary variables are observed in a dichotomized normal distribution with a certain point of dichotomy. Numerical examples show that the sample size decreases as the correlation increases when the individual powers of the endpoints are approximately equal to one another.
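For two continuous co-primary endpoints, the overall power (both tests significant) is a bivariate-normal orthant probability, and the required sample size can be found by search; a sketch under assumed standardized effect sizes follows. The continuous-and-binary mixture in the paper needs the dichotomized-normal machinery and is not reproduced.

```python
# Overall power for two co-primary endpoints (both must be significant), assuming
# bivariate-normal test statistics; shows how correlation lowers the required n.
import numpy as np
from scipy.stats import norm, multivariate_normal

def overall_power(n_per_arm, d1, d2, rho, alpha=0.025):
    z = norm.ppf(1 - alpha)
    m = np.sqrt(n_per_arm / 2) * np.array([d1, d2])  # means of the two z-statistics
    cov = np.array([[1, rho], [rho, 1]])
    # P(Z1 > z and Z2 > z) = P(W1 <= m1 - z and W2 <= m2 - z) for standard bivariate W.
    return multivariate_normal(mean=np.zeros(2), cov=cov).cdf(m - z)

for rho in (0.0, 0.5, 0.8):
    n = 10
    while overall_power(n, 0.4, 0.4, rho) < 0.80:    # standardized effect sizes 0.4
        n += 1
    print(f"rho={rho}: n per arm = {n}")
```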

17.
18.
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds-ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds-ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g. a success rate above 56%). The gain of power achieved may lead in turn to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
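A sketch of the mixed idea with invented numbers: the margin is fixed on the odds-ratio scale at an assumed control success rate, converted to a difference of proportions, and non-inferiority is then tested with a Wald statistic on the difference scale (the paper's exact test construction may differ in detail).

```python
# "Mixed" non-inferiority sketch: margin defined via an odds ratio, test via a rate difference.
import numpy as np
from scipy.stats import norm

def or_to_diff_margin(p_ref: float, psi: float) -> float:
    """Map an odds-ratio margin psi (< 1) at reference rate p_ref to a rate-difference margin."""
    odds = psi * p_ref / (1 - p_ref)
    return p_ref - odds / (1 + odds)

p_ref, psi = 0.75, 0.5                 # assumed control success rate; OR non-inferiority margin
delta = or_to_diff_margin(p_ref, psi)  # margin on the difference scale (0.15 here)

# Wald test of H0: p_new - p_ctl <= -delta against H1: p_new - p_ctl > -delta.
x_new, n_new, x_ctl, n_ctl = 290, 400, 300, 400   # hypothetical trial results
p1, p0 = x_new / n_new, x_ctl / n_ctl
se = np.sqrt(p1 * (1 - p1) / n_new + p0 * (1 - p0) / n_ctl)
z = (p1 - p0 + delta) / se
print(f"delta = {delta:.3f}, z = {z:.2f}, one-sided p = {1 - norm.cdf(z):.4f}")
```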

19.
B. Dong and D. E. Matthews, Biometrics 2012, 68(2), 408–418
In medical studies, it is often of scientific interest to evaluate the treatment effect via the ratio of cumulative hazards, especially when those hazards may be nonproportional. To deal with nonproportionality in the Cox regression model, investigators usually assume that the treatment effect has some functional form. However, to do so may create a model misspecification problem because it is generally difficult to justify the specific parametric form chosen for the treatment effect. In this article, we employ empirical likelihood (EL) to develop a nonparametric estimator of the cumulative hazard ratio with covariate adjustment under two nonproportional hazard models, one that is stratified, as well as a less restrictive framework involving group-specific treatment adjustment. The asymptotic properties of the EL ratio statistic are derived in each situation and the finite-sample properties of EL-based estimators are assessed via simulation studies. Simultaneous confidence bands for all values of the adjusted cumulative hazard ratio in a fixed interval of interest are also developed. The proposed methods are illustrated using two different datasets concerning the survival experience of patients with non-Hodgkin's lymphoma or ovarian cancer.
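The unadjusted building block is the ratio of two Nelson–Aalen estimates; a plain numpy sketch on simulated data follows (the paper's contributions, covariate adjustment and EL-based confidence bands, are not attempted here).

```python
# Nonparametric cumulative hazard ratio via Nelson-Aalen estimates in two groups.
import numpy as np

def nelson_aalen(time: np.ndarray, event: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """H(t) = sum over observed event times <= t of 1 / (number at risk)."""
    order = np.argsort(time)
    t, e = time[order], event[order]
    n_at_risk = len(t) - np.arange(len(t))
    increments = np.where(e == 1, 1.0 / n_at_risk, 0.0)
    return np.array([increments[t <= g].sum() for g in grid])

rng = np.random.default_rng(5)
t1 = rng.exponential(1 / 1.5, 200); t0 = rng.exponential(1.0, 200)  # true hazard ratio 1.5
c1, c0 = rng.exponential(2.0, 200), rng.exponential(2.0, 200)       # independent censoring
grid = np.linspace(0.2, 1.5, 6)
H1 = nelson_aalen(np.minimum(t1, c1), (t1 <= c1).astype(int), grid)
H0 = nelson_aalen(np.minimum(t0, c0), (t0 <= c0).astype(int), grid)
print(H1 / H0)  # roughly constant near 1.5 here, but the estimator needs no such assumption
```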

20.
A nonparametric model for the multivariate one‐way design is discussed which entails continuous as well as discontinuous distributions and, therefore, allows for ordinal data. Nonparametric hypotheses are formulated in terms of the normalized versions of the marginal distribution functions as well as the common distribution functions. The differences between the distribution functions are described by means of the so‐called relative treatment effects, for which unbiased and consistent estimators are derived. The asymptotic distribution of the vector of the effect estimators is derived and, under the marginal hypothesis, a consistent estimator for the asymptotic covariance matrix is given. Nonparametric versions of the Wald‐type statistic, the ANOVA‐type statistic and the Lawley‐Hotelling statistic are considered and compared by means of a simulation study. Finally, these tests are applied to a psychiatric clinical trial.
