Similar Literature
20 similar records found.
1.
The mixed-model factorial analysis of variance has been used in many recent studies in evolutionary quantitative genetics. Two competing formulations of the mixed-model ANOVA are commonly used, the “Scheffe” model and the “SAS” model; these models differ both in their assumptions and in the way in which variance components due to the main effect of random factors are defined. The biological meanings of the two variance component definitions have often gone unappreciated, however. A full understanding of these meanings leads to the conclusion that the mixed-model ANOVA could have been used to much greater effect by many recent authors. The variance component due to the random main effect under the two-way SAS model is the covariance in true means associated with a level of the random factor (e.g., families) across levels of the fixed factor (e.g., environments). Therefore the SAS model has a natural application for estimating the genetic correlation between expressions of a character in different environments and testing whether it differs from zero. The variance component due to the random main effect under the two-way Scheffe model is the variance in marginal means (i.e., means over levels of the fixed factor) among levels of the random factor. Therefore the Scheffe model has a natural application for estimating genetic variances and heritabilities in populations using a defined mixture of environments. Procedures and assumptions necessary for these applications of the models are discussed. While exact significance tests under the SAS model require balanced data and the assumptions that family effects are normally distributed with equal variances in the different environments, the model can be useful even when these conditions are not met (e.g., for providing an unbiased estimate of the across-environment genetic covariance).
Contrary to statements in a recent paper, exact significance tests regarding the variance in marginal means, as well as unbiased estimates, can be readily obtained from unbalanced designs with no restrictive assumptions about the distributions or variance-covariance structure of family effects.
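The interpretation above can be checked numerically. The sketch below (Python with NumPy; the simulation parameters and variable names are illustrative, not from the paper) simulates a balanced family × environment design with two environments and shows that the ANOVA estimator of the random family component, (MS_F − MS_FE)/(en), coincides exactly with the sample covariance of family cell means across the two environments, which is the "across-environment genetic covariance" reading of the SAS model.

```python
import numpy as np

# Balanced family x environment design: f families (random), e = 2
# environments (fixed), n replicate offspring per cell.
rng = np.random.default_rng(0)
f, n = 30, 5
fam1 = rng.normal(0, 1, size=f)                    # family effects in env 1
fam2 = 0.6 * fam1 + rng.normal(0, 0.8, size=f)     # correlated effects in env 2
y = np.empty((f, 2, n))
y[:, 0, :] = fam1[:, None] + rng.normal(0, 1, size=(f, n))
y[:, 1, :] = 2.0 + fam2[:, None] + rng.normal(0, 1, size=(f, n))

# ANOVA mean squares for the random family main effect and the interaction
cell = y.mean(axis=2)                              # family x environment means
fam_mean = cell.mean(axis=1)
env_mean = cell.mean(axis=0)
grand = cell.mean()
ms_fam = 2 * n * ((fam_mean - grand) ** 2).sum() / (f - 1)
ms_int = n * ((cell - fam_mean[:, None] - env_mean[None, :] + grand) ** 2).sum() / (f - 1)

# SAS-model family variance component = across-environment covariance
sigma2_fam = (ms_fam - ms_int) / (2 * n)
cross_env_cov = np.cov(cell[:, 0], cell[:, 1])[0, 1]
```

For two environments the algebraic identity is exact, so the two quantities agree to floating-point precision for any balanced data set.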

2.
Ecologists often need to estimate components of spatial or temporal variation. The most widely used method in ecology uses the observed and expected mean squares in an analysis of variance. A more general approach, which can be used for balanced and unbalanced designs, is based on residual (restricted) maximum likelihood (REML). This method is less well known to ecologists and requires specialist software. If the design is balanced, the two methods are equivalent, except in one important respect: estimates from analysis of variance can be negative, whereas REML estimates cannot. The purpose of this note is to point out a simple modification to the analysis of variance which yields the same estimates as REML for many of the designs commonly used in ecological studies. This modification has been available in the mathematical literature for over 30 years, but appears not to be well known amongst ecologists. It is useful in many cases of balanced analytical designs.
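As a minimal illustration of how an ANOVA variance-component estimate can go negative, the sketch below (my own toy example, not the note's) uses a balanced one-way design whose group means are identical while within-group spread is large; truncating the negative between-group estimate at zero shows the simplest form of the nonnegativity constraint that REML enforces (the full modification the note describes also re-pools sums of squares).

```python
import numpy as np

def anova_components(y):
    """ANOVA variance-component estimates for a balanced one-way design.

    y has shape (groups, replicates); returns (between, within) where
    between = (MSB - MSW) / n, which can be negative."""
    g, n = y.shape
    grand = y.mean()
    msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (g - 1)
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (n - 1))
    return (msb - msw) / n, msw

# Identical group means but large within-group spread -> MSB < MSW
y = np.array([[1.0, 5.0, 3.0],
              [3.0, 1.0, 5.0],
              [5.0, 3.0, 1.0]])
between, within = anova_components(y)
between_trunc = max(between, 0.0)   # REML-style nonnegativity
```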

3.
Randomized trials with continuous outcomes are often analyzed using analysis of covariance (ANCOVA), with adjustment for prognostic baseline covariates. The ANCOVA estimator of the treatment effect is consistent under arbitrary model misspecification. In an article recently published in the journal, Wang et al. proved that the model-based variance estimator for the treatment effect is also consistent under outcome model misspecification, assuming the probability of randomization to each treatment is 1/2. In this reader reaction, we derive explicit expressions which show that when randomization is unequal, the model-based variance estimator can be biased upwards or downwards. In contrast, robust sandwich variance estimators can provide asymptotically valid inferences under arbitrary misspecification, even when randomization probabilities are not equal.
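The contrast between the two variance estimators can be sketched directly. In the toy example below (Python with NumPy; the data-generating model, the 0.7 randomization probability, and the HC0 sandwich form are illustrative choices, not the paper's), an ANCOVA fit with heteroscedastic errors and unequal randomization yields both the model-based standard error and the robust sandwich standard error for the treatment coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # prognostic baseline covariate
a = (rng.random(n) < 0.7).astype(float)     # unequal randomization, P(A=1)=0.7
y = 1.0 + 2.0 * a + x + (1 + a) * rng.normal(size=n)  # heteroscedastic errors

X = np.column_stack([np.ones(n), a, x])     # ANCOVA design matrix
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Model-based variance: assumes homoscedastic, correctly specified errors
sigma2 = resid @ resid / (n - X.shape[1])
var_model = sigma2 * XtX_inv

# Robust sandwich (HC0): valid under arbitrary misspecification
meat = X.T @ (X * resid[:, None] ** 2)
var_sandwich = XtX_inv @ meat @ XtX_inv

se_model = np.sqrt(var_model[1, 1])         # treatment-effect SEs
se_robust = np.sqrt(var_sandwich[1, 1])
```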

4.
The determination of the sample size required by a crossover trial typically depends on the specification of one or more variance components. Uncertainty about the value of these parameters at the design stage means there is often a risk that a trial may be under- or overpowered. For many study designs, this problem has been addressed by adaptive design methodology that allows the required sample size to be re-estimated during a trial. Here, we propose and compare several approaches for this in multitreatment crossover trials. Because regulators favor re-estimation procedures that maintain the blinding of treatment allocations, we develop blinded estimators for the within- and between-person variances, following simple or block randomization. We demonstrate that, provided an equal number of patients are allocated to sequences that are balanced for period, the proposed estimators following block randomization are unbiased. We further provide a formula for the bias of the estimators following simple randomization. The performance of these procedures, along with that of an unblinded approach, is then examined using three motivating examples, including one based on a recently completed four-treatment, four-period crossover trial. Simulation results show that the performance of the proposed blinded procedures is in many cases similar to that of the unblinded approach, making them an attractive alternative.

5.
C. B. Begg and L. A. Kalish. Biometrics 1984, 40(2): 409–420
Many clinical trials have a binary outcome variable. If covariate adjustment is necessary in the analysis, the logistic-regression model is frequently used. Optimal designs for allocating treatments for this model, or for any nonlinear or heteroscedastic model, are generally unbalanced with regard to overall treatment totals and totals within strata. However, all treatment-allocation methods that have been recommended for clinical trials in the literature are designed to balance treatments within strata, either directly or asymptotically. In this paper, the efficiencies of balanced sequential allocation schemes are measured relative to sequential Ds-optimal designs for the logistic model, using as examples completed trials conducted by the Eastern Cooperative Oncology Group and systematic simulations. The results demonstrate that stratified, balanced designs are quite efficient, in general. However, complete randomization is frequently inefficient, and will occasionally result in a trial that is very inefficient.

6.
Optimal multivariate matching before randomization
Although blocking or pairing before randomization is a basic principle of experimental design, the principle is almost invariably applied to at most one or two blocking variables. Here, we discuss the use of optimal multivariate matching prior to randomization to improve covariate balance for many variables at the same time, presenting an algorithm and a case study of its performance. The method is useful when all subjects, or large groups of subjects, are randomized at the same time. Optimal matching divides a single group of 2n subjects into n pairs to minimize covariate differences within pairs (the so-called nonbipartite matching problem); one subject in each pair is then picked at random for treatment, the other being assigned to control. Using the baseline covariate data for 132 patients from an actual, unmatched, randomized experiment, we construct 66 pairs matched on 14 covariates. We then create 10,000 unmatched and 10,000 matched randomized experiments by repeatedly randomizing the 132 patients, and compare the covariate balance with and without matching. By every measure, every one of the 14 covariates was substantially better balanced when randomization was performed within matched pairs. Even after covariance adjustment for chance imbalances in the 14 covariates, matched randomizations provided more accurate estimates than unmatched randomizations, the increase in accuracy being equivalent, on average, to a 7% increase in sample size. In randomization tests of no treatment effect, matched randomizations using the signed rank test had substantially higher power than unmatched randomizations using the rank sum test, even when only 2 of the 14 covariates were relevant to a simulated response. Unmatched randomizations experienced rare disasters which were consistently avoided by matched randomizations.
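The match-then-randomize pipeline can be sketched as follows. Optimal nonbipartite matching requires specialized algorithms; the greedy pairing below is a simplified stand-in (all names and parameters are mine), used only to illustrate the workflow of forming pairs by covariate distance and then randomizing one subject per pair to treatment.

```python
import numpy as np

def greedy_pairs(X):
    """Greedily pair subjects by Euclidean covariate distance.

    The paper uses *optimal* nonbipartite matching; this greedy version
    is a simplified illustration of the same match-then-randomize idea."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    unused = set(range(len(X)))
    pairs = []
    while len(unused) >= 2:
        i = min(unused)
        unused.remove(i)
        j = min(unused, key=lambda k: d[i, k])  # nearest remaining subject
        unused.remove(j)
        pairs.append((i, j))
    return pairs

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                # 20 subjects, 3 covariates
pairs = greedy_pairs(X)

# Randomize within each matched pair: one to treatment, one to control
treat = np.zeros(len(X), dtype=int)
for i, j in pairs:
    if rng.random() < 0.5:
        treat[i] = 1
    else:
        treat[j] = 1
```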

7.
Restricted randomization designs in clinical trials.
R. Simon. Biometrics 1979, 35(2): 503–512
Though therapeutic clinical trials are often categorized as using either "randomization" or "historical controls" as a basis for treatment evaluation, pure random assignment of treatments is rarely employed. Instead various restricted randomization designs are used. The restrictions include the balancing of treatment assignments over time and the stratification of the assignment with regard to covariates that may affect response. Restricted randomization designs for clinical trials differ from those of other experimental areas because patients arrive sequentially and a balanced design cannot be ensured. The major restricted randomization designs and arguments concerning the proper role of stratification are reviewed here. The effect of randomization restrictions on the validity of significance tests is discussed.
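One of the most widely used restricted designs of the kind reviewed is permuted-block randomization, which balances treatment assignments over time for sequentially arriving patients. The sketch below (a generic two-arm illustration; block size and seed are my choices) generates such a sequence.

```python
import random

def permuted_blocks(n_subjects, block_size=4, seed=0):
    """Permuted-block randomization for two arms.

    Within each block of `block_size`, exactly half the assignments are
    treatment (1) and half control (0), in random order, so imbalance
    never exceeds half a block as patients arrive sequentially."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_subjects:
        block = [0] * (block_size // 2) + [1] * (block_size // 2)
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_subjects]

assignments = permuted_blocks(20, block_size=4)
```

In practice one block-randomizes within each stratum of a prognostic covariate, which is exactly the stratification question the review takes up.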

8.
Minimization as an alternative to randomization is gaining popularity for small clinical trials. In response to critics' questions about the proper analysis of such a trial, proponents have argued that a rerandomization approach, akin to a permutation test with conventional randomization, can be used. However, they add that this computationally intensive approach is not necessary because its results are very similar to those of a t-test or test of proportions unless the sample size is very small. We show that minimization applied with unequal allocation causes problems that challenge this conventional wisdom.
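For context, the rerandomization analysis the proponents refer to works like a permutation test. The sketch below (my own generic example) implements the conventional-randomization analogue: a true rerandomization test for a minimized trial would instead regenerate each reference assignment by re-running the minimization algorithm, which is exactly where unequal allocation complicates matters.

```python
import numpy as np

def permutation_test(y, a, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means.

    Under conventional randomization, permuting treatment labels gives the
    reference distribution; for minimization, labels would instead be
    regenerated by the allocation algorithm itself."""
    rng = np.random.default_rng(seed)
    obs = y[a == 1].mean() - y[a == 0].mean()
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(a)
        stat = y[perm == 1].mean() - y[perm == 0].mean()
        if abs(stat) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 30), rng.normal(1.5, 1, 30)])
a = np.array([0] * 30 + [1] * 30)
p = permutation_test(y, a)
```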

9.
Ke Zhu and Hanzhong Liu. Biometrics 2023, 79(3): 2127–2142
Rerandomization discards assignments with covariates unbalanced in the treatment and control groups to improve estimation and inference efficiency. However, the acceptance-rejection sampling method used in rerandomization is computationally inefficient. As a result, it is time-consuming for rerandomization to draw numerous independent assignments, which are necessary for performing Fisher randomization tests and constructing randomization-based confidence intervals. To address this problem, we propose a pair-switching rerandomization (PSRR) method to draw balanced assignments efficiently. We obtain the unbiasedness and variance reduction of the difference-in-means estimator and show that the Fisher randomization tests are valid under PSRR. Moreover, we propose an exact approach to invert Fisher randomization tests to confidence intervals, which is faster than the existing methods. In addition, our method is applicable to both nonsequentially and sequentially randomized experiments. We conduct comprehensive simulation studies to compare the finite-sample performance of the proposed method with that of classical rerandomization. Simulation results indicate that PSRR leads to comparable power of Fisher randomization tests and is 3–23 times faster than classical rerandomization. Finally, we apply the PSRR method to analyze two clinical trial datasets, both of which demonstrate the advantages of our method.
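The core idea of pair switching can be sketched in a few lines. The toy version below (Python with NumPy; the Mahalanobis imbalance criterion, threshold, and acceptance rule are illustrative choices, not the authors' exact algorithm) starts from one balanced assignment and swaps a random treated/control pair whenever the swap lowers covariate imbalance, instead of rejecting whole assignments as acceptance-rejection rerandomization does.

```python
import numpy as np

def imbalance(X, a, Sinv):
    """Mahalanobis imbalance between treated and control covariate means."""
    diff = X[a == 1].mean(axis=0) - X[a == 0].mean(axis=0)
    n1, n0 = (a == 1).sum(), (a == 0).sum()
    return float(diff @ Sinv @ diff) * (n1 * n0 / (n1 + n0))

def pair_switch_rerandomize(X, threshold=1.0, max_switches=500, seed=0):
    """Toy pair-switching scheme: switch treated/control pairs whenever the
    switch lowers imbalance, stopping once imbalance falls below the
    threshold (or after max_switches proposals)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    Sinv = np.linalg.inv(np.cov(X.T))
    a = np.array([1] * (n // 2) + [0] * (n - n // 2))
    rng.shuffle(a)
    m = imbalance(X, a, Sinv)
    for _ in range(max_switches):
        if m <= threshold:
            break
        i = rng.choice(np.flatnonzero(a == 1))
        j = rng.choice(np.flatnonzero(a == 0))
        a[i], a[j] = 0, 1                  # propose switching one pair
        m_new = imbalance(X, a, Sinv)
        if m_new < m:
            m = m_new                      # keep the improving switch
        else:
            a[i], a[j] = 1, 0              # revert an unhelpful switch
    return a, m

X = np.random.default_rng(2).normal(size=(40, 3))
a, m = pair_switch_rerandomize(X)
```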

10.
Cluster randomization trials with relatively few clusters have been widely used in recent years for the evaluation of health-care strategies. On average, randomized treatment assignment achieves balance in both known and unknown confounding factors between treatment groups; in practice, however, investigators can introduce only a small amount of stratification and cannot balance on all the important variables simultaneously. This limitation arises especially when there are many confounding variables in small studies. Such is the case in the INSTINCT trial, designed to investigate the effectiveness of an education program in enhancing tPA use in stroke patients. In this article, we introduce a new randomization design, the balance match weighted (BMW) design, which applies the optimal matching with constraints technique to a prospective randomized design and aims to minimize the mean squared error (MSE) of the treatment effect estimator. A simulation study shows that, under various confounding scenarios, the BMW design can yield substantial reductions in the MSE of the treatment effect estimator compared to a completely randomized or matched-pair design. The BMW design is also compared with a model-based approach adjusting for the estimated propensity score and with the Robins-Mark-Newey E-estimation procedure in terms of efficiency and robustness of the treatment effect estimator. These investigations suggest that the BMW design is more robust and usually, although not always, more efficient than either of these approaches. The design is also seen to be robust against heterogeneous error. We illustrate these methods in proposing a design for the INSTINCT trial.

11.
Minimization is a valuable method for allocating participants between the control and experimental arms of clinical studies. The use of minimization reduces differences that might arise by chance between the study arms in the distribution of patient characteristics such as gender, ethnicity and age. However, unlike randomization, minimization requires real-time assessment of each new participant with respect to the preceding distribution of relevant participant characteristics within the different arms of the study. For multi-site studies, this necessitates centralized computational analysis that is shared between all study locations. Unfortunately, no suitable free or open-source software has been available for this purpose. OxMaR was developed to enable researchers in any location to use minimization for patient allocation and to access the minimization algorithm using any device that can connect to the internet, such as a desktop computer, tablet or mobile phone. The software is complete in itself and requires no special packages or libraries to be installed. It is simple to set up and run over the internet using online facilities which are very low cost or even free to the user. Importantly, it provides real-time information on allocation to the study lead or administrator and generates real-time distributed backups with each allocation. OxMaR can readily be modified and customised and can also be used for standard randomization. It has been extensively tested and has been used successfully in a low-budget multi-centre study. Hitherto, the logistical difficulties involved in minimization have precluded its use in many small studies; this software should allow more widespread use of minimization, leading to studies with better-matched control and experimental arms. OxMaR should be particularly valuable in low-resource settings.
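The real-time assessment that minimization requires can be sketched as follows. This is a generic Pocock-Simon-style two-arm sketch (factor names, weighting, and the assignment probability p are my illustrative choices, not OxMaR's exact algorithm): for each arm, compute the marginal imbalance that would result from placing the new patient there, then choose the less-imbalanced arm with probability p.

```python
import random

def minimization_assign(new_pt, patients, allocations, factors, p=0.8, rng=None):
    """Assign a new patient to arm 0 or 1 by Pocock-Simon-style minimization.

    For each candidate arm, sum over factors the absolute count imbalance,
    among prior patients sharing the new patient's factor level, that a
    hypothetical placement there would produce."""
    rng = rng or random.Random()
    scores = []
    for arm in (0, 1):
        score = 0
        for f in factors:
            counts = [0, 0]
            for pt, a in zip(patients, allocations):
                if pt[f] == new_pt[f]:
                    counts[a] += 1
            counts[arm] += 1                   # hypothetical placement
            score += abs(counts[0] - counts[1])
        scores.append(score)
    if scores[0] == scores[1]:
        return rng.randint(0, 1)               # tie: pure randomization
    best = min((0, 1), key=lambda arm: scores[arm])
    return best if rng.random() < p else 1 - best

rng = random.Random(0)
factors = ("sex", "agegrp")
patients, allocations = [], []
for _ in range(40):                            # simulate sequential arrivals
    pt = {"sex": rng.choice("MF"), "agegrp": rng.choice(["<50", ">=50"])}
    allocations.append(minimization_assign(pt, patients, allocations, factors, rng=rng))
    patients.append(pt)
```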

12.
The unbiased estimation of fluctuating asymmetry (FA) requires independent repeated measurements on both sides. The statistical analysis of such data is currently performed by a two-way mixed-model ANOVA. Although this approach produces unbiased estimates of FA, many studies do not utilize this method. This may be attributed in part to the fact that the complete analysis of FA is very cumbersome and cannot be performed automatically with standard statistical software. Therefore, further elaboration of the statistical tools to analyse FA should focus on the usefulness of the method, in order for the correct statistical approaches to be applied more regularly. In this paper we propose a mixed regression model with restricted maximum likelihood (REML) parameter estimation to model FA. This routine yields exactly the same estimates of FA as the two-way mixed ANOVA. Yet the advantages of this approach are that it allows (a) testing the statistical significance of FA, (b) modelling and testing heterogeneity in both FA and measurement error (ME) among samples, (c) testing for nonzero directional asymmetry and (d) obtaining unbiased estimates of individual FA levels. The switch from a mixed two-way ANOVA to a mixed regression model was made to avoid overparametrization. Two simulation studies are presented. The first shows that a previously proposed method to test the significance of FA is incorrect, unlike our mixed regression approach. In the second simulation study we show that a traditionally applied measure of individual FA, |left − right|, is biased by ME. The proposed mixed regression method, however, produces unbiased estimates of individual FA after modelling heterogeneity in ME. The applicability of this method is illustrated with two analyses.

13.
Growing interest in adaptive evolution in natural populations has spurred efforts to infer genetic components of variance and covariance of quantitative characters. Here, I review difficulties inherent in the usual least-squares methods of estimation. A useful alternative approach is that of maximum likelihood (ML). Its particular advantage over least squares is that estimation and testing procedures are well defined, regardless of the design of the data. A modified version of ML, REML, eliminates the bias of ML estimates of variance components. Expressions for the expected bias and variance of estimates obtained from balanced, fully hierarchical designs are presented for ML and REML. Analyses of data simulated from balanced, hierarchical designs reveal differences in the properties of ML, REML, and F-ratio tests of significance. A second simulation study compares properties of REML estimates obtained from a balanced, fully hierarchical design (within-generation analysis) with those from a sampling design including phenotypic data on parents and multiple progeny. It also illustrates the effects of imposing nonnegativity constraints on the estimates. Finally, it reveals that predictions of the behavior of significance tests based on asymptotic theory are not accurate when sample size is small and that constraining the estimates seriously affects properties of the tests. Because of their great flexibility, likelihood methods can serve as a useful tool for estimation of quantitative-genetic parameters in natural populations. Difficulties involved in hypothesis testing remain to be solved.
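The ML-versus-REML bias contrast has a familiar minimal case: estimating a single variance with an unknown mean, where ML divides the sum of squares by n and REML by n − 1. The simulation below (my own toy illustration, not the review's hierarchical designs) makes the downward bias of ML visible at a small sample size.

```python
import numpy as np

# Estimate one variance (unknown mean) from samples of size n = 5:
# ML divides the centered sum of squares by n, REML by n - 1.
rng = np.random.default_rng(0)
n, true_var, reps = 5, 4.0, 20000
y = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
ss = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
ml_mean = (ss / n).mean()        # biased downward by factor (n-1)/n = 0.8
reml_mean = (ss / (n - 1)).mean()  # unbiased
```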

14.
The problem of evaluating the kinetic parameters associated with an enzyme-catalysed reaction is examined. If the errors associated with velocity measurements are unknown, or are known to deviate from a normal distribution, then methods based on the minimization of the sum of squares are inappropriate. It is shown that by using a proper experimental design the kinetic parameters may be estimated unambiguously while making only minimal assumptions regarding the error structure of the data. The design consists of several replicate measurements of the velocity at as many experimental conditions as there are parameters to be estimated. Formulae are presented for choosing the experimental conditions for selected kinetic equations, and a computer program is described for designing experiments for any kinetic model. The designs formulated are optimal in the sense that they minimize the overall variance of the parameter estimates.

15.
In this paper, we describe a new restricted randomization method called run-reversal equilibrium (RRE), which is a Nash equilibrium of a game where (1) the clinical trial statistician chooses a sequence of medical treatments, and (2) clinical investigators make treatment predictions. RRE randomization counteracts how each investigator could observe treatment histories in order to forecast upcoming treatments. Computation of a run-reversal equilibrium reflects how the treatment history at a particular site is imperfectly correlated with the treatment imbalance for the overall trial. An attractive feature of RRE randomization is that treatment imbalance follows a random walk at each site, while treatment balance is tightly constrained and regularly restored for the overall trial. Less predictable and therefore more scientifically valid experiments can be facilitated by run-reversal equilibrium for multi-site clinical trials.

16.
S. R. Lipsitz. Biometrics 1992, 48(1): 271–281
In many empirical analyses, the response of interest is categorical with an ordinal scale attached. Many investigators prefer to formulate a linear model, assigning scores to each category of the ordinal response and treating it as continuous. When the covariates are categorical, Haber (1985, Computational Statistics and Data Analysis 3, 1-10) has developed a method to obtain maximum likelihood (ML) estimates of the parameters of the linear model using Lagrange multipliers. However, when the covariates are continuous, the only method we found in the literature is ordinary least squares (OLS), performed under the assumption of homogeneous variance. The OLS estimates are unbiased and consistent but, since variance homogeneity is violated, the OLS estimates of variance can be biased and may not be consistent. We discuss a variance estimate (White, 1980, Econometrica 48, 817-838) that is consistent for the true variance of the OLS parameter estimates. The possible bias encountered by using the naive OLS variance estimate is discussed. An estimated generalized least squares (EGLS) estimator is proposed and its efficiency relative to OLS is discussed. Finally, an empirical comparison of OLS, EGLS, and ML estimators is made.

17.
Reviews have repeatedly noted important methodological issues in the conduct and reporting of cluster randomized trials (C-RCTs). These reviews usually focus on whether the intra-cluster correlation was explicitly considered in the design and analysis of the C-RCT. However, another important aspect requiring special attention in C-RCTs is the risk of imbalance of covariates at baseline. Imbalance of important covariates at baseline decreases the statistical power and precision of the results. Imbalance also reduces the face validity and credibility of the trial results. The risk of imbalance is elevated in C-RCTs compared to trials randomizing individuals because of the difficulties in recruiting clusters and the nested nature of correlated patient-level data. A variety of restricted randomization methods have been proposed as ways to minimise the risk of imbalance. However, there is little guidance regarding how best to restrict randomization for any given C-RCT. The advantages and limitations of different allocation techniques, including stratification, matching, minimization, and covariate-constrained randomization, are reviewed as they pertain to C-RCTs, to provide investigators with guidance for choosing the best allocation technique for their trial.
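Covariate-constrained randomization, the last technique listed, is feasible to sketch directly because cluster trials are small enough to enumerate allocations. The toy version below (Python with NumPy; the single cluster-level covariate, the 20% balance cutoff, and the mean-difference score are my illustrative choices) enumerates all allocations of 8 clusters, keeps the best-balanced fraction, and samples one at random from that constrained set.

```python
import numpy as np
from itertools import combinations

def constrained_randomization(x, n_treat, quantile=0.2, seed=0):
    """Covariate-constrained randomization for a small cluster trial.

    Enumerate all allocations of len(x) clusters with n_treat treated,
    score each by the absolute difference in cluster-level covariate
    means, keep the best-balanced `quantile` fraction, sample one."""
    rng = np.random.default_rng(seed)
    allocations, scores = [], []
    for treat in combinations(range(len(x)), n_treat):
        t = np.zeros(len(x), dtype=bool)
        t[list(treat)] = True
        allocations.append(t)
        scores.append(abs(x[t].mean() - x[~t].mean()))
    cutoff = np.quantile(scores, quantile)
    candidates = [a for a, s in zip(allocations, scores) if s <= cutoff]
    return candidates[rng.integers(len(candidates))], cutoff

x = np.array([3.1, 5.2, 4.8, 2.9, 6.0, 4.1, 3.7, 5.5])  # cluster covariate
alloc, cutoff = constrained_randomization(x, n_treat=4)
```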

18.
Deng et al. have recently proposed that estimates of an upper limit to the rate of spontaneous mutations and their average heterozygous effect can be obtained from the mean and variance of a given fitness trait in naturally segregating populations, provided that allele frequencies are maintained at the balance between mutation and selection. Using simulations they show that this estimation method generally has little bias and is very robust to violations of the mutation-selection balance assumption. Here I show that the particular parameters and models used in these simulations generally reduce the amount of bias that can occur with this estimation method. In particular, the assumption of a large mutation rate in the simulations always implies a low bias of estimates. In addition, the specific model of overdominance used to check the violation of the mutation-selection balance assumption is such that there is not a dramatic decline in mean fitness from overdominant mutations, again implying a low bias of estimates. The assumption of lower mutation rates and/or other models of balancing selection may imply considerably larger biases of the estimates, making the reliability of the proposed method highly questionable.

19.
Meta-regression is widely used in systematic reviews to investigate sources of heterogeneity and the association of study-level covariates with treatment effectiveness. Existing meta-regression approaches are successful in adjusting for baseline covariates, which include real study-level covariates (e.g., publication year) that are invariant within a study and aggregated baseline covariates (e.g., mean age) that differ for each participant but are measured before randomization within a study. However, these methods have several limitations in adjusting for post-randomization variables. Although post-randomization variables share a handful of similarities with baseline covariates, they differ in several aspects. First, baseline covariates can be aggregated at the study level presumably because they are assumed to be balanced by the randomization, while post-randomization variables are not balanced across arms within a study and are commonly aggregated at the arm level. Second, post-randomization variables may interact dynamically with the primary outcome. Third, unlike baseline covariates, post-randomization variables are themselves often important outcomes under investigation. In light of these differences, we propose a Bayesian joint meta-regression approach adjusting for post-randomization variables. The proposed method simultaneously estimates the treatment effect on the primary outcome and on the post-randomization variables. It takes into consideration both between- and within-study variability in post-randomization variables. Studies with missing data in either the primary outcome or the post-randomization variables are included in the joint model to improve estimation. Our method is evaluated by simulations and a real meta-analysis of major depression disorder treatments.

20.
Generalized linear models are a widely used method for obtaining parametric estimates of the mean function. They have been extended to allow a more flexible relationship between the mean function and the covariates via generalized additive models. However, the fixed variance structure can in many cases be too restrictive. The extended quasi-likelihood (EQL) framework allows both the mean and the dispersion/variance to be estimated as functions of covariates. As with other maximum likelihood methods, though, EQL estimates are not resistant to outliers: we need methods to obtain robust estimates of both the mean and the dispersion function. In this article, we obtain functional estimates for the mean and the dispersion that are both robust and smooth. The performance of the proposed method is illustrated via a simulation study and some real data examples.

