Similar Documents
20 similar documents retrieved.
1.
The theoretical and practical aspects of how panelists might be viewed are investigated. The situation where panelists are viewed as random selections (random effects) from a population of all possible panelists is compared to viewing panelists as the entire population of interest (fixed effects). The statistical implications of each viewpoint are examined. Sources of variation related to the panel, panelists and the testing environment are discussed. An argument is presented concluding that in most situations panelists should be viewed as random effects. This allows results to be related to a larger population of prospective panelists.
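As a concrete illustration of the two viewpoints, the sketch below fits the same hypothetical panel data twice with Python's statsmodels: once treating panelists as a random effect (a mixed model) and once as fixed effects (dummy-coded in OLS). The data, column names (score, product, panelist) and effect sizes are invented for illustration; this is not the paper's analysis.

```python
# Minimal sketch: panelists as random vs. fixed effects (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
panelists = np.repeat(np.arange(12), 4)               # 12 panelists, 4 products each
products = np.tile(np.arange(4), 12)
panelist_effect = rng.normal(0, 1.0, 12)[panelists]   # panelist-to-panelist variation
score = 5 + 0.5 * products + panelist_effect + rng.normal(0, 0.8, panelists.size)
df = pd.DataFrame({"score": score,
                   "product": products.astype(str),
                   "panelist": panelists.astype(str)})

# Random-effects view: panelists are a sample from a larger population,
# so product conclusions generalize to other prospective panelists.
random_fit = smf.mixedlm("score ~ product", df, groups=df["panelist"]).fit()

# Fixed-effects view: these panelists are the entire population of interest.
fixed_fit = smf.ols("score ~ product + C(panelist)", df).fit()

print(random_fit.summary())
print(fixed_fit.summary())
```

Under the random-effects view the product comparison comes with an estimated panelist variance component, which is what allows the results to be related to a larger population of prospective panelists.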

2.
3.
On Monitoring Outcomes of Medical Providers
An issue of substantial importance is the monitoring and improvement of health care facilities such as hospitals, nursing homes, dialysis units or surgical wards. In addressing this, there is a need for appropriate methods for monitoring health outcomes. On the one hand, statistical tools are needed to aid centers in instituting and evaluating quality improvement programs and, on the other hand, to aid overseers and payers in identifying and addressing sub-standard performance. In the latter case, the aim is to identify situations where there is evidence that the facility's outcomes are outside of normal expectations; such facilities would be flagged and perhaps audited for potential difficulties or censured in some way. Methods in use are based on models where the center effects are taken as fixed or random. We take a systematic approach to assessing the merits of these methods when the patient outcome of interest arises from a linear model. We argue that methods based on fixed effects are more appropriate for the task of identifying extreme outcomes by providing better accuracy when the true facility effect is far from that of the average facility and avoiding confounding issues that arise in the random effects models when the patient risks are correlated with facility effects. Finally, we consider approaches to flagging that are based on the Z-statistics arising from the fixed effects model, but which account in a robust way for the intrinsic variation between facilities as contemplated in the standard random effects model. We provide an illustration in monitoring survival outcomes of dialysis facilities in the US.
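A rough numerical sketch of the flagging idea, not the authors' procedure: compute a fixed-effects estimate per facility, then judge it against a reference spread that robustly folds in ordinary between-facility variation, so that unremarkable facility-to-facility differences are not flagged. The facility estimates, standard errors, the MAD-based robust spread and the two-sided cutoff are all assumptions of this sketch.

```python
# Sketch: flag facilities whose estimated effects are extreme relative to a
# robustly estimated between-facility spread (all inputs are hypothetical).
import numpy as np
from scipy import stats

def flag_facilities(effect_hat, se, alpha=0.05):
    """effect_hat: fixed-effects estimate per facility; se: its standard error."""
    # Robust between-facility spread of the estimated effects
    # (median absolute deviation rescaled for normality).
    center = np.median(effect_hat)
    tau = 1.4826 * np.median(np.abs(effect_hat - center))
    # Reference SD combines within-facility noise with ordinary between-facility variation.
    z_robust = (effect_hat - center) / np.sqrt(se**2 + tau**2)
    crit = stats.norm.isf(alpha / 2)
    return np.abs(z_robust) > crit

rng = np.random.default_rng(1)
true_effects = rng.normal(0.0, 0.3, 100)   # ordinary facility-to-facility variation
true_effects[:3] += 1.5                    # a few genuinely extreme facilities
se = np.full(100, 0.2)
est = true_effects + rng.normal(0.0, se)
print(np.where(flag_facilities(est, se))[0])
```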

4.
Biological data are often intrinsically hierarchical (e.g., species from different genera, plants within different mountain regions), which makes mixed-effects models a common analysis tool in ecology and evolution because they can account for the non-independence. Many questions about their practical application have been settled, but one is still debated: Should we treat a grouping variable with a low number of levels as a random or fixed effect? In such situations, the variance estimate of the random effect can be imprecise, but it is unknown if this affects statistical power and type I error rates of the fixed effects of interest. Here, we analyzed the consequences of treating a grouping variable with 2–8 levels as a fixed or random effect in correctly specified and alternative models (under- or overparametrized models). We calculated type I error rates and statistical power for all model specifications and quantified the influences of study design on these quantities. We found no influence of model choice on the type I error rate and power for the population-level effect (slope) in random intercept-only models. However, when the data-generating process has varying intercepts and slopes, using a random slope and intercept model, and switching to a fixed-effects model in case of a singular fit, avoids overconfidence in the results. Additionally, the number of levels and the differences between them strongly influence power and type I error. We conclude that inferring the correct random-effect structure is of great importance to obtain correct type I error rates. We encourage starting with a mixed-effects model regardless of the number of levels in the grouping variable and switching to a fixed-effects model only in case of a singular fit. With these recommendations, we allow for more informative choices about study design and data analysis and make ecological inference with mixed-effects models more robust for small numbers of levels.
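A minimal sketch of the recommended workflow (start with the mixed model, fall back to fixed effects on a singular fit), using simulated data and statsmodels; treating an essentially zero estimated random-intercept variance as a singular fit, and the 1e-6 threshold, are assumptions of this sketch rather than part of the study.

```python
# Sketch: fit a random-intercept model for a grouping variable with few levels
# and fall back to a fixed-effects model if the fit is (near-)singular.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_groups, n_per = 4, 20                          # few levels in the grouping variable
g = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=g.size)
y = 1.0 + 0.4 * x + rng.normal(0, 0.2, n_groups)[g] + rng.normal(0, 1, g.size)
df = pd.DataFrame({"y": y, "x": x, "g": g.astype(str)})

mixed = smf.mixedlm("y ~ x", df, groups=df["g"]).fit()
var_g = float(mixed.cov_re.iloc[0, 0])           # estimated random-intercept variance

if var_g < 1e-6:                                 # treat ~0 variance as a singular fit
    fit = smf.ols("y ~ x + C(g)", df).fit()      # switch to a fixed-effects model
else:
    fit = mixed
print(fit.params["x"])                           # population-level slope of interest
```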

5.
Preference testing is commonly used in consumer sensory evaluation. Traditionally, it is done without replication, effectively leading to a single 0/1 (binary) measurement on each panelist. However, to understand the nature of the preference, replicated preference tests are a better approach, resulting in binomial counts of preferences on each panelist. Variability among panelists then leads to overdispersion of the counts when the binomial model is used and to an inflated Type I error rate for statistical tests of preference. Overdispersion can be adjusted by Pearson correction or by other models such as the correlated binomial or beta-binomial. Several methods are suggested or reviewed in this study for analyzing replicated preference tests, and their Type I error rates and power are compared. Simulation studies show that all methods have reasonable Type I error rates and similar power. Among them, the binomial model with Pearson adjustment is probably the safest way to analyze replicated preference tests, while a normal model that does not assume the binomial distribution is the easiest.
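The Pearson-adjusted binomial analysis mentioned above can be sketched as follows: estimate a dispersion factor from the panelist-level counts and inflate the variance of the overall test of no preference accordingly. The data layout (one count per panelist out of m replicates), the variable names, and the choice not to deflate the variance when the dispersion estimate falls below 1 are assumptions of this sketch.

```python
# Sketch: replicated preference test with a Pearson (quasi-binomial) adjustment
# for panelist-to-panelist overdispersion.
import numpy as np
from scipy import stats

def replicated_preference_test(k, m):
    """k: preferences for product A per panelist; m: replicates per panelist."""
    k = np.asarray(k, dtype=float)
    n = k.size
    p_hat = k.sum() / (n * m)                    # overall preference proportion
    # Pearson dispersion factor: ~1 under pure binomial sampling, >1 if panelists differ.
    disp = np.sum((k - m * p_hat) ** 2 / (m * p_hat * (1 - p_hat))) / (n - 1)
    # Test H0: p = 0.5 with the binomial variance inflated by the dispersion factor
    # (never deflated below 1 -- a conservative choice of this sketch).
    se = np.sqrt(max(disp, 1.0) * 0.5 * 0.5 / (n * m))
    z = (p_hat - 0.5) / se
    return p_hat, disp, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(3)
panelist_p = rng.beta(4, 3, size=60)             # heterogeneous true preferences
counts = rng.binomial(8, panelist_p)             # 8 replicates per panelist
print(replicated_preference_test(counts, m=8))
```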

6.
CHEESE HARDNESS ASSESSMENT BY EXPERTS AND UNTRAINED JUDGES
Although expert assessment of food characteristics is recognized as a key step in product development, the use of consumer-based measurements is sometimes recommended as an equivalent to expert assessment. Cognitive psychology supports a role for perceptual learning in some instances, although it may not be relevant in others. To address this point, a performance analysis of experts and untrained panelists in cheese texture evaluation was carried out. Neither the untrained panelists nor the experts were familiar with either the scales or the kind of cheese. The same Cheddar cheese was given to 44 untrained subjects in three trials to assess hardness. The results showed that their judgments had an average random error variance of 29%, and interrater reliability was low. The same experiment gave a random error variance of 2% for three highly skilled judges (experts). The difference in variance was linked to training. Untrained panelists also showed an adaptation error. Nevertheless, there was no significant difference between the average ratings of the two groups.

7.
In applied entomological experiments, when the response is a count-type variable, certain transformation remedies such as the square root, logarithm (log), or rank transformation are often used to normalize data before analysis of variance. In this study, we examine the usefulness of these transformations by reanalyzing field-collected data from a split-plot experiment and by performing a more comprehensive simulation study of factorial and split-plot experiments. For the field-collected data, whether interactions were significant depended on the type of transformation. For the simulation study, Poisson-distributed errors were used for a 2 by 2 factorial arrangement, in both randomized complete block and split-plot settings. Various sizes of main effects were induced, and type I error rates and powers of the tests for interaction were examined for the raw response values and the log-, square root-, and rank-transformed responses. The aligned rank transformation was also investigated because it has been shown to perform well in testing interactions in factorial arrangements. We found that for testing interactions, the untransformed response and the aligned rank response performed best (preserved nominal type I error rates), whereas the other transformations had inflated error rates when main effects were present. We did not evaluate tests for main effects or simple effects; the transformations may still be necessary when performing those tests.
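Because the aligned rank transformation may be unfamiliar, the sketch below shows the usual recipe for testing an interaction with it: remove the estimated grand mean and main effects, keep the interaction contribution plus residual, rank, then run an ordinary ANOVA and read only the interaction line. The 2 by 2 layout, simulated Poisson counts and variable names are illustrative, not the study's data.

```python
# Sketch: aligned rank test for the A x B interaction in a 2 x 2 factorial
# with Poisson-distributed counts (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
reps = 10
A, B = np.meshgrid([0, 1], [0, 1])
a = np.repeat(A.ravel(), reps)
b = np.repeat(B.ravel(), reps)
y = rng.poisson(np.exp(1.0 + 0.6 * a + 0.4 * b)).astype(float)   # multiplicative main effects
df = pd.DataFrame({"y": y, "A": a, "B": b})

# Align for the interaction: strip the estimated grand mean and main effects,
# keep the interaction contribution plus residual, then rank.
cell = df.groupby(["A", "B"])["y"].transform("mean")
ma = df.groupby("A")["y"].transform("mean")
mb = df.groupby("B")["y"].transform("mean")
grand = df["y"].mean()
df["aligned"] = (df["y"] - cell) + (cell - ma - mb + grand)
df["rank_aligned"] = df["aligned"].rank()

# Ordinary ANOVA on the ranked, aligned response; only the A:B line is interpreted.
fit = smf.ols("rank_aligned ~ C(A) * C(B)", df).fit()
print(anova_lm(fit).loc["C(A):C(B)"])
```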

8.
J. D. Arendt. Heredity, 2015, 115(4): 306-311
Phenotypic plasticity is thought to have a role in driving population establishment, local adaptation and speciation. However, dispersal plasticity has been underappreciated in this literature. Plasticity in the decision to disperse is taxonomically widespread and I provide examples for insects, molluscs, polychaetes, vertebrates and flowering plants. Theoretical work is limited but indicates an interaction between dispersal distance and plasticity in the decision to disperse. When dispersal is confined to adjacent patches, dispersal plasticity may enhance local adaptation over unconditional (non-plastic) dispersal. However, when dispersal distances are greater, plasticity in dispersal decisions strongly reduces the potential for local adaptation and population divergence. Upon dispersal, settlement may be random, biased but genetically determined, or biased but plastically determined. Theory shows that biased settlement of either type increases population divergence over random settlement. One model suggests that plasticity further enhances chances of speciation. However, there are many strategies for deciding on where to settle such as a best-of-N strategy, sequential sampling with a threshold for acceptance or matching with natal habitat. To date, these strategies do not seem to have been compared within a single model. Although we are just beginning to explore evolutionary effects of dispersal plasticity, it clearly has the potential to enhance as well as inhibit population divergence. Additional work should pay particular attention to dispersal distance and the strategy used to decide on where to settle.

9.
The objectives of this study were to determine the distribution of the sensory panelists' ability to detect differences and to improve the triangle test by minimizing unnecessary guessing. The triangle test was modified to include the use of economic incentives through which panelists voluntarily revealed their ability to detect differences. Panelists were asked to estimate their ability to detect differences and the probability of identifying the odd sample in a triangle test. They were then organized into three ability groups according to their responses. Double triangle tests, followed by triangle tests with economic incentives, were used to evaluate a cereal product and a beverage. The ability to detect differences was modeled as a probability, and the distribution of panelists was estimated. The economic incentives test was more effective when used with the beverage in which differences were less difficult to detect. We found that the economic incentive test discouraged the panelists from guessing unnecessarily, thus increasing the motivation of the panelists to detect differences, and allowing researchers to determine the distribution of discrimination ability.
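For reference, the unmodified triangle test that the study builds on is a one-sided binomial test against the guessing probability 1/3, with the proportion of true discriminators usually backed out via Abbott's formula. The sketch below shows only that baseline calculation, not the economic-incentive modification, and the counts are invented.

```python
# Sketch: baseline triangle-test analysis (guessing probability 1/3).
from scipy.stats import binomtest

n_panelists = 60
n_correct = 30                                   # hypothetical number of correct picks

# One-sided test of H0: P(correct) = 1/3 against discrimination (> 1/3).
result = binomtest(n_correct, n_panelists, p=1/3, alternative="greater")
print("p-value:", result.pvalue)

# Abbott's formula: p_correct = p_d + (1 - p_d) / 3  =>  p_d = (3 * p_correct - 1) / 2.
p_correct = n_correct / n_panelists
p_discriminators = max(0.0, (3 * p_correct - 1) / 2)
print("estimated discriminator proportion:", p_discriminators)
```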

10.
Nummi T, Pan J, Siren T, Liu K. Biometrics, 2011, 67(3): 871-875
In most research on smoothing splines, the focus has been on estimation, while inference, especially hypothesis testing, has received less attention. By defining design matrices for fixed and random effects and the structure of the covariance matrices of random errors in an appropriate way, the cubic smoothing spline admits a mixed model formulation, which places this nonparametric smoother firmly in a parametric setting. Thus nonlinear curves can be included with random effects and random coefficients. The smoothing parameter is the ratio of the random-coefficient and error variances, and tests for linear regression reduce to tests for zero random-coefficient variance. We propose an exact F-test for the situation and investigate its performance on a real pine stem data set and in simulation experiments. Under certain conditions, the suggested methods can also be applied when the data are dependent.
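A schematic of the mixed-model formulation described above, in generic notation that is assumed here rather than taken from the paper: the fitted curve splits into a fixed part and random spline coefficients, and the straight-line null corresponds to a zero random-coefficient variance.

```latex
% Generic mixed-model form of a cubic smoothing spline (notation assumed, not the authors').
\[
  y = X\beta + Zu + \varepsilon, \qquad
  u \sim N\!\left(0,\ \sigma_u^{2} G\right), \qquad
  \varepsilon \sim N\!\left(0,\ \sigma^{2} R\right),
\]
\[
  \text{smoothing parameter} \;=\; \frac{\sigma_u^{2}}{\sigma^{2}}
  \quad\text{(as stated above; some references use the reciprocal)},
  \qquad
  H_0\colon\ \sigma_u^{2} = 0 \;\Longleftrightarrow\; \text{linear regression}.
\]
```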

11.
Most methods for testing association in the presence of linkage, using family-based studies, have been developed for continuous traits. FBAT (family-based association tests) is one of the few methods appropriate for discrete outcomes. In this article, we describe a new test of association in the presence of linkage for binary traits. We use a gamma random effects model in which association and linkage are modelled as fixed effects and random effects, respectively. We compared the gamma random effects model with FBAT and a generalized estimating equation-based alternative, using two regions in the Genetic Analysis Workshop 14 simulated data. One of these regions contained haplotypes associated with disease, and the other did not.

12.
The refuge strategy is designed to delay evolution of pest resistance to transgenic crops producing Bacillus thuringiensis Berliner (Bt) toxins. Movement of insects between Bt crops and refuges of non-Bt crops is essential for the refuge strategy because it increases chances that resistant adults mate with susceptible adults from refuges. Conclusions about optimal levels of movement for delaying resistance are not consistent among previous modeling studies. To clarify the effects of movement on resistance evolution, we analyzed simulations of a spatially explicit model based partly on the interaction of pink bollworm, Pectinophora gossypiella (Saunders), with Bt cotton. We examined resistance evolution as a function of insect movement under 12 sets of assumptions about the relative abundance of Bt cotton (50 and 75%), temporal distribution of Bt cotton and refuge fields (fixed, partial rotation, and full rotation), and spatial distribution of fields (random and uniform). The results show that interactions among the relative abundance and distribution of refuges and Bt cotton fields can alter the effects of movement on resistance evolution. The results also suggest that differences in conclusions among previous studies can be explained by differences in assumptions about the relative abundance and distribution of refuges and Bt crop fields. With fixed field locations and all Bt cotton fields adjacent to at least one refuge, resistance evolved slowest with low movement. However, low movement and fixed field locations favored rapid resistance evolution when some Bt crop fields were isolated from refuges. When refuges and Bt cotton fields were rotated to the opposite crop type each year, resistance evolved fastest with low movement. Nonrecessive inheritance of resistance caused rapid resistance evolution regardless of movement rate. Confirming previous reports, results described here show that resistance can be delayed effectively by fixing field locations and distributing refuges uniformly to ensure that Bt crop fields are not isolated from refuges. However, rotating fields provided better insect control and reduced the need for insecticide sprays.

13.
Recent statistical methodology for precision medicine has focused either on identifying subgroups with enhanced treatment effects or on estimating optimal treatment decision rules so that treatment is allocated in a way that maximizes, on average, predefined patient outcomes. Less attention has been given to subgroup testing, which involves evaluating whether at least a subgroup of the population benefits from an investigational treatment compared with some control or standard of care. In this work, we propose a general framework for testing for the existence of a subgroup with enhanced treatment effects based on the difference of the estimated value functions under an estimated optimal treatment regime and a fixed regime that assigns everyone to the same treatment. Our proposed test does not require specification of the parametric form of the subgroup and allows heterogeneous treatment effects within the subgroup. The test applies when the outcome of interest is either a time-to-event or an (uncensored) scalar, and it is valid at the exceptional law. To demonstrate the empirical performance of the proposed test, we study the type I error and power of the test statistics in simulations and also apply our test to data from a Phase III trial in patients with hematological malignancies.
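The quantity being contrasted above is the value of a treatment regime. A common, simple estimator of a regime's value from randomized data is inverse probability weighting (IPW); the sketch below shows only that building block and the resulting value difference, not the proposed test statistic or its null distribution, and the simulated data, the plug-in regime and all names are invented.

```python
# Sketch: inverse-probability-weighted (IPW) estimate of the value of a regime,
# and the contrast between an "optimal" regime and a fixed regime (toy data).
import numpy as np

def ipw_value(y, a, propensity, regime):
    """Mean outcome if everyone followed `regime` (array of assigned treatments)."""
    follows = (a == regime).astype(float)
    w = follows / np.where(a == 1, propensity, 1 - propensity)
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, n)                              # randomized treatment
y = 0.5 + a * np.maximum(x, 0) + rng.normal(0, 1, n)     # benefit only when x > 0

d_opt = (x > 0).astype(int)          # plug-in rule standing in for an estimated optimal regime
d_fixed = np.zeros(n, dtype=int)     # fixed regime: everyone on control

contrast = ipw_value(y, a, 0.5, d_opt) - ipw_value(y, a, 0.5, d_fixed)
print("estimated value difference:", contrast)
```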

14.
The problem of testing the equality of the means of two normal populations is considered when independent random samples of random sizes are given, with the total number of observations from the two populations fixed. An application in forestry is discussed.
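In practice, once the random split of the fixed total N into n1 and n2 has been observed, the comparison can proceed conditionally on those sizes with a standard two-sample t-test; the sketch below illustrates that conditional analysis (not the paper's exact procedure), with an invented random-split mechanism and invented forestry-flavoured numbers.

```python
# Sketch: equality of two normal means when n1 + n2 = N is fixed but the split is random.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
N = 40
n1 = rng.binomial(N, 0.5)                 # random split of the fixed total (assumed mechanism)
n2 = N - n1
sample1 = rng.normal(10.0, 2.0, n1)       # e.g., tree heights from stand 1
sample2 = rng.normal(10.5, 2.0, n2)       # e.g., tree heights from stand 2

# Conditional on the realized sizes, use the usual pooled two-sample t-test.
t_stat, p_value = stats.ttest_ind(sample1, sample2)
print(n1, n2, t_stat, p_value)
```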

15.
The present study assesses the effects of genotyping errors on the type I error rate of a particular transmission/disequilibrium test (TDT(std)), which assumes that data are errorless, and introduces a new transmission/disequilibrium test (TDT(ae)) that allows for random genotyping errors. We evaluate the type I error rate and power of the TDT(ae) under a variety of simulations and perform a power comparison between TDT(std) and TDT(ae) for errorless data. Both the TDT(std) and the TDT(ae) statistics are computed as two times a log-likelihood difference, and both are asymptotically distributed as chi-square with 1 degree of freedom. Genotype data for trios are simulated under a null hypothesis and under an alternative (power) hypothesis. For each simulation, errors are introduced randomly via a computer algorithm with different probabilities (called "allelic error rates"). The TDT(std) statistic is computed on all trios that show Mendelian consistency, whereas the TDT(ae) statistic is computed on all trios. The results indicate that TDT(std) shows a significant increase in type I error when applied to data in which inconsistent trios are removed. This type I error rate increases both with an increase in sample size and with an increase in the allelic error rates. TDT(ae) always maintains correct type I error rates for the simulations considered. Factors affecting the power of the TDT(ae) are discussed. Finally, the power of TDT(std) is at least that of TDT(ae) for simulations with errorless data. Because data are rarely error-free, we recommend that researchers use methods, such as the TDT(ae), that allow for errors in genotype data.
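Both statistics above are of likelihood-ratio form (twice a log-likelihood difference referred to a chi-square distribution with 1 df); for reference, the generic conversion of such a statistic to a p-value is sketched below with placeholder log-likelihood values that are not from either TDT.

```python
# Sketch: generic likelihood-ratio test with 1 degree of freedom,
# the form shared by both TDT statistics above (placeholder log-likelihoods).
from scipy.stats import chi2

loglik_alt = -1230.4      # hypothetical maximized log-likelihood, alternative model
loglik_null = -1233.1     # hypothetical maximized log-likelihood, null model

lr_stat = 2 * (loglik_alt - loglik_null)
p_value = chi2.sf(lr_stat, df=1)
print(lr_stat, p_value)
```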

16.
The mixed-model factorial analysis of variance has been used in many recent studies in evolutionary quantitative genetics. Two competing formulations of the mixed-model ANOVA are commonly used, the "Scheffé" model and the "SAS" model; these models differ both in their assumptions and in the way in which variance components due to the main effect of random factors are defined. The biological meanings of the two variance component definitions have often been unappreciated, however. A full understanding of these meanings leads to the conclusion that the mixed-model ANOVA could have been used to much greater effect by many recent authors. The variance component due to the random main effect under the two-way SAS model is the covariance in true means associated with a level of the random factor (e.g., families) across levels of the fixed factor (e.g., environments). Therefore the SAS model has a natural application for estimating the genetic correlation between a character expressed in different environments and testing whether it differs from zero. The variance component due to the random main effect under the two-way Scheffé model is the variance in marginal means (i.e., means over levels of the fixed factor) among levels of the random factor. Therefore the Scheffé model has a natural application for estimating genetic variances and heritabilities in populations using a defined mixture of environments. Procedures and assumptions necessary for these applications of the models are discussed. While exact significance tests under the SAS model require balanced data and the assumptions that family effects are normally distributed with equal variances in the different environments, the model can be useful even when these conditions are not met (e.g., for providing an unbiased estimate of the across-environment genetic covariance). Contrary to statements in a recent paper, exact significance tests regarding the variance in marginal means as well as unbiased estimates can be readily obtained from unbalanced designs with no restrictive assumptions about the distributions or variance-covariance structure of family effects.
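The two variance-component definitions can be written compactly. The notation below (random level i, e.g. family; fixed level e, e.g. environment; true cell mean mu_ie) is assumed for illustration and is not taken from the paper.

```latex
% Assumed notation: \mu_{ie} = true mean of random level (family) i in fixed level (environment) e.
\[
  \sigma^{2}_{\text{random (SAS)}}
    = \operatorname{Cov}\!\left(\mu_{ie},\, \mu_{ie'}\right),\quad e \neq e'
  \qquad \text{(across-environment covariance of family means)},
\]
\[
  \sigma^{2}_{\text{random (Scheff\'{e})}}
    = \operatorname{Var}_{i}\!\left(\bar{\mu}_{i\cdot}\right),
  \qquad
  \bar{\mu}_{i\cdot} = \frac{1}{E}\sum_{e=1}^{E}\mu_{ie}
  \qquad \text{(variance of marginal family means)}.
\]
```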

17.
We examined Type I error rates of Felsenstein's (1985; Am. Nat. 125:1-15) comparative method of phylogenetically independent contrasts when branch lengths are in error and the model of evolution is not Brownian motion. We used seven evolutionary models, six of which depart strongly from Brownian motion, to simulate the evolution of two continuously valued characters along two different phylogenies (15 and 49 species). First, we examined the performance of independent contrasts when branch lengths are distorted systematically, for example, by taking the square root of each branch segment. These distortions often caused inflated Type I error rates, but performance was almost always restored when branch length transformations were used. Next, we investigated effects of random errors in branch lengths. After the data were simulated, we added errors to the branch lengths and then used the altered phylogenies to estimate character correlations. Errors in the branches could be of two types: fixed, where branch lengths are either shortened or lengthened by a fixed fraction; or variable, where the error is a normal variate with mean zero and the variance is scaled to the length of the branch (so that expected error relative to branch length is constant for the whole tree). Thus, the error added is unrelated to the microevolutionary model. Without branch length checks and transformations, independent contrasts tended to yield extremely inflated and highly variable Type I error rates. Type I error rates were reduced, however, when branch lengths were checked and transformed as proposed by Garland et al. (1992; Syst. Biol. 41:18-32), and almost never exceeded twice the nominal P-value at alpha = 0.05. Our results also indicate that, if branch length transformations are applied, then the appropriate degrees of freedom for testing the significance of a correlation coefficient should, in general, be reduced to account for estimation of the best branch length transformation. These results extend those reported in Díaz-Uriarte and Garland (1996; Syst. Biol. 45:27-47), and show that, even with errors in branch lengths and evolutionary models different from Brownian motion, independent contrasts are a robust method for testing hypotheses of correlated evolution.
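For readers unfamiliar with the method whose robustness is being examined, a compact implementation of Felsenstein's (1985) contrasts on a small hard-coded tree is sketched below. The tree topology, branch lengths and trait values are invented, and none of the branch-length checks or transformations discussed above are included.

```python
# Sketch: Felsenstein's phylogenetically independent contrasts on a toy tree.
# Each internal node combines its two children: the node gets a weighted-average
# trait value and its own branch is lengthened by v1*v2/(v1+v2).

def contrasts(node):
    """node = trait value (float, a tip) or (left, bl_left, right, bl_right)."""
    if not isinstance(node, tuple):
        return node, 0.0, []                          # tip: value, extra branch length, contrasts
    left, bl_l, right, bl_r = node
    x1, extra1, c1 = contrasts(left)
    x2, extra2, c2 = contrasts(right)
    v1, v2 = bl_l + extra1, bl_r + extra2             # branch lengths adjusted from below
    contrast = (x1 - x2) / (v1 + v2) ** 0.5           # standardized contrast
    x_node = (x1 / v1 + x2 / v2) / (1 / v1 + 1 / v2)  # weighted ancestral value
    extra = v1 * v2 / (v1 + v2)                       # extra length added to the node's branch
    return x_node, extra, c1 + c2 + [contrast]

# Toy 4-tip tree: ((A:1, B:1):0.5, (C:2, D:1):1); trait values 1.2, 0.8, 2.3, 1.9.
tree = ((1.2, 1.0, 0.8, 1.0), 0.5, (2.3, 2.0, 1.9, 1.0), 1.0)
root_value, _, ic = contrasts(tree)
print(root_value, ic)
```

For two characters, the same recursion is run on each, and correlated evolution is tested by regressing one set of standardized contrasts on the other through the origin.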

18.
LMC (local mate competition) was first introduced by W. D. Hamilton to explain extraordinary female-biased sex ratios observed in a variety of insects and mites. In the original model, the population is subdivided into an infinite number of colonies founded by a fixed number of inseminated females producing the same very large number of offspring. The male offspring compete within the colonies to inseminate the female offspring and then these disperse at random to found new colonies. An unbeatable sex ratio strategy is found to be female-biased. In this paper, the effects of having colonies of random size and foundresses producing a random finite number of offspring are considered. The exact evolutionarily stable strategy (ESS) sex ratio is deduced and comparisons with previous approximate or numerical results are made. As the mean or the variance of brood size increases, the ESS sex ratio becomes more female-biased. An increase in the variance of colony size increases the ESS proportion of males when the mean brood size and colony size are both small, but decreases this proportion when the mean brood size or the mean colony size is large.
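For context, the classic fixed-parameter result that this paper generalizes is Hamilton's unbeatable sex ratio for n foundresses per colony with very large broods; writing it out makes the direction of the bias explicit. The expression below is the standard textbook formula, not the paper's exact ESS under random colony and brood sizes.

```latex
% Hamilton's (1967) local mate competition result for n foundresses per colony.
\[
  r^{*} \;=\; \frac{n-1}{2n}
  \qquad \text{(unbeatable proportion of sons)},
\]
so a single foundress ($n = 1$) should produce almost no sons (just enough to mate her
daughters), and $r^{*} \to \tfrac{1}{2}$ as $n \to \infty$, recovering the usual Fisherian
sex ratio.
```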

19.
Models and estimation procedures are given for linear regression models in discrete distributions when the regression contains both fixed and random effects. The methods are developed for discrete variables with, typically, a small number of possible outcomes, as occurs in ordinal regression. The method is applied to a problem arising in the comparison of microbiological test methods.

20.
The augmentation of categorical outcomes with underlying Gaussian variables in bivariate generalized mixed effects models has facilitated the joint modeling of continuous and binary response variables. These models typically assume that random effects and residual effects (co)variances are homogeneous across all clusters and subjects, respectively. Motivated by conflicting evidence about the association between performance outcomes in dairy production systems, we consider the situation where these (co)variance parameters may themselves be functions of systematic and/or random effects. We present a hierarchical Bayesian extension of bivariate generalized linear models whereby functions of the (co)variance matrices are specified as linear combinations of fixed and random effects following a square-root-free Cholesky reparameterization that ensures necessary positive semidefinite constraints. We test the proposed model by simulation and apply it to the analysis of a dairy cattle data set in which the random herd-level and residual cow-level effects (co)variances between a continuous production trait and binary reproduction trait are modeled as functions of fixed management effects and random cluster effects.
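The square-root-free Cholesky reparameterization mentioned above can be written schematically as follows; the 2 x 2 case (one continuous and one binary trait) and the symbols are illustrative, not the authors' notation.

```latex
% Square-root-free (LDL') Cholesky parameterization of a 2x2 (co)variance matrix (assumed notation).
\[
  \Sigma \;=\; L\, D\, L^{\top},
  \qquad
  L = \begin{pmatrix} 1 & 0 \\ \phi & 1 \end{pmatrix},
  \qquad
  D = \operatorname{diag}\!\left(d_{1},\, d_{2}\right),\ d_{k} > 0,
\]
\[
  \log d_{k} \;=\; \mathbf{x}^{\top}\boldsymbol{\beta}_{k} + \mathbf{z}^{\top}\mathbf{u}_{k},
  \qquad
  \phi \;=\; \mathbf{x}^{\top}\boldsymbol{\beta}_{\phi} + \mathbf{z}^{\top}\mathbf{u}_{\phi},
\]
so any real values of the fixed- and random-effect regression terms yield a valid
(positive semidefinite) $\Sigma$.
```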
