Similar articles
20 similar articles found (search time: 15 ms)
1.
A noniterative procedure based on the minimum modified χ² approach is employed to test the model of homogeneity of one-dimensional margins in square tables. Such tables may arise from matched pairs with k outcomes. The special case of a double dichotomy (i.e. matched pairs with two outcomes) reduces to the McNemar test statistic. The case of multiple matched controls is also dealt with. Cochran's Q test is used to test marginal homogeneity when comparing m distinct matched samples, in addition to testing for trends in proportions. Reference is made to the equivalence between these tests and the hierarchical log-linear model approach to testing marginal homogeneity of square tables.
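As a quick illustration of the special case mentioned above (a sketch, not code from the cited paper), the McNemar statistic for a paired 2×2 table depends only on the two discordant counts b and c:

```python
from scipy.stats import chi2

def mcnemar_statistic(b, c):
    """Uncorrected McNemar chi-square from the discordant counts
    b and c of a paired 2x2 table, with its 1-df p-value."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

# hypothetical discordant counts: 10 pairs changed one way, 4 the other
stat, p = mcnemar_statistic(10, 4)
```

Under marginal homogeneity b and c have equal expectation, so a large imbalance between them yields a large statistic.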

2.
The general Markov model (GMM) of nucleotide substitution does not assume the evolutionary process to be stationary, reversible, or homogeneous. The GMM can be simplified by assuming the evolutionary process to be stationary. A stationary GMM is appropriate for analyses of phylogenetic data sets that are compositionally homogeneous; a data set is considered compositionally homogeneous if a statistical test does not detect significant differences in the marginal distributions of the sequences. Although the general time-reversible (GTR) model assumes stationarity, it also assumes reversibility and homogeneity. We propose two new stationary and nonhomogeneous models: one constrains the GMM to be reversible, whereas the other does not. The two models, coupled with the GTR model, comprise a set of nested models that can be used to test the assumptions of reversibility and homogeneity for stationary processes. The two models are extended to incorporate invariable sites and used to analyze a seven-taxon hominoid data set that displays compositional homogeneity. We show that within the class of stationary models, a nonhomogeneous model fits the hominoid data better than the GTR model. We note that if one considers a wider set of models that are not constrained to be stationary, an even better fit can be obtained for the hominoid data. However, methods for reducing model complexity from an extremely large set of nonstationary models are yet to be developed.
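Comparisons within a set of nested models like the one described above are typically carried out with likelihood-ratio tests. A generic sketch (the log-likelihood values and parameter count below are invented for illustration):

```python
from scipy.stats import chi2

def lr_test(loglik_restricted, loglik_full, extra_params):
    """Likelihood-ratio test for nested models: twice the gain in
    log-likelihood is referred to a chi-square distribution with
    df equal to the number of extra free parameters."""
    stat = 2.0 * (loglik_full - loglik_restricted)
    return stat, chi2.sf(stat, df=extra_params)

# hypothetical fits: a restricted model vs. a fuller nested model
# with 3 additional free parameters
stat, p = lr_test(-10234.7, -10228.1, extra_params=3)
```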

3.
The transmission disequilibrium test is in essence a test of marginal homogeneity, and it must jointly examine the transmissions from both parents to the affected offspring within a family. Under the null hypothesis, however, the father's and the mother's transmissions within the same family are not necessarily mutually independent. This paper investigates how the independence condition on parental transmissions affects the test statistic. The results show that the marginal homogeneity test remains valid even when the parental transmissions are not independent.

4.
E. J. Feuer, L. G. Kessler. Biometrics, 1989, 45(2): 629-636
McNemar's (1947, Psychometrika 12, 153-157) test of marginal homogeneity is generalized to a two-sample situation in which the hypothesis of interest is that the marginal changes in each of two independently sampled tables are equal. This situation applies especially to two cohorts (a control and an intervention cohort), each measured on a binary outcome variable at baseline and after the intervention. Some assumptions that are often realistic in this situation simplify the calculation of sample size. The sample-size calculation is demonstrated for a study designed to increase utilization of breast cancer screening.
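The two-sample comparison can be sketched with a normal approximation: the marginal change in each cohort's paired table is (b - c)/n, and the two changes are compared with a z-test. This is a generic large-sample sketch, not the paper's exact procedure, and the cell counts below are invented:

```python
import math
from scipy.stats import norm

def marginal_change(b, c, n):
    """Estimated marginal change (b - c)/n in a paired 2x2 table of
    n pairs, with its usual large-sample variance estimate."""
    d = (b - c) / n
    var = (b + c - (b - c) ** 2 / n) / n ** 2
    return d, var

def two_sample_change_z(b1, c1, n1, b2, c2, n2):
    """z-test that the marginal changes in two independently sampled
    paired tables are equal (normal approximation)."""
    d1, v1 = marginal_change(b1, c1, n1)
    d2, v2 = marginal_change(b2, c2, n2)
    z = (d1 - d2) / math.sqrt(v1 + v2)
    return z, 2 * norm.sf(abs(z))

# invented counts: intervention cohort vs. control cohort, 200 pairs each
z, p = two_sample_change_z(30, 10, 200, 15, 12, 200)
```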

5.
A new application of Lehmacher's (1980) marginal homogeneity sign tests is given by analysing bivariate response curves (or response surfaces) in two unpaired samples of hypertensive versus normotensive patients. Rationale and computations are illustrated with empirical data from sympathomedullary stress research.

6.
Examples show that the median test is not robust against deviations from homogeneity of slopes. It is also shown that the test is not conservative when this assumption does not hold.
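For reference, the plain two-sample median test dichotomises both samples at the pooled median and tests the resulting 2×2 table. A minimal sketch with invented data:

```python
import numpy as np
from scipy.stats import chi2_contingency

def median_test_2(x, y):
    """Two-sample median test: count observations above vs. at-or-below
    the pooled median and test the 2x2 table with a chi-square."""
    m = np.median(np.concatenate([x, y]))
    table = np.array([[np.sum(x > m), np.sum(y > m)],
                      [np.sum(x <= m), np.sum(y <= m)]])
    stat, p, _, _ = chi2_contingency(table)
    return stat, p, m

# illustrative samples only
stat, p, m = median_test_2(np.array([1, 2, 3, 4, 5, 6]),
                           np.array([4, 5, 6, 7, 8, 9]))
```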

7.
In response to the unpredictability of both food availability and core offspring failure, parents of many avian species initially produce more offspring than they commonly rear (overproduction). When parental investment is insufficient to raise the whole brood, the handicap of hatching last means 'marginal' chicks are less likely to survive if brood reduction occurs. Conversely, if marginal offspring are required as replacements for failed 'core' chicks, or parental investment is sufficient to rear the whole brood, the handicap imposed on marginal chicks must be reversible for overproduction to be a viable strategy. I investigated the ability of marginal offspring to overcome the handicap imposed by hatching asynchrony using a combination of a field experiment, designed to manipulate both the amount of total competition and the relative competitive ability of chicks within a brood, and data on the growth and survival of unmanipulated three-chick broods from three consecutive years. The results indicate that, even when resources are abundant, marginal offspring do not begin to overcome the competitive handicap imposed by hatching asynchrony until the period of growth when energetic requirements reach their peak and subsequent survival to fledging is almost assured. This is apparently a consequence of parents controlling the allocation of early parental investment, so that any brood-reduction 'decisions' can be left as late as possible. Marginal chicks initially channel resources into maintaining mass relative to skeletal size, as a buffer against starvation. However, this also reduces their competitiveness, so if conditions are poor marginal chicks are rapidly out-competed, lose condition and die. Conversely, when food availability is good, marginal offspring devote more resources to skeletal growth and quickly close the gap on their core siblings, meaning the handicap is reversible.
The benefits of overproduction and hatching asynchrony as reproductive strategies to maximise success in Lesser Black-backed Gulls are discussed in relation to the reproductive alternatives.

8.
For square contingency tables with ordered categories, this note proposes a new method of applying Tomizawa's (1987) 1-weight modified marginal homogeneity models, and applies it to the 4×4 tables on unaided vision analysed by Tomizawa (1987) and by Stuart (1955). Fitting those models with the new method yields a possible explanation of the data.

9.
Several authors have noted the dependence of kappa measures of inter-rater agreement on the marginal distributions of the contingency tables displaying the joint ratings. This paper introduces a smoothed version of kappa computed after raking the table to achieve pre-specified marginal distributions. Comparing kappa with raked kappa for various margins can indicate the extent of the dependence on the margins, and how much of the lack of agreement is due to marginal heterogeneity.
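Raking can be sketched as iterative proportional fitting followed by an ordinary kappa computation. A minimal sketch, with an invented 2×2 agreement table and invented uniform target margins:

```python
import numpy as np

def rake(table, row_targets, col_targets, iters=500):
    """Iterative proportional fitting: alternately rescale rows and
    columns until the margins match the pre-specified targets."""
    t = np.array(table, dtype=float)
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]
        t *= (col_targets / t.sum(axis=0))[None, :]
    return t

def kappa(table):
    """Cohen's kappa for a square table of joint ratings."""
    p = np.array(table, dtype=float)
    p /= p.sum()
    p_obs = np.trace(p)
    p_exp = p.sum(axis=1) @ p.sum(axis=0)
    return (p_obs - p_exp) / (1 - p_exp)

obs = np.array([[40.0, 10.0], [5.0, 45.0]])
raked = rake(obs, np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

Comparing kappa(obs) with kappa(raked) across different target margins shows how much of the (dis)agreement is driven by the margins.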

10.
Differences in sensory acuity and hedonic reactions to products lead to latent groups in pooled ratings data. Manufacturing locations and time differences are also sources of rating heterogeneity. Intensity and hedonic ratings are ordered categorical data. Categorical responses follow a multinomial distribution, and this distribution can be applied to data pooled over trials if the multinomial probabilities are constant from trial to trial. The common test statistic for comparing vectors of proportions or frequencies is the Pearson chi-square statistic. When ratings data are obtained from repeated ratings experiments or from a cluster sampling procedure, the covariance matrix for the vector of category proportions can differ dramatically from the one assumed under the multinomial model because of inter-trial variation. This effect is referred to as overdispersion. The standard multinomial model does not fit overdispersed multinomial data. The practical implication is that an inflated Type I error can lead to a seriously erroneous conclusion. Another implication is that overdispersion is a measurable quantity that may be of interest in its own right, because it can signal the presence of latent segments. The Dirichlet-multinomial (DM) model is introduced in this paper to fit overdispersed intensity and hedonic ratings data. Methods for estimating the parameters of the DM model, and test statistics based on them for testing against a specified vector or comparing vectors of proportions, are given. A novel theoretical contribution of this paper is a method for calculating the power of the tests, which is useful both in evaluating the tests and in determining sample size and the number of trials. A test for goodness of fit of the multinomial model against the DM model is also given. The DM model can be extended further to the Generalized Dirichlet-Multinomial (GDM) model, in which multiple sources of variation are considered. The GDM model and its applications are discussed, and applications of the DM and GDM models in sensory and consumer research are illustrated with numerical examples.
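The overdispersion mechanism is easy to simulate: draw trial-specific category probabilities from a Dirichlet whose mean is the nominal probability vector, then sample multinomial counts. All numbers below (probabilities, precision parameter gamma, trial counts) are arbitrary illustration values, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2024)
n_trials, n_items = 2000, 20
probs = np.array([0.2, 0.3, 0.5])

# Ordinary multinomial: the same category probabilities on every trial.
multi = rng.multinomial(n_items, probs, size=n_trials)

# Dirichlet-multinomial: each trial gets its own probability vector
# drawn from a Dirichlet with mean `probs`; smaller gamma means more
# trial-to-trial variation, hence more overdispersion.
gamma = 5.0
trial_probs = rng.dirichlet(gamma * probs, size=n_trials)
dm = np.array([rng.multinomial(n_items, p) for p in trial_probs])
```

The DM counts have the same means as the multinomial counts but inflated variances, by a factor (n + gamma)/(1 + gamma), which is exactly the extra variation the standard Pearson chi-square analysis ignores.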

11.
Carboxypeptidase Y pulses, applied after various times of refolding, were employed to probe the accessibility of the C-terminus of RNase A during the refolding process. The increase in resistance to proteolytic cleavage was measured by determining the amount of liberated C-terminal amino acids and by activity assays. The results indicate that the C-terminus of RNase A becomes inaccessible early in the course of refolding if folding is carried out at low temperatures under conditions that effectively stabilize the native state. At higher temperatures (25 °C) or under conditions of marginal stability, intermediates are not populated and protection against proteolytic cleavage is not detectable before the formation of the native state. The method described may be used to monitor the accessibility of the C-terminus of various proteins during refolding. However, intermediates on the folding pathway can only be observed if the native state is stable against carboxypeptidase attack.

12.
A method for analysing dependent agreement data with categorical responses is proposed. A generalized estimating equation approach is developed with two sets of equations: the first models the marginal distribution of the categorical ratings, and the second models the pairwise association of ratings, with the kappa coefficient as the metric. Covariates can be incorporated into both sets of equations. The approach is compared with a latent variable model that assumes an underlying multivariate normal distribution, in which the intraclass correlation coefficient is used as the measure of association. Examples are drawn from a cervical ectopy study and the National Heart, Lung, and Blood Institute Veteran Twin Study.

13.
This work is motivated by clinical trials in chronic heart failure, where treatment has effects both on morbidity (assessed as recurrent non-fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. More often, however, clinical trial results are presented as treatment effect estimates derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen-Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes when these are actually linked by shared or dependent frailty terms. First we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to, or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated by the results of the CHARM-Preserved trial (where CHARM is the 'Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity' programme).

14.
Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints to fully capture the presence and severity of treatment effects. Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distributions. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as a manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels, as it naturally incorporates the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate the toxicity of a chemical compound.
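The core permutation idea, that paired observations are exchangeable under the null so the (pre, post) labels can be swapped within each pair, can be sketched for a single ordinal endpoint. The data and the test statistic below are invented for illustration; the paper's procedure additionally handles multiple endpoints, subgroups, and multiplicity:

```python
import numpy as np

def perm_marginal_test(pre, post, n_perm=2000, seed=0):
    """Permutation test of marginal homogeneity for paired ordinal
    ratings: under exchangeability, swapping the (pre, post) labels
    within a pair leaves the joint distribution unchanged.  The
    statistic is the total absolute difference of marginal counts."""
    rng = np.random.default_rng(seed)
    k = int(max(pre.max(), post.max())) + 1
    def stat(a, b):
        return int(np.abs(np.bincount(a, minlength=k)
                          - np.bincount(b, minlength=k)).sum())
    observed = stat(pre, post)
    hits = 0
    for _ in range(n_perm):
        swap = rng.random(len(pre)) < 0.5
        hits += stat(np.where(swap, post, pre),
                     np.where(swap, pre, post)) >= observed
    return observed, (hits + 1) / (n_perm + 1)

# invented paired ratings on a 3-point ordinal scale
pre = np.repeat([0, 1, 2], [10, 10, 10])
post = np.repeat([0, 1, 2], [4, 10, 16])
observed, p = perm_marginal_test(pre, post)
```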

15.
We examined the relations among three common treatment outcome measures in irritable bowel syndrome (IBS): end-of-treatment global ratings by a physician, end-of-treatment patient global ratings, and measures derived from a daily symptom diary completed by the patient. Eighty-four IBS patients (53 female, 31 male) participated in a randomized controlled evaluation of three psychological treatment conditions for IBS. Treatment outcome measures from this trial (Blanchard et al., 1992) were used in the present methodological study. Physician global ratings were significantly correlated with patient global ratings (r = .45, p < .01). Both of these global ratings also correlated significantly with a composite score from patient diary ratings. Multiple regression analyses revealed that reductions in bloating and constipation accounted for 18% of the variance in patient global ratings. Global ratings at the end of treatment by either patient or physician were only partially related to symptom relief as measured by a daily diary.

16.
When analyzing clinical trials with a stratified population, homogeneity of treatment effects is a common assumption in survival analysis. However, in the context of recent developments in clinical trial design, which aim to test multiple targeted therapies in corresponding subpopulations simultaneously, the assumption of no treatment-by-stratum interaction seems inappropriate. It becomes an issue if the expected sample sizes of the strata make it infeasible to analyze the trial arms individually. Alternatively, one might choose as the primary aim to prove efficacy of the overall (targeted) treatment strategy. When testing for the overall treatment effect, a violation of the no-interaction assumption makes it necessary to deviate from standard methods that rely on this assumption. We investigate the performance of different methods for sample size calculation and data analysis under heterogeneous treatment effects. The commonly used sample size formula by Schoenfeld is compared to another formula by Lachin and Foulkes, and to an extension of Schoenfeld's formula allowing for stratification. Beyond the widely used (stratified) Cox model, we explore the lognormal shared frailty model and a two-step analysis approach as potential alternatives that attempt to adjust for interstrata heterogeneity. We carry out a simulation study for a trial with three strata and violations of the no-interaction assumption. The extension of Schoenfeld's formula to heterogeneous strata effects provides the most reliable sample size with respect to desired versus actual power. The two-step analysis and the frailty model prove more robust against loss of power caused by heterogeneous treatment effects than the stratified Cox model, and should be preferred in such situations.
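For orientation, the unstratified, homogeneous-effect version of Schoenfeld's formula gives the required number of events for a two-arm log-rank comparison; a standard sketch (not the stratified extension studied in the paper):

```python
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Required number of events by Schoenfeld's approximation:
    d = (z_{1-alpha/2} + z_{power})^2 / (p(1-p) * log(HR)^2),
    where p is the allocation proportion."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z ** 2 / (alloc * (1 - alloc)
                               * math.log(hazard_ratio) ** 2))

events = schoenfeld_events(hazard_ratio=0.7)
```

With 1:1 allocation, two-sided 5% alpha and 80% power, detecting a hazard ratio of 0.7 requires about 247 events.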

17.
Diversifying selection due to genotype-environment interaction can increase the genetic variation in natural populations. It is known, however, that the conditions for stable genetic polymorphism or marginal overdominance are quite restricted in this selection model. In this paper a simple model of diversifying selection is examined, and the following results are obtained: (1) Even when the conditions for marginal overdominance are not satisfied, if diversifying selection is operating, the frequency of mutants can be higher than under a simple mutation-selection balance. (2) This selection model causes a large amount of genetic load (environment load) even when the conditions for marginal overdominance are not satisfied, that is, even when the equilibrium frequency of the mutant is very low. From these results it can be concluded that the number of loci on which this type of diversifying selection operates is very small, if any.

18.
It is common in epidemiologic analyses to summarize continuous outcomes as falling above or below a threshold. With paired data and a threshold chosen without reference to the outcomes, McNemar's test of marginal homogeneity may be applied to the resulting dichotomous pairs when testing for equality of the marginal distributions of the underlying continuous outcomes. If the threshold is chosen to maximize the test statistic, however, referring the resulting statistic to the nominal χ² distribution is incorrect; instead, the p-value must be adjusted for the multiple comparisons. Here the distribution of a maximally selected McNemar's statistic is derived, and it is shown that an approximation due to Durbin (1985, Journal of Applied Probability 22, 99-122) may be used to estimate approximate p-values. The methodology is illustrated by an application to measurements of insulin-like growth factor-I (IGF-I) in matched prostate cancer cases and controls from the Physicians' Health Study. Results of simulation experiments assessing the accuracy of the approximation in moderate sample sizes are reported.
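The maximisation step is easy to reproduce: scan every candidate threshold, dichotomise the matched pairs, and keep the largest McNemar statistic. This sketch covers the scan only; the adjusted p-value from Durbin's approximation is not reproduced here:

```python
import numpy as np

def max_selected_mcnemar(case, control):
    """Largest (uncorrected) McNemar chi-square over all thresholds
    formed from the observed values.  Referring this maximum to a
    chi-square(1) distribution would be anticonservative; the p-value
    needs a multiple-comparisons adjustment."""
    case = np.asarray(case, dtype=float)
    control = np.asarray(control, dtype=float)
    best = 0.0
    for t in np.unique(np.concatenate([case, control])):
        b = int(np.sum((case > t) & (control <= t)))
        c = int(np.sum((case <= t) & (control > t)))
        if b + c > 0:
            best = max(best, (b - c) ** 2 / (b + c))
    return best
```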

19.
Because all spiders are predators and most subdue their prey with venom, it has been suggested that fear of spiders is an evolutionary adaptation. However, it has not been sufficiently examined whether other arthropods elicit similar fear or disgust. Our aim was to examine whether all arthropods are rated similarly, whether only potentially dangerous arthropods (spiders, bees/wasps) elicit comparable responses, or whether spiders are rated in a unique way. We presented pictures of arthropods (15 spiders, 15 beetles, 15 bees/wasps, and 15 butterflies/moths) to 76 students who rated each picture for fear, disgust, and how dangerous they thought the animal was. They also categorized each animal into one of the four animal groups. In addition, we assessed the participants' fear of spiders and estimates of trait anxiety. The ratings showed that spiders elicit significantly greater fear and disgust than any other arthropod group, and spiders were rated as more dangerous. Fear and disgust ratings of spider pictures significantly predicted the questionnaire scores for fear of spiders, whereas dangerousness ratings of spiders and ratings of other arthropods provided no predictive power. Thus, spider fear is in fact spider specific. Our results demonstrate that potential harmfulness alone cannot explain why spiders are feared so frequently.

20.
This paper addresses treatment effect heterogeneity (also referred to, more compactly, as 'treatment heterogeneity') in the context of a controlled clinical trial with binary endpoints. Treatment heterogeneity, variation in the true (causal) individual treatment effects, is explored using the concept of the potential outcome. This framework supposes the existence of latent responses for each subject corresponding to each possible treatment. In the context of a binary endpoint, treatment heterogeneity may be represented by the parameter π2, the probability that an individual would have a failure on the experimental treatment, if received, and would have a success on control, if received. Previous research derived bounds for π2 based on matched-pairs data. The present research extends this method to the blocked data context. Estimates (and their variances) and confidence intervals for the bounds are derived. We apply the new method to data from a renal disease clinical trial. In this example, bounds based on the blocked data are narrower than the corresponding bounds based only on the marginal success proportions. Some remaining challenges (including the possibility of further reducing bound widths) are discussed.
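The marginal-only bounds that the blocked-data bounds are compared against follow from Fréchet-style inequalities on a joint cell probability. A minimal sketch (the input proportions below are invented, and this is not the paper's blocked-data method):

```python
def pi2_bounds(p_success_exp, p_success_ctrl):
    """Frechet bounds on pi2 = P(failure on experimental treatment
    AND success on control), using only the two marginal success
    proportions."""
    p_fail_exp = 1.0 - p_success_exp
    lower = max(0.0, p_fail_exp + p_success_ctrl - 1.0)
    upper = min(p_fail_exp, p_success_ctrl)
    return lower, upper

# invented marginal success proportions
lower, upper = pi2_bounds(p_success_exp=0.6, p_success_ctrl=0.5)
```

The blocked-data approach in the paper tightens these marginal bounds by exploiting the block structure.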


Copyright©北京勤云科技发展有限公司  京ICP备09084417号