20 similar records found
1.
Weighted least-squares approach for comparing correlated kappa (total citations: 3; self-citations: 0; other citations: 3)
In the medical sciences, studies are often designed to assess the agreement between different raters or different instruments. The kappa coefficient is a popular index of agreement for binary and categorical ratings. Here we focus on testing for the equality of two dependent kappa coefficients. We use the weighted least-squares (WLS) approach of Koch et al. (1977, Biometrics 33, 133-158) to take into account the correlation between the estimated kappa statistics. We demonstrate how SAS PROC CATMOD can be used to test for the equality of dependent Cohen's kappa coefficients and dependent intraclass kappa coefficients with nominal categorical ratings. We also test for the equality of dependent Cohen's kappa and dependent weighted kappa with ordinal ratings. The major advantage of the WLS approach is that it gives the data analyst a way of testing dependent kappas with popular SAS software, and it can handle any number of categories. Analyses of three biomedical studies are used for illustration.
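The WLS/PROC CATMOD machinery described above is SAS-specific. As a language-neutral illustration of the underlying question (are two kappas equal when both are estimated on the same subjects?), the sketch below computes two dependent Cohen's kappas against a common standard and forms a Wald-type statistic, using a subject-level bootstrap in place of the paper's weighted least-squares covariance; the data layout and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two equal-length vectors of nominal ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    n = len(r1)
    table = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats]
                      for a in cats]) / n
    po = np.trace(table)                          # observed agreement
    pe = table.sum(axis=1) @ table.sum(axis=0)    # chance agreement from the margins
    return (po - pe) / (1 - pe)

def test_equal_dependent_kappas(std, rater_a, rater_b, n_boot=2000, seed=0):
    """Wald-type test of kappa(std, rater_a) == kappa(std, rater_b), using a
    subject-level bootstrap for the standard error of the difference."""
    std, rater_a, rater_b = map(np.asarray, (std, rater_a, rater_b))
    rng = np.random.default_rng(seed)
    k1, k2 = cohens_kappa(std, rater_a), cohens_kappa(std, rater_b)
    n, diffs = len(std), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)               # resample subjects, keeping ratings paired
        diffs.append(cohens_kappa(std[idx], rater_a[idx]) -
                     cohens_kappa(std[idx], rater_b[idx]))
    z = (k1 - k2) / np.std(diffs, ddof=1)
    return k1, k2, z

rng = np.random.default_rng(1)
gold = rng.integers(0, 2, 80)                         # hypothetical binary standard
a = np.where(rng.random(80) < 0.85, gold, 1 - gold)   # instrument A agrees ~85% of the time
b = np.where(rng.random(80) < 0.70, gold, 1 - gold)   # instrument B agrees ~70% of the time
print(test_equal_dependent_kappas(gold, a, b))
```

A |z| well above 1.96 would suggest the two instruments agree with the standard to different degrees; the paper's WLS approach reaches the same kind of conclusion analytically rather than by resampling.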
2.
When phylogenetic trees are constructed, their topologies can be inconsistent across different genomic regions. Bayesian concordance analysis (BCA) addresses this problem by performing phylogenetic analysis at the whole-genome scale and then quantifying the discordance. This approach was applied to the 129S1 mouse strain, which was derived from repeated backcrossing between C3H/Hu mice (Mus musculus) and 129Sv mice. A corresponding set of sequence files was processed with several bioinformatics tools (e.g. VCFtools, RepeatMasker, PAUP* 4.0, MrModeltest, MrBayes), supplemented by Perl scripts, for repeat masking, sequence alignment and related steps, ultimately yielding genome-wide information on phylogenetic discordance among different genomic segments. Across all 99 loci on mouse chromosome 10, the topology placing the 129S1 and 129Sv strains as sister groups accounted for 84.7% of the loci (the highest posterior support), demonstrating that C3H/Hu mice contributed relatively little to the 129S1 genome. The results indicate that Bayesian concordance analysis is useful for studying the evolutionary histories of different genomic regions.
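The pipeline above depends on external tools (VCFtools, RepeatMasker, PAUP* 4.0, MrModeltest, MrBayes, Perl scripts). Purely as an illustration of the final tallying step (summarizing which topology each locus supports), here is a small Python sketch with made-up per-locus posterior supports; it is not part of the published workflow.

```python
from collections import Counter

# Hypothetical per-locus posterior support for each candidate topology of
# (129S1, 129Sv, C3H/Hu), as it might be summarized from MrBayes output.
loci = [
    {"(129S1,129Sv)": 0.91, "(129S1,C3H)": 0.06, "(129Sv,C3H)": 0.03},
    {"(129S1,129Sv)": 0.40, "(129S1,C3H)": 0.55, "(129Sv,C3H)": 0.05},
    {"(129S1,129Sv)": 0.77, "(129S1,C3H)": 0.10, "(129Sv,C3H)": 0.13},
    # ... one entry per locus (99 loci on chromosome 10 in the study)
]

best = Counter(max(post, key=post.get) for post in loci)   # best-supported topology per locus
total = sum(best.values())
for topo, count in best.most_common():
    print(f"{topo}: {count}/{total} loci ({100 * count / total:.1f}%)")
```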
3.
Fay MP. Biostatistics (Oxford, England) 2005;6(1):171-180
Agreement coefficients quantify how well a set of instruments agree in measuring some response on a population of interest. Many standard agreement coefficients (e.g. kappa for nominal, weighted kappa for ordinal, and the concordance correlation coefficient (CCC) for continuous responses) may indicate increasing agreement as the marginal distributions of the two instruments become more different, even as the true cost of disagreement stays the same or increases. This problem has been described for the kappa coefficients; here we describe it for the CCC. We propose a solution for all types of responses in the form of random marginal agreement coefficients (RMACs), which use a different adjustment for chance than the standard agreement coefficients. Standard agreement coefficients model chance agreement using expected agreement between two independent random variables each distributed according to the marginal distribution of one of the instruments. RMACs adjust for chance by modeling two independent readings both from the mixture distribution that averages the two marginal distributions. In other words, both independent readings represent first a random choice of instrument, then a random draw from the marginal distribution of the chosen instrument. The advantage of the resulting RMAC is that differences between the two marginal distributions will not induce greater apparent agreement. As with the standard agreement coefficients, the RMACs do not require any assumptions about the bivariate distribution of the random variables associated with the two instruments. We describe the RMAC for nominal, ordinal and continuous data, and show through the delta method how to approximate the variances of some important special cases.
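For nominal data the two chance models described above can be written down directly: standard kappa removes the agreement expected from independent draws, one from each instrument's own marginal, while the RMAC removes the agreement expected from two independent draws from the averaged (mixture) marginal. The sketch below implements that reading of the abstract for a two-way classification table; it is an illustration, not the author's code.

```python
import numpy as np

def kappa_and_rmac(table):
    """Standard kappa and a nominal-data RMAC from a joint classification table,
    following the two chance models described in the abstract."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    po = np.trace(p)                        # observed agreement
    row, col = p.sum(axis=1), p.sum(axis=0)
    pe_kappa = row @ col                    # chance: independent draws, one from each marginal
    mix = (row + col) / 2                   # mixture marginal: pick an instrument at random, then draw
    pe_rmac = mix @ mix                     # chance: two independent draws from the mixture
    return (po - pe_kappa) / (1 - pe_kappa), (po - pe_rmac) / (1 - pe_rmac)

# Example: two binary instruments with noticeably different marginals
print(kappa_and_rmac([[40, 10], [25, 25]]))
```

When the two marginal distributions coincide, the mixture equals either marginal and the two coefficients agree; they diverge as the marginals drift apart, which is exactly the behaviour the RMAC is designed to control.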
4.
5.
6.
Josep L. Carrasco. Biometrics 2010;66(3):897-904
The classical concordance correlation coefficient (CCC) for measuring agreement among a set of observers assumes normally distributed data and a linear relationship between the mean and the subject and observer effects. Here the CCC is generalized to accommodate any distribution from the exponential family by means of generalized linear mixed model (GLMM) theory and applied to the case of overdispersed count data. An example of CD34+ cell count data is provided to show the applicability of the procedure; different CCCs are defined and applied to the data by changing the GLMM that fits the data. A simulation study is carried out to explore the behavior of the procedure with small and moderate sample sizes.
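For reference, the classical (normal-theory) CCC that the GLMM approach generalizes can be computed from simple moments. A minimal sketch using Lin's moment estimator on simulated paired count readings (both the estimator choice and the data here are assumptions of this example, not material from the paper):

```python
import numpy as np

def lin_ccc(x, y):
    """Classical (normal-theory) concordance correlation coefficient, the
    moment estimator that the GLMM-based CCC generalizes."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

# Illustrative paired count readings from two observers (not the CD34+ data)
rng = np.random.default_rng(1)
obs1 = rng.poisson(20, 50)
obs2 = obs1 + rng.poisson(2, 50)         # observer 2 reads systematically a little higher
print(lin_ccc(obs1, obs2))
```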
7.
This article considers receiver operating characteristic (ROC) analysis for bivariate marker measurements. The research interest is to extend tools and rules from the univariate marker setting to the bivariate marker setting for evaluating the predictive accuracy of markers using a tree-based classification rule. Using an and–or classifier, an ROC function together with a weighted ROC function (WROC) and their conjugate counterparts are proposed for examining the performance of bivariate markers. The proposed functions evaluate the performance of and–or classifiers among all possible combinations of marker values, and are ideal measures for understanding the predictability of biomarkers in the target population. Specific features of the ROC and WROC functions and other related statistics are discussed in comparison with the familiar properties of the univariate marker. Nonparametric methods are developed for estimating the ROC-related functions, the (partial) area under the curve, and the concordance probability. With emphasis on the average performance of markers, the proposed procedures and inferential results are useful for evaluating marker predictability based on single or bivariate marker (or test) measurements with different choices of markers, and for evaluating different and–or combinations in classifiers. The inferential results developed in this article also extend to multivariate markers with a sequence of arbitrarily combined and–or classifiers.
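As a concrete reading of the and–or idea, the sketch below sweeps threshold pairs for two continuous markers and records the (FPR, TPR) operating points of the "both markers high" (and) and "either marker high" (or) rules on simulated data. The WROC and its conjugates are not implemented, and all names and data here are illustrative assumptions rather than the article's estimators.

```python
import numpy as np

def and_or_roc_points(m1, m2, labels):
    """Sweep threshold pairs for two continuous markers and record the (FPR, TPR)
    operating points of the 'and' rule (both markers high) and the 'or' rule
    (either marker high)."""
    m1, m2, y = map(np.asarray, (m1, m2, labels))
    pts = {"and": [], "or": []}
    for c1 in np.unique(m1):
        for c2 in np.unique(m2):
            for rule, positive in (("and", (m1 >= c1) & (m2 >= c2)),
                                   ("or",  (m1 >= c1) | (m2 >= c2))):
                tpr = positive[y == 1].mean()
                fpr = positive[y == 0].mean()
                pts[rule].append((fpr, tpr))
    return pts

rng = np.random.default_rng(2)
y = rng.integers(0, 2, 200)
m1 = rng.normal(1.0 * y, 1.0)            # marker 1: moderately informative
m2 = rng.normal(0.5 * y, 1.0)            # marker 2: weaker
pts = and_or_roc_points(m1, m2, y)
print(len(pts["and"]), "'and' operating points;", len(pts["or"]), "'or' operating points")
```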
8.
Formulas for the variance of directly standardized rates are given for three different sampling models. The three models are product multinomial models in which population totals are fixed by design, stratum totals are fixed by design, or cell (population-by-stratum) totals are fixed by design. Asymptotic distributions are derived for each model. A discussion of the relevance and use of standardized rates, and of the need for distribution theory, is also provided.
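To fix ideas, a directly standardized rate is a weighted average of stratum-specific rates with weights from a standard population; under a simple binomial model with fixed stratum denominators (one convenient model, not necessarily one of the three treated in the paper), its variance is the weighted sum of the stratum binomial variances. A minimal sketch with made-up numbers:

```python
import numpy as np

def standardized_rate(events, denominators, std_pop):
    """Directly standardized rate and a binomial-model standard error
    (independent strata, stratum denominators treated as fixed)."""
    d = np.asarray(events, float)
    n = np.asarray(denominators, float)
    w = np.asarray(std_pop, float)
    w = w / w.sum()                        # standard-population weights
    p = d / n                              # stratum-specific rates
    rate = w @ p
    var = np.sum(w ** 2 * p * (1 - p) / n)
    return rate, np.sqrt(var)

# Illustrative three-stratum example (all numbers made up)
print(standardized_rate(events=[12, 30, 44],
                        denominators=[1000, 1500, 900],
                        std_pop=[5000, 4000, 3000]))
```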
9.
D. S. Virk, P. S. Virk, B. K. Mangat, G. Harinarayana. TAG Theoretical and Applied Genetics (Theoretische und angewandte Genetik) 1991;81(4):559-561
The normally used joint linear regression analysis (OLS) is not appropriate for comparing estimates of stability parameters of varieties when the error variances of site means are heterogeneous. Weighted regression analysis (WLS), in these situations, yields more precise estimates of stability parameters. A comparison of the two analytical methods using the grain yield (kg ha^-1) data of 12 varieties and one hybrid of pearl millet [Pennisetum typhoides (Burm.) S. & H.], tested at 26 sites in India, revealed that the weighted regression analysis yields more efficient estimates of regression coefficients (b_i) than the ordinary regression analysis, and that the standard errors of the b_i values were reduced by up to 43%. The estimated b_i differed between the two procedures. Not only was the number of varieties with b_i significantly deviating from unity greater with weighted regression analysis (five varieties) than with ordinary regression analysis (one variety), but the classification of varieties as possessing general or specific adaptation also differed between the two procedures.
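The contrast above is between ordinary joint linear regression and weighted regression with weights taken as reciprocals of the site error variances. A minimal sketch of that comparison on simulated data (the 26-site setting, the weighting scheme and all numbers here are illustrative assumptions, not the pearl millet data):

```python
import numpy as np

def stability_slope(site_means, env_index, site_error_var=None):
    """Joint-regression stability coefficient b_i of a variety's site means on the
    environmental index; if site error variances are given, inverse-variance
    weights are used (WLS), otherwise ordinary least squares (OLS)."""
    y, x = np.asarray(site_means, float), np.asarray(env_index, float)
    w = np.ones_like(y) if site_error_var is None else 1.0 / np.asarray(site_error_var, float)
    X = np.column_stack([np.ones_like(x), x])
    XtW = X.T * w                                  # X' W with W diagonal
    beta = np.linalg.solve(XtW @ X, XtW @ y)       # (X'WX)^{-1} X'Wy
    se_b = np.sqrt(np.linalg.inv(XtW @ X)[1, 1])   # SE of the slope, up to the residual scale
    return beta[1], se_b

rng = np.random.default_rng(3)
env = rng.normal(0.0, 1.0, 26)                     # environmental index over 26 sites
err_var = rng.uniform(0.2, 3.0, 26)                # heterogeneous site error variances
y = 1.0 + 1.3 * env + rng.normal(0.0, np.sqrt(err_var))
print("OLS:", stability_slope(y, env))
print("WLS:", stability_slope(y, env, err_var))
```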
10.
11.
T. Hohls, G. P. Y. Clarke, H. O. Gevers. Biometrical Journal (Biometrische Zeitschrift) 1994;36(8):963-981
The matrix algebra needed for analysing data from diallel experiments across several environments is provided. Practical considerations for subdividing large incidence matrices and appending constraint matrices are discussed to allow large experiments to be analysed on desktop computers. In particular, a combining-ability model and a heterotic-pattern model are used to show how heterogeneous variances can be accommodated by weighting. The models provide valuable information on the performance of lines in crosses, the heterotic patterns of the lines, and the heterotic groups to which they belong. The methodology is illustrated by application to a diallel cross conducted among 12 elite white modified opaque-2 maize inbred lines.
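As a toy version of the incidence-matrix-plus-constraint idea, the sketch below fits a simple general-combining-ability model y_ij = mu + g_i + g_j to half-diallel cross means by weighted least squares, appending the constraint sum(g) = 0 as an extra, heavily weighted row; the model, data and weights are hypothetical and far simpler than the multi-environment analysis in the paper.

```python
import numpy as np

def gca_effects(cross_means, weights, n_parents):
    """Weighted least-squares fit of a toy combining-ability model
    y_ij = mu + g_i + g_j for half-diallel cross means, with the constraint
    sum(g) = 0 appended as an extra, heavily weighted row."""
    pairs = sorted(cross_means)
    y = np.array([cross_means[p] for p in pairs], float)
    w = np.array([weights[p] for p in pairs], float)
    X = np.zeros((len(pairs), 1 + n_parents))
    X[:, 0] = 1.0                                      # overall mean
    for row, (i, j) in enumerate(pairs):
        X[row, 1 + i] = X[row, 1 + j] = 1.0            # incidence of parents i and j
    X = np.vstack([X, np.r_[0.0, np.ones(n_parents)]]) # constraint row: sum(g) = 0
    y = np.r_[y, 0.0]
    w = np.r_[w, 1e6]
    sw = np.sqrt(w)[:, None]
    beta = np.linalg.lstsq(sw * X, sw.ravel() * y, rcond=None)[0]
    return beta[0], beta[1:]                           # mu, GCA effects g_1..g_p

means = {(0, 1): 8.2, (0, 2): 7.5, (0, 3): 6.8, (1, 2): 9.1, (1, 3): 8.9, (2, 3): 7.7}
wts = {k: 1.0 for k in means}                          # e.g. reciprocal error variances
print(gca_effects(means, wts, 4))
```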
12.
13.
R. K. Misra. Biometrical Journal (Biometrische Zeitschrift) 1979;21(8):763-766
Numerous occurrences of natural hybridization are known in many groups of animals. Hybridization has a bearing on growth, nomenclature, speciation, genetics and wildlife management. It is well recognised that demonstrating intermediacy on several characters makes the identification of hybrids more certain than demonstration based on a single character, and that differences among the hybrids and the parental populations should be analysed separately for variation due to additive genetic (A) and non-additive genetic (NA) factors. In the present paper, (i) it is pointed out that the assumption of equal covariance matrices for the hybrids and the parental populations may not always be valid, and (ii) a multivariate method is presented for testing hypotheses about differences between the hybrids and the parents, partitioned into the A and NA factors, when the covariance matrices are unequal.
14.
R. S. Corruccini. American Journal of Physical Anthropology 1972;37(3):373-388
Genetic evidence from the skeletal remains of three Pueblo populations, those of Hawikuh, Pueblo Bonito, and Puye, does not indicate that important racial differences arose between these groups, either through genetic influx or selection. They formed a unified group when compared with several non-Southwestern skeletal samples. Significant genetic variability, however, exists between each pair of populations, contradicting the idea of their belonging to a unified Pueblo Indian gene pool or to a fixed physical type. No differences can be detected between prehistoric populations and those contacted by Europeans. Genetic drift, supplemented by the action of non-random cultural associations and disease, provides a better explanation of the biological variability of Pueblo Indian populations than gene flow or directional selection.
15.
Louis Anthony Cox Jr. Human and Ecological Risk Assessment 1996;2(1):150-174
The traditional q1* methodology for constructing upper confidence limits (UCLs) for the low-dose slopes of quantal dose-response functions has two limitations: (i) it is based on an asymptotic statistical result that has been shown via Monte Carlo simulation not to hold in practice for small, real bioassay experiments (Portier and Hoel, 1983); and (ii) it assumes that the multistage model (which represents cumulative hazard as a polynomial function of dose) is correct. This paper presents an uncertainty analysis approach for fitting dose-response functions to data that does not require specific parametric assumptions or depend on asymptotic results. It has the advantage that the resulting estimates of the dose-response function (and uncertainties about it) no longer depend on the validity of an assumed parametric family nor on the accuracy of the asymptotic approximation. The method derives posterior densities for the true response rates in the dose groups, rather than deriving posterior densities for model parameters, as in other Bayesian approaches (Sielken, 1991), or resampling the observed data points, as in the bootstrap and other resampling methods. It does so by conditioning constrained maximum-entropy priors on the observed data. Monte Carlo sampling of the posterior (constrained, conditioned) probability distributions generates values of response probabilities that might be observed if the experiment were repeated with very large sample sizes. A dose-response curve is fit to each such simulated dataset. If no parametric model has been specified, then a generalized representation (e.g., a power-series or orthonormal polynomial expansion) of the unknown dose-response function is fit to each simulated dataset using “model-free” methods. The simulation-based frequency distribution of all the dose-response curves fit to the simulated datasets yields a posterior distribution function for the low-dose slope of the dose-response curve. An upper confidence limit on the low-dose slope is obtained directly from this posterior distribution. This “Data Cube” procedure is illustrated with a real dataset for benzene, and is seen to produce more policy-relevant insights than does the traditional q1* methodology. For example, it shows how far apart the 90%, 95%, and 99% limits are, and reveals how uncertainty about total and incremental risk varies with dose level (typically being dominated at low doses by uncertainty about the response of the control group, and at high doses by sampling variability). Strengths and limitations of the Data Cube approach are summarized, and potential decision-analytic applications to making better informed risk management decisions are briefly discussed.
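The core loop described above is: draw plausible group response rates from a posterior, refit a dose-response curve to each draw, and read a UCL for the low-dose slope off the resulting distribution. The sketch below follows that loop but substitutes Jeffreys Beta posteriors, a crude monotonicity rejection step and a straight-line refit for the paper's constrained maximum-entropy priors and model-free expansions, with made-up bioassay counts; it illustrates the resampling idea, not the Data Cube procedure itself.

```python
import numpy as np

def low_dose_slope_ucl(doses, events, group_sizes, n_sim=5000, q=0.95, seed=4):
    """Monte Carlo sketch: draw plausible group response probabilities from a
    posterior, refit a dose-response curve to each draw, and take the q-quantile
    of the resulting low-dose slopes as the UCL. Jeffreys Beta posteriors, a
    crude monotonicity rejection step and a straight-line refit stand in for the
    constrained maximum-entropy priors and model-free expansions of the paper."""
    rng = np.random.default_rng(seed)
    x = np.asarray(doses, float)
    d = np.asarray(events, float)
    n = np.asarray(group_sizes, float)
    slopes = []
    while len(slopes) < n_sim:
        p = rng.beta(d + 0.5, n - d + 0.5)     # one plausible set of group response rates
        if np.all(np.diff(p) >= 0):            # keep only non-decreasing dose-response draws
            slopes.append(np.polyfit(x, p, 1)[0])
    return np.quantile(slopes, q)

# Hypothetical four-group quantal bioassay (control plus three dose groups)
print(low_dose_slope_ucl(doses=[0, 1, 3, 10],
                         events=[1, 3, 7, 19],
                         group_sizes=[50, 50, 50, 50]))
```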
16.
Analysis of contingency tables under cluster sampling (total citations: 2; self-citations: 0; other citations: 2)
17.
Weighted averaging, logistic regression and the Gaussian response model (total citations: 18; self-citations: 0; other citations: 18)
The indicator value and ecological amplitude of a species with respect to a quantitative environmental variable can be estimated from data on species occurrence and environment. A simple weighted averaging (WA) method for estimating these parameters is compared by simulation with the more elaborate method of Gaussian logistic regression (GLR), a form of the generalized linear model which fits a Gaussian-like species response curve to presence-absence data. The indicator value and the ecological amplitude are expressed by two parameters of this curve, termed the optimum and the tolerance, respectively. When a species is rare and has a narrow ecological amplitude — or when the distribution of quadrats along the environmental variable is reasonably even over the species' range, and the number of quadrats is small — then WA is shown to approach GLR in efficiency. Otherwise WA may give misleading results. GLR is therefore preferred as a practical method for summarizing species' distributions along environmental gradients. Formulas are given to calculate species optima and tolerances (with their standard errors), and a confidence interval for the optimum from the GLR output of standard statistical packages. Nomenclature follows Heukels-van der Meijden (1983). We would like to thank Drs I. C. Prentice, N. J. M. Gremmen and J. A. Hoekstra for comments on the paper. We are grateful to Ir. Th. A. de Boer (CABO, Wageningen) for permission to use the data of the first example.
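Both estimators described above are straightforward to reproduce: WA takes the mean (and spread) of the gradient values at the quadrats where the species is present, while GLR fits a logistic regression with linear and quadratic terms and converts the coefficients to an optimum, -b1/(2*b2), and a tolerance, 1/sqrt(-2*b2). A minimal sketch on simulated presence-absence data, with statsmodels standing in for the "standard statistical packages" mentioned in the abstract:

```python
import numpy as np
import statsmodels.api as sm

def wa_and_glr(x, present):
    """Weighted-averaging (WA) and Gaussian logistic regression (GLR) estimates of
    a species optimum and tolerance from presence/absence data along gradient x."""
    x = np.asarray(x, float)
    y = np.asarray(present, int)
    wa_opt = x[y == 1].mean()                       # WA: average gradient value at presences
    wa_tol = x[y == 1].std(ddof=1)
    X = sm.add_constant(np.column_stack([x, x ** 2]))
    b0, b1, b2 = sm.Logit(y, X).fit(disp=0).params  # logit p = b0 + b1*x + b2*x^2
    glr_opt = -b1 / (2 * b2)                        # optimum of the Gaussian-like response curve
    glr_tol = 1 / np.sqrt(-2 * b2)                  # tolerance ("width" of the curve)
    return (wa_opt, wa_tol), (glr_opt, glr_tol)

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 300)                         # gradient values of the quadrats
p = 0.8 * np.exp(-(x - 4.0) ** 2 / (2 * 1.5 ** 2))  # Gaussian response: optimum 4, tolerance 1.5
y = rng.binomial(1, p)
print(wa_and_glr(x, y))
```

With quadrats spread evenly over the gradient, as here, the two optimum estimates tend to be close, which matches the condition for WA efficiency noted in the abstract.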
18.
Erroneous behaviour of MixSIR, a recently published Bayesian isotope mixing model: a discussion of Moore & Semmens (2008) (total citations: 1; self-citations: 0; other citations: 1)
The application of Bayesian methods to stable isotopic mixing problems, including inference of diet, has the potential to revolutionise ecological research. Using simulated data, we show that a recently published model, MixSIR, fails to correctly identify the true underlying dietary proportions more than 50% of the time and fails with increasing frequency as additional unquantified error is added. While the source of the fundamental failure remains elusive, mitigating solutions are suggested for dealing with additional unquantified variation. Moreover, MixSIR uses a formulation for a prior distribution that results in an opaque and unintuitive covariance structure.
19.
20.
Daniel H. Freeman, Jean L. Freeman, Gary G. Koch. Biometrical Journal (Biometrische Zeitschrift) 1978;20(1):29-40
Weibull models are fitted to synthetic life table data by applying weighted least-squares analysis to log-log functions constructed from appropriate underlying contingency tables. As such, the resulting estimates and test statistics are based on the linearized minimum modified chi-square criterion and thus have satisfactory properties in moderately large samples. The basic methodology is illustrated with an example which is bivariate in the sense of involving two simultaneous, but non-competing, vital events. For this situation, the estimation of Weibull model parameters is described for both marginal and certain conditional distributions, either individually or jointly.
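The log-log linearization rests on the identity that for Weibull survival S(t) = exp(-(t/b)^a), log(-log S(t)) = a*log(t) - a*log(b) is linear in log(t). The sketch below fits that line to a small made-up life table by ordinary (unweighted) least squares; the paper instead derives the weights and covariance structure from the underlying contingency tables.

```python
import numpy as np

def weibull_from_life_table(times, at_risk, deaths):
    """Fit a Weibull model to life-table data by least squares on the
    log(-log) of the estimated survival function: for S(t) = exp(-(t/b)^a),
    log(-log S(t)) = a*log(t) - a*log(b), a straight line in log(t).
    (Unweighted here; the paper uses weights from the contingency-table
    covariance structure.)"""
    t = np.asarray(times, float)
    q = np.asarray(deaths, float) / np.asarray(at_risk, float)   # interval death probabilities
    S = np.cumprod(1 - q)                                        # life-table survival estimate
    yy = np.log(-np.log(S))
    xx = np.log(t)
    a, c = np.polyfit(xx, yy, 1)                                 # slope a, intercept c = -a*log(b)
    return a, np.exp(-c / a)                                     # shape, scale

print(weibull_from_life_table(times=[1, 2, 3, 4, 5],
                              at_risk=[1000, 940, 860, 760, 640],
                              deaths=[60, 80, 100, 120, 130]))
```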