Similar Articles
20 similar articles found (search time: 109 ms)
1.
J. M. Robins, S. D. Mark, W. K. Newey. Biometrics, 1992, 48(2): 479-495
In order to estimate the causal effects of one or more exposures or treatments on an outcome of interest, one has to account for the effect of "confounding factors" which both covary with the exposures or treatments and are independent predictors of the outcome. In this paper we present regression methods which, in contrast to standard methods, adjust for the confounding effect of multiple continuous or discrete covariates by modelling the conditional expectation of the exposures or treatments given the confounders. In the special case of a univariate dichotomous exposure or treatment, this conditional expectation is identical to what Rosenbaum and Rubin have called the propensity score. They have also proposed methods to estimate causal effects by modelling the propensity score. Our methods generalize those of Rosenbaum and Rubin in several ways. First, our approach straightforwardly allows for multivariate exposures or treatments, each of which may be continuous, ordinal, or discrete. Second, even in the case of a single dichotomous exposure, our approach does not require subclassification or matching on the propensity score so that the potential for "residual confounding," i.e., bias, due to incomplete matching is avoided. Third, our approach allows a rather general formalization of the idea that it is better to use the "estimated propensity score" than the true propensity score even when the true score is known. The additional power of our approach derives from the fact that we assume the causal effects of the exposures or treatments can be described by the parametric component of a semiparametric regression model. To illustrate our methods, we reanalyze the effect of current cigarette smoking on the level of forced expiratory volume in one second in a cohort of 2,713 adult white males. We compare the results with those obtained using standard methods.
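
As a purely illustrative sketch of the general idea (modelling the conditional expectation of the exposure given the confounders), the snippet below simulates a smoking/FEV1-style example and regresses the outcome on the exposure residual A − Ê[A | L]; that residual is uncorrelated with any function of the confounders, so its coefficient recovers the exposure effect under a linear causal model. This is not the semiparametric estimator of the paper; all data, effect sizes and variable names are hypothetical.

```python
# Minimal sketch (not the paper's estimator): adjust for confounding by
# modelling E[exposure | confounders] and regressing the outcome on the
# exposure residual.  Simulated data; names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(50, 10, n)                       # confounder
smoke_p = 1 / (1 + np.exp(-(-8 + 0.15 * age)))    # smoking depends on age
smoke = rng.binomial(1, smoke_p)
fev1 = 5.5 - 0.03 * age - 0.4 * smoke + rng.normal(0, 0.4, n)  # true effect -0.4

# Step 1: model the exposure given the confounder(s).
ps_fit = sm.Logit(smoke, sm.add_constant(age)).fit(disp=0)
resid = smoke - ps_fit.predict(sm.add_constant(age))

# Step 2: regress the outcome on the exposure residual; its coefficient
# estimates the exposure effect because the residual is (asymptotically)
# uncorrelated with any function of the confounders.
ols = sm.OLS(fev1, sm.add_constant(resid)).fit()
print(ols.params[1])   # should be near -0.4
```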

2.

Background

Clinical examination of trachoma is used to justify intervention in trachoma-endemic regions. Currently, field graders are certified by determining their concordance with experienced graders using the kappa statistic. Unfortunately, trachoma grading can be highly variable and there are cases where even expert graders disagree (borderline/marginal cases). Prior work has shown that inclusion of borderline cases tends to reduce apparent agreement, as measured by kappa. Here, we confirm those results and assess performance of trainees on these borderline cases by calculating their reliability error, a measure derived from the decomposition of the Brier score.

Methods and Findings

We trained 18 field graders using 200 conjunctival photographs from a community-randomized trial in Niger and assessed inter-grader agreement using kappa as well as reliability error. Three experienced graders scored each case for the presence or absence of trachomatous inflammation - follicular (TF) and trachomatous inflammation - intense (TI). A consensus grade for each case was defined as the one given by a majority of experienced graders. We classified cases into a unanimous subset if all 3 experienced graders gave the same grade. For both TF and TI grades, the mean kappa for trainees was higher on the unanimous subset; inclusion of borderline cases reduced apparent agreement by 15.7% for TF and 12.4% for TI. When we assessed the breakdown of the reliability error, we found that our trainees tended to over-call TF grades and under-call TI grades, especially in borderline cases.

Conclusions

The kappa statistic is widely used for certifying trachoma field graders. Exclusion of borderline cases, which even experienced graders disagree on, increases apparent agreement with the kappa statistic. Graders may agree less when exposed to the full spectrum of disease. Reliability error allows for the assessment of these borderline cases and can be used to refine an individual trainee's grading.
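
A minimal sketch of the two agreement measures discussed above, computed for one trainee against the consensus grade: Cohen's kappa and the reliability term of the Brier-score decomposition. The toy grades and the grouping-by-forecast implementation are illustrative assumptions, not the study's exact procedure.

```python
# Sketch: agreement of a trainee with the consensus grade, measured both by
# Cohen's kappa and by the reliability term of the Brier-score decomposition.
# Toy data; an illustration only, not the study's exact computation.
import numpy as np
from sklearn.metrics import cohen_kappa_score

consensus = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])   # expert majority grade
trainee   = np.array([1, 0, 0, 0, 1, 1, 1, 0, 1, 1])   # trainee's TF grade

print("kappa:", cohen_kappa_score(consensus, trainee))

# For hard 0/1 "forecasts", Murphy's decomposition groups cases by the
# forecast value; reliability = sum_k (n_k / N) * (f_k - obar_k)^2,
# i.e. how far each call is from the observed rate of disease among the
# cases that received that call.
def reliability_error(forecast, observed):
    n = len(forecast)
    err = 0.0
    for f in np.unique(forecast):
        mask = forecast == f
        err += mask.sum() / n * (f - observed[mask].mean()) ** 2
    return err

print("reliability error:", reliability_error(trainee, consensus))
```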

3.
Agreement between raters for binary outcome data is typically assessed using the kappa coefficient. There has been considerable recent work extending logistic regression to provide summary estimates of interrater agreement adjusted for covariates predictive of the marginal probability of classification by each rater. We propose an estimating equations approach which can also be used to identify covariates predictive of kappa. Models may include an arbitrary and variable number of raters per subject and yet do not require any stringent parametric assumptions. Examples used to illustrate this procedure include an investigation of factors affecting agreement between primary and proxy respondents from a case-control study and a study of the effects of gender and zygosity on twin concordance for smoking history.

4.

Introduction

Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model.

Methods

A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations.

Results

In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified.

Conclusions

Routinely elevating the Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
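
A sketch of one cell of such a simulation, for a continuous-by-dichotomous interaction: power is estimated as the rejection rate of the interaction term at α = 0.05 and α = 0.10. The sample size, effect sizes and replication count are arbitrary choices, not the values used in the study.

```python
# Sketch of one cell of such a Monte Carlo study: power to detect a
# continuous-by-dichotomous interaction in OLS at two alpha levels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, reps, beta_int = 200, 2000, 0.15
pvals = np.empty(reps)
for r in range(reps):
    x = rng.normal(size=n)                  # continuous predictor
    g = rng.binomial(1, 0.5, size=n)        # dichotomous predictor
    y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.normal(size=n)
    X = sm.add_constant(np.column_stack([x, g, x * g]))
    pvals[r] = sm.OLS(y, X).fit().pvalues[3]   # p-value of the interaction term

for alpha in (0.05, 0.10):
    print(f"power at alpha={alpha}: {(pvals < alpha).mean():.2f}")
```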

5.
For a two-component system, a derivative that specifies the concentration-dependence of one chemical potential can be calculated from the corresponding derivative of the other chemical potential by applying the Gibbs-Duhem equation. To extend the practical utility of this binary thermodynamic linkage to systems having any number of components, we present a derivation based on a previously unrecognized recursive relationship. Thus, for each independently variable component, κ, any derivative of its chemical potential, μκ, with respect to one of the mole ratios {mκ ≡ nκ/nω} is related to a characteristic series of progressively higher order derivatives of μω for a single "probe" component, ω, with respect to certain of the {mκ}. For aqueous solutions in which ω is solvent water and one or more of the solutes (κ) is dilute, under typical conditions each sum of terms expressing a derivative of μκ consists of at most a few numerically significant contributions, which can be quantified, or at least estimated, by analyzing osmometric data to determine how the single chemical potential μω depends on the {mκ} without neglecting any significant contributions from the other components. Expressions derived here also will provide explicit criteria for testing various approximations built into alternative analytic strategies for quantifying derivatives that specify the {mκ} dependences of μκ for selected components. Certain quotients of these derivatives are of particular interest insofar as they gauge important thermodynamic effects due to "preferential interactions".
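
As a point of reference for the recursion described above, the two-component case follows directly from the Gibbs-Duhem equation at constant temperature and pressure; this is the standard textbook relation written in the κ/ω notation of the abstract, not the paper's multicomponent result.

```latex
% Binary Gibbs-Duhem linkage at constant T and p, in the kappa (solute) / omega (solvent) notation above.
\[
  n_\kappa\, d\mu_\kappa + n_\omega\, d\mu_\omega = 0
  \quad\Longrightarrow\quad
  \left(\frac{\partial \mu_\kappa}{\partial m_\kappa}\right)_{T,p}
  = -\,\frac{1}{m_\kappa}\left(\frac{\partial \mu_\omega}{\partial m_\kappa}\right)_{T,p},
  \qquad m_\kappa \equiv \frac{n_\kappa}{n_\omega}.
\]
```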

6.
Tom Greene. Biometrics, 2001, 57(2): 354-360
Treatments intended to slow the progression of chronic diseases are often hypothesized to reduce the rate of further injury to a biological system without improving the current level of functioning. In this situation, the treatment effect may be negligible for patients whose disease would have been stable without the treatment but would be expected to be an increasing function of the progression rate in patients with worsening disease. This article considers a variation of the Laird-Ware mixed effects model in which the effect of the treatment on the slope of a longitudinal outcome is assumed to be proportional to the progression rate for patients with progressive disease. Inference based on maximum likelihood and a generalized estimating equations procedure is considered. Under the proportional effect assumption, the precision of the estimated treatment effect can be increased by incorporating the functional relationship between the model parameters and the variance of the outcome variable, particularly when the magnitude of the mean slope of the outcome is small compared with the standard deviation of the slopes. An example from a study of chronic renal disease is used to illustrate insights provided by the proportional effect model that may be overlooked with models assuming additive treatment effects.

7.
Evolution of a multigene family of V kappa germ line genes
We have isolated a series of related V kappa germ line genes from a BALB/c sperm DNA library. DNA sequence analysis of four members of this V kappa 24 multigene family implies that three V kappa genes are functional whereas the fourth one (psi V kappa 24) is a pseudogene. The prototype gene (V kappa 24) encodes the variable region gene segment expressed in an immune response against phosphorylcholine. The other two functional genes (V kappa 24A and V kappa 24B) may be expressed against streptococcal group A carbohydrate. The time of divergence of the four genes was estimated by the rate of synonymous nucleotide changes. This implies that an ancestral gene duplicated approximately 33-35 million years ago and that a subsequent gene duplication event occurred approximately 23 million years ago.

8.
An IgA1 half-molecule, which is composed of a deleted alpha1 chain linked with a disulfide bond to an intact kappa chain, was detected in a patient (Cha). The molecular weights of the paraprotein and the isolated alpha1 chain were estimated to be 75 000 and 53 000, respectively. Identification of tyrosine as the C-terminal amino acid and the presence of idiotypic determinants in the abnormal alpha1 chain indicated that the molecule would have an intact N-terminal variable region and a C-terminal region. Furthermore, no cleavage of the abnormal protein into Fab and Fc by proteolytic enzyme isolated from Neisseria gonorrhoeae suggested the absence of a "hinge" region in the abnormal alpha1 chain.

9.
The reliability of binary assessments is often measured by the proportion of agreement above chance, as estimated by the kappa statistic. In this paper, we develop a model to estimate inter-rater and intra-rater reliability when each of the two observers has the opportunity to obtain a pair of replicate measurements on each subject. The model is analogous to the nested beta-binomial model proposed by Rosner (1989, 1992). We show that the gain in precision obtained from increasing the number of measurements per rater from one to two may allow fewer subjects to be included in the study with no net loss in efficiency for estimating the inter-rater reliability.

10.
11.
In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches.
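
To make the "reweighting" interpretation of the propensity-score approach concrete, here is a minimal inverse-probability-weighting sketch on simulated data; it is a generic illustration of the idea, not the breast cancer analysis described in the article.

```python
# Minimal inverse-probability-weighting sketch of the "reweighting" view of
# propensity-score adjustment.  Simulated data; not the article's analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(size=n)                                # confounder
p_treat = 1 / (1 + np.exp(-0.8 * x))                  # treatment depends on x
t = rng.binomial(1, p_treat)
y = 1.0 * t + 1.5 * x + rng.normal(size=n)            # true treatment effect = 1.0

ps = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
w = t / ps + (1 - t) / (1 - ps)                       # inverse-probability weights

# Weighted difference in means estimates the average treatment effect.
ate = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(ate)   # close to 1.0
```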

12.
The analysis of family-study data sometimes focuses on whether a dichotomous trait tends to cluster in families. For traits with variable age-at-onset, it may be of interest to investigate whether age-at-onset itself also exhibits familial clustering. A complication in such investigations is that censoring by age-at-ascertainment can induce artifactual familial correlation in the age-at-onset of affected members. A further complication can be that sample inclusion criteria involve the affection status of family members. The purpose here is to present an approach to testing for correlation that is not confounded by censoring by age-at-ascertainment and may be applied with a broad range of inclusion criteria. The approach involves regression statistics in which a subject's covariate terms are chosen to reflect age-at-onset information from the subject's affected family members. The results of analyses of data from a family-study of panic disorder illustrate the approach.

13.
OBJECTIVE: To determine the interobserver reliability of tympanograms obtained with the MicroTymp, a portable tympanometer. SETTING: Family medicine teaching unit in a tertiary care hospital. PATIENTS: Thirty-three patients who presented to the ear, nose and throat clinic in August 1990 for an ear problem. INTERVENTION: Three residents in family medicine independently attempted to record with the MicroTymp one tympanogram for the 66 ears. We excluded the results for seven ears for which tympanograms could not be obtained. MAIN OUTCOME MEASURE: Using objective criteria, two family physicians and two residents in family medicine independently classified the 177 tympanograms into five categories (normal, possible effusion, possible perforation, possible tympano-ossicular dysfunction and unclassifiable). Reliability was estimated by means of the kappa (κ) coefficient on 161 tympanograms from 59 ears for which the interpretation of the three tympanograms agreed. MAIN RESULTS: The interpretation of the three tympanograms agreed for 34 of the 59 ears (0.58) (κ = 0.52, 95% confidence limits 0.45 and 0.59). There was no significant difference in interobserver reliability between pairs of observers or between symptomatic and asymptomatic ears. CONCLUSIONS: The interobserver reliability of the MicroTymp is moderate. The tympanograms obtained with the instrument should be interpreted in the context of the clinical findings.

14.
The scientific and cost-effectiveness criteria introduced in this paper can be applied to published datasets and current and proposed batteries of short-term tests. The reports in the current volume will provide a wealth of additional material for such evaluations, but more systematically obtained information will be necessary to assess both the internal and external validity of these tests. Individual tests and batteries of tests should be standardized, employ positive controls, generate results capable of quantitative analyses that may make dichotomous classification as "positive" and "negative" obsolete, be interpreted in light of mechanisms of action, and be cost-effective on a grand scale. For regulatory purposes our long-term goal should be to replace the whole animal lifetime bioassay with an appropriate and cost-effective set of short-term tests.

15.
W. van der Loo, B. Verdoodt. Genetics, 1992, 132(4): 1105-1117
Population studies at the b-locus of the "constant" regions of the rabbit immunoglobulin kappa 1 light chain (c kappa 1) revealed patterns of gene diversity resembling those that mark the peculiar nature of the major histocompatibility complex, such as a large number of alleles, high heterozygosity levels, consistent excess of heterozygous individuals and long allele coalescence times. This paper documents the evolutionary patterns at the b-locus as inferred from DNA sequence comparisons. Among alleles, synonymous substitutions outnumbered expectations for neutral alleles by an order of magnitude. They were distributed randomly throughout the c kappa 1 coding region while interallelic amino acid differences did cluster into segments overlapping with the regions exposed to the solvent. Within these regions, acceptance rates of mutation at amino acid replacement sites were even higher than those at synonymous sites (dr/ds = 1.6-3.0), while in the intervals between these regions the opposite was found (dr/ds approximately 0.3). Under the assumption that allelic variation is adaptive at the molecular surface, the divergence patterns at the b-locus are therefore very similar to those reported for the major histocompatibility complex. An analysis at the quasi silent bas-locus (c kappa 2), which is linked to the b-locus, and comparisons among genes of the "variable" region of the kappa 1 light chains (v kappa 1), revealed patterns of divergence which differed markedly from those observed at the c kappa 1 constant regions. It is suggested that allelic variability at immunoglobulin constant regions can be due to mechanisms similar to those enhancing diversity at histocompatibility loci.

16.
Analysis of categorical outcomes in a longitudinal study has been an important statistical issue. Continuous outcome in a similar study design is commonly handled by the mixed effects model. The longitudinal binary or Poisson-like outcome analysis is often handled by the generalized estimating equation (GEE) method. Neither method is appropriate for analyzing a multinomial outcome in a longitudinal study, although the cross-sectional multinomial outcome is often analyzed by generalized linear models. One reason that these methods are not used is that the correlation structure of two multinomial variables cannot be easily specified. In addition, methods that rely upon GEE or mixed effects models are unsuitable in instances when the focus of a longitudinal study is on the rate of moving from one category to another. In this research, a longitudinal model that has three categories in the outcome variable will be examined. A continuous-time Markov chain model will be used to examine the transition from one category to another. This model permits an unbalanced number of measurements collected on individuals and an uneven duration between pairs of consecutive measurements. In this study, the explicit expression for the transition probability is derived that provides an algebraic form of the likelihood function and hence allows the implementation of the maximum likelihood method. Using this approach, the instantaneous transition rate that is assumed to be a function of the linear combination of independent variables can be estimated. For a comparison between two groups, the odds ratios of occurrence at a particular category and their confidence intervals can be calculated. Empirical studies will be performed to compare the goodness of fit of the proposed method with other available methods. An example will also be used to demonstrate the application of this method.
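
A minimal numerical sketch of the model class described here: a three-state continuous-time Markov chain whose transition probabilities over an interval t are P(t) = exp(Qt), with a log-likelihood built from observed state pairs at uneven time gaps. The generator matrix and the observations are hypothetical, and covariate effects on the rates are omitted.

```python
# Sketch: a three-state continuous-time Markov chain with transition
# probabilities P(t) = expm(Q t), and the log-likelihood of observed
# transitions at uneven time intervals.  Q and the data are hypothetical.
import numpy as np
from scipy.linalg import expm

# Generator matrix: rows sum to zero; off-diagonals are transition rates.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.15, -0.40,  0.25],
              [ 0.05,  0.10, -0.15]])

def loglik(Q, transitions):
    """transitions: list of (state_from, state_to, time_gap) tuples."""
    ll = 0.0
    for i, j, dt in transitions:
        P = expm(Q * dt)          # transition probability matrix over dt
        ll += np.log(P[i, j])
    return ll

obs = [(0, 1, 0.5), (1, 1, 1.2), (1, 2, 0.8), (2, 0, 2.0)]
print(expm(Q * 1.0))              # one-unit-time transition probabilities
print(loglik(Q, obs))
```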

17.
Mendelian Randomisation (MR) is a powerful tool in epidemiology that can be used to estimate the causal effect of an exposure on an outcome in the presence of unobserved confounding, by utilising genetic variants as instrumental variables (IVs) for the exposure. The effect estimates obtained from MR studies are often interpreted as the lifetime effect of the exposure in question. However, the causal effects of some exposures are thought to vary throughout an individual's lifetime with periods during which an exposure has a greater effect on a particular outcome. Multivariable MR (MVMR) is an extension of MR that allows for multiple, potentially highly related, exposures to be included in an MR estimation. MVMR estimates the direct effect of each exposure on the outcome conditional on all the other exposures included in the estimation. We explore the use of MVMR to estimate the direct effect of a single exposure at different time points in an individual's lifetime on an outcome. We use simulations to illustrate the interpretation of the results from such analyses and the key assumptions required. We show that causal effects at different time periods can be estimated through MVMR when the association between the genetic variants used as instruments and the exposure measured at those time periods varies. However, this estimation will not necessarily identify exact time periods over which an exposure has the most effect on the outcome. Prior knowledge regarding the biological basis of exposure trajectories can help interpretation. We illustrate the method through estimation of the causal effects of childhood and adult BMI on C-reactive protein and smoking behaviour.
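
A sketch of a standard multivariable IVW-type MVMR estimator on simulated summary statistics: SNP-outcome associations are regressed, without an intercept and weighted by their inverse variance, on the SNP associations with the exposure at two time points. This is a generic illustration of the estimator class, not the article's analysis; all summary statistics and effect sizes are invented.

```python
# Sketch of a multivariable IVW-style MVMR estimate: regress SNP-outcome
# associations on the SNP-exposure associations at two time points of one
# exposure, weighted by 1/se_outcome^2, with no intercept.  Simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
m = 100                                    # number of genetic instruments
bx_child = rng.normal(0, 0.05, m)          # SNP effects on childhood exposure
bx_adult = 0.4 * bx_child + rng.normal(0, 0.05, m)   # correlated adult effects
theta = np.array([0.2, 0.5])               # direct effects (childhood, adult)
se_y = np.full(m, 0.01)                    # outcome association standard errors
by = bx_child * theta[0] + bx_adult * theta[1] + rng.normal(0, se_y)

X = np.column_stack([bx_child, bx_adult])  # no constant -> no intercept
fit = sm.WLS(by, X, weights=1 / se_y**2).fit()
print(fit.params)                          # should be near [0.2, 0.5]
```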

18.
D. A. Roff 《Genetics》1994,136(1):395-401
Many traits vary in a dichotomous manner, although the underlying genetic determination is polygenic. The genetic basis of such dimorphic traits can be analyzed using the threshold model, in which it is assumed that there is a continuously distributed underlying character and the phenotype is determined by whether the character is above or below a threshold. Threshold traits frequently vary with environmental variables such as photoperiod, temperature and density. This effect can be accounted for using a threshold model in which (1) there is a critical value of the environmental variable at which a genotype switches to the alternate morph, and (2) switch (threshold) points are normally distributed in the population. I term this the environmental threshold (ET) model. I show that the ET model predicts that across environments differing in only one factor the genetic correlation will be 1. This prediction is supported by data from three wing dimorphic insects. Evidence is presented that the genetic correlation between environments differing in two components (temperature and photoperiod) is less than 1.
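
A toy simulation of the ET idea: switch points are drawn from a normal distribution, and an individual expresses the alternate morph when the environmental variable exceeds its switch point. The parameter values (a photoperiod-like variable) are arbitrary assumptions, not estimates from the paper.

```python
# Toy sketch of the environmental-threshold (ET) model: normally distributed
# switch points; the alternate morph is expressed when the environmental
# variable exceeds an individual's switch point.  Arbitrary parameter values.
import numpy as np

rng = np.random.default_rng(4)
n = 10000
thresholds = rng.normal(14.0, 1.0, n)      # switch points (e.g. photoperiod, h)

for photoperiod in (12.0, 13.5, 14.0, 14.5, 16.0):
    p_alt = (photoperiod > thresholds).mean()
    print(f"photoperiod {photoperiod} h: proportion of alternate morph = {p_alt:.2f}")
```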

19.

Background

Genomic selection can be implemented by a multi-step procedure, which requires a response variable and a statistical method. For pure-bred pigs, it was hypothesised that deregressed estimated breeding values (EBV) with the parent average removed as the response variable generate higher reliabilities of genomic breeding values than EBV, and that the normal, thick-tailed and mixture-distribution models yield similar reliabilities.

Methods

Reliabilities of genomic breeding values were estimated with EBV and deregressed EBV as response variables and under the three statistical methods, genomic BLUP, Bayesian Lasso and MIXTURE. The methods were examined by splitting data into a reference data set of 1375 genotyped animals that were performance tested before October 2008, and 536 genotyped validation animals that were performance tested after October 2008. The traits examined were daily gain and feed conversion ratio.

Results

Using deregressed EBV as the response variable yielded 18 to 39% higher reliabilities of the genomic breeding values than using EBV as the response variable. For daily gain, the increase in reliability due to deregression was significant and approximately 35%, whereas for feed conversion ratio it ranged between 18 and 39% and was significant only when MIXTURE was used. Genomic BLUP, Bayesian Lasso and MIXTURE had similar reliabilities.

Conclusions

Deregressed EBV is the preferred response variable, whereas the choice of statistical method is less critical for pure-bred pigs. The increase of 18 to 39% in reliability is worthwhile, since the reliabilities of the genomic breeding values directly affect the returns from genomic selection.
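
As a rough illustration of the genomic BLUP step only (not the deregression step or the Bayesian methods compared above), the sketch below fits SNP-BLUP, i.e. ridge regression of the response on centred genotypes, in a reference set and predicts genomic breeding values for a validation set. Genotypes, phenotypes and the ridge parameter are simulated or assumed; the split sizes merely mimic the abstract.

```python
# Sketch of the genomic BLUP step as ridge regression on SNP genotypes
# (SNP-BLUP), with a reference/validation split.  Simulated genotypes and
# phenotypes; not the article's deregressed-EBV pipeline.
import numpy as np

rng = np.random.default_rng(5)
n, p = 1375 + 536, 2000                        # animals, SNPs (sizes mimic the abstract)
Z = rng.binomial(2, 0.3, size=(n, p)).astype(float)
Z -= Z.mean(axis=0)                            # centre genotypes
true_effects = rng.normal(0, 0.05, p)
y = Z @ true_effects + rng.normal(0, 1.0, n)   # response (e.g. deregressed EBV)

ref, val = slice(0, 1375), slice(1375, n)
lam = 400.0                                    # ridge parameter = var_e / per-SNP effect variance
# Solve (Z'Z + lambda I) a = Z'y on the reference set.
a_hat = np.linalg.solve(Z[ref].T @ Z[ref] + lam * np.eye(p), Z[ref].T @ y[ref])

gebv = Z[val] @ a_hat                          # genomic breeding values, validation set
print(np.corrcoef(gebv, Z[val] @ true_effects)[0, 1])   # correlation with simulated true values
```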

20.
An attempt has been made to establish axiomatically the principles of biological classification. It is shown that if phylogenetic classification is based on the notion of dichotomous origin of new taxa implied in Hennig's theory of cladism then the outcome must be a hierarchy in the form of a dichotomous dendrogram. Since the rules of traditional classification do not lead to this type of "phylogenetic tree" it is concluded that the conventions of ordinary systematics do not permit the erection of a "natural system".
