Similar Articles
20 similar articles found.
1.
The role of the propensity score in estimating dose-response functions (total citations: 11; self-citations: 0)
Imbens GW. Biometrika 2000;87(3):706-710.

3.

Background  

Experimentally determined protein structures may contain errors and require validation. Conformational criteria based on the Ramachandran plot are mainly used to distinguish between distorted and adequately refined models. While the readily available criteria are sufficient to detect grossly wrong structures, establishing the more subtle differences between plausible structures remains challenging.

4.
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified.
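A minimal sketch of the stratification idea described above, under hypothetical column names (test, verified, disease, x1, x2): fit a verification model separately within each test result, bin verified subjects by propensity quantile, estimate disease prevalence per stratum, and convert the stratified P(disease | test) estimates into sensitivity and specificity. This illustrates the general approach, not the authors' exact estimator.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def verification_corrected_sens_spec(df, n_strata=5):
    """Illustrative propensity-stratified correction for verification bias.

    Hypothetical df columns: 'test' (0/1 result), 'verified' (0/1),
    'disease' (0/1, NaN when unverified), covariates 'x1', 'x2'.
    """
    p_disease_given_test = {}
    for t in (0, 1):  # fit the verification model separately per test result
        sub = df[df.test == t].copy()
        ps_model = LogisticRegression().fit(sub[['x1', 'x2']], sub.verified)
        sub['ps'] = ps_model.predict_proba(sub[['x1', 'x2']])[:, 1]
        sub['stratum'] = pd.qcut(sub.ps, n_strata, labels=False,
                                 duplicates='drop')
        num = den = 0.0
        for _, s in sub.groupby('stratum'):
            ver = s[s.verified == 1]
            if len(ver) == 0:
                continue
            # verified subjects stand in for the whole stratum (MAR)
            num += ver.disease.mean() * len(s)
            den += len(s)
        p_disease_given_test[t] = num / den
    # convert to sensitivity/specificity via Bayes' rule
    p_t1 = (df.test == 1).mean()
    p_dis = (p_disease_given_test[1] * p_t1
             + p_disease_given_test[0] * (1 - p_t1))
    sens = p_disease_given_test[1] * p_t1 / p_dis
    spec = (1 - p_disease_given_test[0]) * (1 - p_t1) / (1 - p_dis)
    return sens, spec
```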

5.
Published justifications for weighting characters in parsimony analyses vary tremendously. Some authors argue for weighting a posteriori, some for a priori, and authors who rely on a falsificationist approach to systematics in particular argue against weighting. To decide among these positions within a falsificationist framework, one must first investigate the conditions necessary for phylogenetic research to qualify as an empirical science sensu Popper. A concept of phylogenetic homology together with the criterion of identity is proposed, which refers to the genealogical relations between individual organisms. From this concept a differentiation of the terms character and character state is proposed, defining each character as a single epistemological argument for the reconstruction of a unique transformation event. Synapomorphy is distinguished from homology by referring to the relationship between species instead of individual organisms; thus the set of all synapomorphies constitutes a subset of the set of all homologies. By examining the structure of characteristics during character analysis and hypothesizing specific types of transformations responsible for having caused them, a specific degree of severity is assigned to each identity test, which in turn provides a specific degree of corroboration for every hypothesis that successfully passes the test. Since the congruence criterion tests hypotheses of synapomorphy against each other on the grounds of their degree of corroboration gained from the identity test, these different degrees of corroboration determine the specific weights given to characters and character-state transformations before the cladistic analysis. This provides a reasoned justification for an a priori weighting scheme within a falsificationist approach to phylogeny, and demonstrates that its application is indispensable.

6.

Context

Establishing the long-term benefit of therapy in chronic diseases has been challenging. Long-term studies require non-randomized designs and are therefore often confounded by biases. For example, although disease-modifying therapy in MS has a convincing benefit on several short-term outcome measures in randomized trials, its impact on long-term function remains uncertain.

Objective

Data from the 16-year Long-Term Follow-up study of interferon-beta-1b were used to assess the relationship between drug exposure and long-term disability in MS patients.

Design/Setting

To mitigate the bias of outcome-dependent exposure variation in non-randomized long-term studies, drug exposure was measured as the medication possession ratio (MPR), adjusted up or down according to multiple different weighting schemes based on MS severity and MS duration at treatment initiation. A recursive-partitioning algorithm assessed whether exposure (under any weighting scheme) affected long-term outcome, and it chose the optimal cut-point used to define the "high" and "low" exposure groups. After verifying an exposure impact in a model that included all predictor variables, the two groups were compared using a weighted propensity-stratified analysis to mitigate any treatment-selection bias that may have been present. Finally, multiple sensitivity analyses were undertaken using different definitions of long-term outcome and different assumptions about the data.
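To make the weighting idea concrete, here is a minimal sketch of one candidate down-weighting scheme. The functional form and constants (a half-life on disease duration, a linear EDSS penalty) are invented for illustration; the study compared many candidate schemes and let the recursive-partitioning algorithm pick among them.

```python
import numpy as np

def weighted_mpr(days_on_drug, days_observed, duration_yrs, edss_at_start,
                 duration_halflife=10.0, edss_scale=6.0):
    """Illustrative weighted medication possession ratio (MPR).

    The raw MPR (days on drug / days observed) is down-weighted as disease
    duration or baseline disability (EDSS) at treatment onset increases.
    All weighting constants here are hypothetical.
    """
    raw_mpr = days_on_drug / days_observed
    w_duration = np.exp(-np.log(2) * duration_yrs / duration_halflife)
    w_edss = max(0.0, 1.0 - edss_at_start / edss_scale)
    return raw_mpr * w_duration * w_edss

# e.g. a patient on drug 80% of follow-up, 5 years into disease, EDSS 2.0:
# weighted_mpr(0.8 * 3650, 3650, 5, 2.0) -> ~0.38
```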

Main Outcome Measure

Long-Term Disability.

Results

In these analyses, the same weighting scheme was consistently selected by the recursive-partitioning algorithm. This scheme down-weighted the effectiveness of drug exposure as either disease duration or disability at treatment onset increased. Applying this scheme and using propensity stratification to further mitigate bias, the high-exposure group had consistently better clinical outcomes than the low-exposure group (Cox proportional hazard ratio = 0.30–0.42; p<0.0001).

Conclusions

Early initiation and sustained use of interferon-beta-1b have a beneficial impact on long-term outcome in MS. Our analysis strategy provides a methodological framework for bias mitigation in the analysis of non-randomized clinical data.

Trial Registration

Clinicaltrials.gov NCT00206635

8.

Background

Long-acting beta-agonists are among the first-choice bronchodilators for stable chronic obstructive pulmonary disease (COPD), but their impact on mortality has not been well investigated.

Methods

The National Emphysema Treatment Trial provided the data. Patients with severe and very severe stable COPD who were eligible for volume reduction surgery were recruited at 17 clinical centers in the United States during 1988–2002. We used the 6–10 year follow-up data of patients randomized to non-surgical treatment. Hazard ratios for death associated with long-acting beta-agonist use were estimated with three Cox proportional hazards models: a multivariate model in the full cohort, and univariate and multivariate models in a propensity-score-matched cohort.
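A minimal sketch of the propensity-matched Cox analysis, with hypothetical column names ('laba', 'time', 'death') and a standard greedy 1:1 nearest-neighbor matcher on the logit propensity scale; this illustrates the general workflow, not the study's exact matching procedure.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def match_and_fit(df, covars, caliper=0.2):
    """Illustrative 1:1 nearest-neighbor propensity matching plus Cox model.

    Hypothetical df columns: 'laba' (0/1 treatment), 'time' (follow-up),
    'death' (0/1 event), and baseline covariates listed in `covars`.
    """
    model = LogisticRegression(max_iter=1000).fit(df[covars], df.laba)
    p = model.predict_proba(df[covars])[:, 1]       # propensity of treatment
    df = df.assign(logit_ps=np.log(p / (1 - p)))
    cal = caliper * df.logit_ps.std()               # caliper on logit scale
    controls = df[df.laba == 0].copy()
    pairs = []
    for idx, t in df[df.laba == 1].iterrows():      # greedy, no replacement
        d = (controls.logit_ps - t.logit_ps).abs()
        if len(d) and d.min() <= cal:
            j = d.idxmin()
            pairs += [idx, j]
            controls = controls.drop(j)
    matched = df.loc[pairs]
    cph = CoxPHFitter().fit(matched[['time', 'death', 'laba']],
                            duration_col='time', event_col='death')
    return cph.hazard_ratios_['laba']               # HR for treatment
```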

Results

The pre-matching cohort comprised 591 patients (50.6% administered long-acting beta-agonists; age 66.6 ± 5.3 years; 35.4% female; forced expiratory volume in one second 26.7 ± 7.1% predicted; mortality during follow-up 70.2%). The hazard ratio from the multivariate Cox model in the pre-matching cohort was 0.77 (P = 0.010). Propensity score matching was then conducted (C-statistic 0.62; no parameter differed between the matched cohorts). The propensity-matched cohort comprised 492 patients (50.0% administered long-acting beta-agonists; age 66.8 ± 5.1 years; 34.8% female; forced expiratory volume in one second 26.5 ± 6.8% predicted; mortality during follow-up 69.1%). The hazard ratio from the univariate Cox model in the propensity-matched cohort was 0.77 (P = 0.017), and from the multivariate Cox model 0.76 (P = 0.011).

Conclusions

Long-acting beta-agonists reduce mortality in patients with severe and very severe COPD.

9.
Trimming a DNA strand into a precisely determined fragment can be carried out efficiently by an improved method involving a site-specific trim-primer and a single-stranded DNA template which is generated from a multifunctional vector, pTZ18R, and linearized using an EcoRI-pTZ18R splinter. A complementary DNA strand is synthesized by DNA polymerase I (Klenow), and the 3'-end of the template upstream from the annealed primer is trimmed by a subsequent T4 DNA polymerase reaction. An ATG translation initiation codon or a termination codon can be incorporated into the trim-primer, providing versatility to this single-stranded DNA-initiated gene trimming method, which can be applied to subcloning and expression of any DNA fragment with known terminal sequences.

13.
The occurrence of a G-triplex folding intermediate of thrombin binding aptamer (TBA) has recently been predicted by metadynamics calculations and experimentally supported by Nuclear Magnetic Resonance (NMR), Circular Dichroism (CD) and Differential Scanning Calorimetry (DSC) data collected on a 3′ end TBA-truncated 11-mer oligonucleotide (11-mer-3′-t-TBA). Here we present the solution structure of 11-mer-3′-t-TBA in the presence of potassium ions. This structure is the first experimental example of a G-triplex fold, in which a network of Hoogsteen-like hydrogen bonds stabilizes six guanines to form two G:G:G triad planes. The G-triplex folding of 11-mer-3′-t-TBA is stabilized by the potassium ion and destabilized by increasing temperature. Superimposition of the experimental structure with that predicted by metadynamics shows great similarity, with the only significant differences involving two loops. These new structural data show that 11-mer-3′-t-TBA assumes a G-triplex DNA conformation as its stable form, reinforcing the idea that G-triplex folding intermediates may occur in vivo in human guanine-rich sequences. NMR and CD screening of eight different constructs obtained by removing one to four bases from either the 3′ or the 5′ end shows that only 11-mer-3′-t-TBA yields a relatively stable G-triplex.

14.
Seaman SR, White IR, Copas AJ, Li L. Biometrics 2012;68(1):129-137.
Two approaches commonly used to deal with missing data are multiple imputation (MI) and inverse-probability weighting (IPW). IPW is also used to adjust for unequal sampling fractions. MI is generally more efficient than IPW but more complex. Whereas IPW requires only a model for the probability that an individual has complete data (a univariate outcome), MI needs a model for the joint distribution of the missing data (a multivariate outcome) given the observed data. Inadequacies in either model may lead to important bias if large amounts of data are missing. A third approach combines MI and IPW to give a doubly robust estimator. A fourth approach (IPW/MI) combines MI and IPW but, unlike doubly robust methods, imputes only isolated missing values and uses weights to account for remaining larger blocks of unimputed missing data, such as would arise, e.g., in a cohort study subject to sample attrition, and/or unequal sampling fractions. In this article, we examine the performance, in terms of bias and efficiency, of IPW/MI relative to MI and IPW alone and investigate whether the Rubin's rules variance estimator is valid for IPW/MI. We prove that the Rubin's rules variance estimator is valid for IPW/MI for linear regression with an imputed outcome, we present simulations supporting the use of this variance estimator in more general settings, and we demonstrate that IPW/MI can have advantages over alternatives. IPW/MI is applied to data from the National Child Development Study.
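A minimal sketch of the IPW/MI idea under invented variable names: weight retained subjects by the inverse probability of remaining in the cohort, multiply-impute the sporadic missing covariate values among them, fit the weighted analysis model in each imputed dataset, and pool with Rubin's rules. This is a generic illustration of the combined approach, not the paper's exact estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICEData

def ipw_mi_estimate(df, m=5):
    """Illustrative IPW/MI for a regression coefficient.

    Hypothetical df columns: 'y' (outcome, observed for retained subjects),
    'x' (covariate with sporadic missingness), 'z' (always observed),
    'observed' (1 if the subject stayed in the cohort, 0 if attrited).
    """
    # 1) IPW part: model the probability of remaining in the cohort
    resp = sm.Logit(df.observed, sm.add_constant(df[['z']])).fit(disp=0)
    w = 1.0 / resp.predict(sm.add_constant(df[['z']]))
    kept = df[df.observed == 1].copy()
    kept['w'] = w[df.observed == 1]

    # 2) MI part: impute isolated missing x among retained subjects
    estimates, variances = [], []
    imp = MICEData(kept[['y', 'x', 'z']])
    for _ in range(m):
        imp.update_all()                    # one round of chained imputation
        d = imp.data
        fit = sm.WLS(d.y, sm.add_constant(d[['x', 'z']]),
                     weights=kept.w.values).fit()
        estimates.append(fit.params['x'])
        variances.append(fit.bse['x'] ** 2)

    # 3) Pool point estimate and variance with Rubin's rules
    qbar = np.mean(estimates)
    within = np.mean(variances)
    between = np.var(estimates, ddof=1)
    total_var = within + (1 + 1 / m) * between
    return qbar, np.sqrt(total_var)
```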

15.
BAC trimming: minimizing clone overlaps (total citations: 2; self-citations: 0)
Bacterial vectors containing large inserts of genomic DNA are now the standard substrates for large-scale genomic sequencing. Long overlaps between some clones lead to considerable redundant effort. A method for deleting defined regions from bacterial artificial chromosome (BAC) inserts, using homologous recombination, was applied to minimize the overlap between successive BAC clones. This procedure, called trimming, was carried out in the recA(-) BAC host. We have precisely deleted up to 70 kb of DNA from BACs that were to be sequenced. This method requires minimal prior characterization of the clones: collections of BAC end sequences or STS-based maps will accelerate the process. BAC trimming will be useful in both small and large genome sequencing projects and will be of particular utility for gap closure in finishing phases.

16.
Adams RP. Brittonia 1975;27(4):305-316.
Terpenoid data from seven species of Juniperus were used to examine: (1) the effect of using different character weights upon the same set of OTUs; (2) the effect of the organization...

19.
MOTIVATION: Most sequence comparison methods assume that the data being compared are trustworthy, but this is not the case with raw DNA sequences obtained from automatic sequencing machines. Nevertheless, sequence comparisons need to be done on them in order to remove vector splice sites and contaminants. This step is necessary before other genomic data processing stages can be carried out, such as fragment assembly or EST clustering. A specialized tool is therefore needed to solve this apparent dilemma. RESULTS: We have designed and implemented a program that specifically addresses the problem. This program, called LUCY, has been in use since 1998 at The Institute for Genomic Research (TIGR). During this period, many rounds of experience-driven modifications were made to LUCY to improve its accuracy and its ability to deal with extremely difficult input cases. We believe we have finally obtained a useful program which strikes a delicate balance among the many issues involved in the raw sequence cleaning problem, and we wish to share it with the research community. AVAILABILITY: LUCY is available directly from TIGR (http://www.tigr.org/softlab). Academic users can download LUCY after accepting a free academic use license. Business users may need to pay a license fee to use LUCY for commercial purposes. CONTACT: Questions regarding the quality assessment module of LUCY should be directed to Michael Holmes (mholmes@tigr.org). Questions regarding other aspects of LUCY should be directed to Hui-Hsien Chou (hhchou@iastate.edu).
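The abstract does not spell out LUCY's algorithm, but a core ingredient of raw-sequence cleaning is sliding-window quality trimming. Below is a generic sketch of that ingredient, assuming phred-style quality scores; it is an illustration of the technique, not LUCY's actual implementation.

```python
def quality_trim(seq, quals, window=10, min_avg_q=20):
    """Illustrative sliding-window quality trim of a raw read.

    Keeps the region covered by at least one `window`-sized slice whose
    mean phred quality is >= min_avg_q; everything outside is trimmed.
    Generic sketch only, not LUCY's algorithm.
    """
    n = len(seq)
    ok = [False] * n
    for i in range(n - window + 1):
        if sum(quals[i:i + window]) / window >= min_avg_q:
            for j in range(i, i + window):
                ok[j] = True            # base lies in a good-quality window
    if not any(ok):
        return ""
    start = ok.index(True)
    end = n - 1 - ok[::-1].index(True)
    return seq[start:end + 1]

# e.g. quality_trim("ACGTACGTACGTAA", [30] * 8 + [2] * 6) keeps the
# high-quality 5' portion and trims the low-quality 3' tail.
```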

