Similar Literature
20 similar documents retrieved.
1.
Suppose we are interested in the effect of a treatment in a clinical trial. The efficiency of inference may be limited due to small sample size. However, external control data are often available from historical studies. Motivated by an application to Helicobacter pylori infection, we show how to borrow strength from such data to improve the efficiency of inference in the clinical trial. Under an exchangeability assumption about the potential outcome mean, we show that the semiparametric efficiency bound for estimating the average treatment effect can be reduced by incorporating both the clinical trial data and the external controls. We then derive a doubly robust and locally efficient estimator. The improvement in efficiency is especially prominent when the external control data set has a large sample size and small variability. Our method allows for a relaxed overlap assumption, and we illustrate it with the case where the clinical trial contains only a treated group. We also develop doubly robust and locally efficient approaches that extrapolate the causal effect in the clinical trial to the external population and the overall population. Our results also offer meaningful implications for trial design and data collection. We evaluate the finite-sample performance of the proposed estimators via simulation. In the Helicobacter pylori infection application, our approach shows that the combination treatment has potential efficacy advantages over the triple therapy.
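As a point of reference for the doubly robust estimation mentioned above, the sketch below implements a standard augmented inverse-probability-weighted (AIPW) estimator of the average treatment effect on synthetic trial data alone; the data-generating step, variable names, and fitted models are illustrative assumptions, not the estimator derived in the paper (which additionally borrows the external controls).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Illustrative synthetic data (not from the paper): X covariates, A treatment, Y outcome.
rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))        # treatment assignment
Y = 1.0 * A + X.sum(axis=1) + rng.normal(size=n)       # outcome with true effect 1.0

# Propensity score model e(X) = P(A = 1 | X).
e = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]

# Outcome regressions m1(X), m0(X) fitted separately in each arm.
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# AIPW estimator: consistent if either the propensity or the outcome model is correct.
ate = np.mean(A * (Y - m1) / e - (1 - A) * (Y - m0) / (1 - e) + m1 - m0)
print(f"AIPW estimate of the ATE: {ate:.3f}")
```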

2.
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this article, we present several models that allow the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen.
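The "traditional power prior approach" that the adaptive borrowing elaborates can be written compactly. The display below is the generic textbook form with a fixed discounting parameter a_0, shown only for orientation; the commensurate extension developed in the article places further structure on how the degree of borrowing is learned from the data.

```latex
% Power prior built from historical data D_0 and an initial prior \pi_0;
% a_0 \in [0,1] discounts the historical likelihood (a_0 = 0 ignores D_0, a_0 = 1 pools fully).
\pi(\theta \mid D_0, a_0) \;\propto\; L(\theta \mid D_0)^{a_0}\,\pi_0(\theta)
\qquad\Longrightarrow\qquad
\pi(\theta \mid D, D_0, a_0) \;\propto\; L(\theta \mid D)\, L(\theta \mid D_0)^{a_0}\,\pi_0(\theta)
```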

3.
BACKGROUND: The recent development of semiautomated techniques for staining and analyzing flow cytometry samples has presented new challenges. Quality control and quality assessment are critical when developing new high throughput technologies and their associated information services. Our experience suggests that significant bottlenecks remain in the development of high throughput flow cytometry methods for data analysis and display. In particular, data quality control and quality assessment are crucial steps in processing and analyzing high throughput flow cytometry data. METHODS: We propose a variety of graphical exploratory data analytic tools for exploring ungated flow cytometry data. We have implemented a number of specialized functions and methods in the Bioconductor package rflowcyt. We demonstrate the use of these approaches by investigating two independent sets of high throughput flow cytometry data. RESULTS: We found that graphical representations can reveal substantial nonbiological differences in samples. Empirical cumulative distribution function plots and summary scatterplots were especially useful in the rapid identification of problems not identified by manual review. CONCLUSIONS: Graphical exploratory data analytic tools are quick and useful means of assessing data quality. We propose that the described visualizations should be used as quality assessment tools and, where possible, for quality control.
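rflowcyt is an R/Bioconductor package, so the following is not its API; it is only a minimal sketch of the ECDF-overlay idea in Python on synthetic intensities, with one deliberately shifted sample standing in for a nonbiological (technical) difference.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic, forward-scatter-like intensities for three hypothetical samples;
# the shifted third sample mimics a technical artefact rather than biology.
rng = np.random.default_rng(1)
samples = {
    "plate1_A01": rng.lognormal(5.0, 0.4, 10_000),
    "plate1_A02": rng.lognormal(5.0, 0.4, 10_000),
    "plate2_H12": rng.lognormal(5.6, 0.4, 10_000),   # suspect sample
}

for name, x in samples.items():
    xs = np.sort(x)
    ecdf = np.arange(1, len(xs) + 1) / len(xs)       # empirical CDF
    plt.step(xs, ecdf, where="post", label=name)

plt.xscale("log")
plt.xlabel("intensity (arbitrary units)")
plt.ylabel("ECDF")
plt.legend()
plt.title("ECDF overlay as a quick quality-assessment view")
plt.show()
```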

4.
Zigler CM, Belin TR. Biometrics 2012; 68(3): 922-932.
The literature on potential outcomes has shown that traditional methods for characterizing surrogate endpoints in clinical trials based only on observed quantities can fail to capture causal relationships between treatments, surrogates, and outcomes. Building on the potential-outcomes formulation of a principal surrogate, we introduce a Bayesian method to estimate the causal effect predictiveness (CEP) surface and quantify a candidate surrogate's utility for reliably predicting clinical outcomes. In considering the full joint distribution of all potentially observable quantities, our Bayesian approach has the following features. First, our approach illuminates implicit assumptions embedded in previously used estimation strategies that have been shown to result in poor performance. Second, our approach provides tools for making explicit and scientifically interpretable assumptions regarding associations about which the observed data are not informative. Through simulations based on an HIV vaccine trial, we found that the Bayesian approach can produce estimates of the CEP surface with improved performance compared to previous methods. Third, our approach can extend principal-surrogate estimation beyond the previously considered setting of a vaccine trial where the candidate surrogate is constant in one arm of the study. We illustrate this extension through an application to an AIDS therapy trial where the candidate surrogate varies in both treatment arms.

5.
Transcript assembly is a key step in studying the transcriptome with second-generation sequencing technology; its quality directly affects the reliability of downstream results, and it remains both a research hotspot and a difficult problem. Transcript assembly methods fall into two classes, genome-guided and de novo, each with its own strengths and weaknesses in theoretical basis and algorithmic implementation. The quality of a transcript assembly depends on factors such as the PCR amplification error rate, the accuracy of second-generation sequencing, the assembly algorithm, and the completeness of the reference genome, and existing algorithms cannot yet fully compensate for the effects of these factors. This article discusses transcript assembly methods and software, the factors affecting assembly quality, and metrics for evaluating assembly quality, with the aim of guiding bench biologists in choosing analysis software.

6.
Taylor L, Zhou XH. Biometrics 2009; 65(1): 88-95.
Randomized clinical trials are a powerful tool for investigating causal treatment effects, but in human trials there are often problems of noncompliance, which standard analyses, such as the intention-to-treat or as-treated analysis, either ignore or incorporate in such a way that the resulting estimand is no longer a causal effect. One alternative to these analyses is the complier average causal effect (CACE), which estimates the average causal treatment effect among the subpopulation that would comply under any treatment assigned. We focus on the setting of a randomized clinical trial with crossover treatment noncompliance (e.g., control subjects could receive the intervention and intervention subjects could receive the control) and outcome nonresponse. In this article, we develop estimators for the CACE using multiple imputation methods, which have been successfully applied to a wide variety of missing data problems but have not yet been applied to the potential outcomes setting of causal inference. Using simulated data, we investigate the finite sample properties of these estimators as well as of competing procedures in a simple setting. Finally, we illustrate our methods using a real randomized encouragement design study on the effectiveness of the influenza vaccine.
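The multiple-imputation estimators developed in the article are not reproduced here. For orientation only, the sketch below shows the classical moment (instrumental-variable) estimator of the CACE under monotonicity and the exclusion restriction, on fully observed synthetic data with illustrative variable names.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
Z = rng.binomial(1, 0.5, n)                                # randomized assignment
complier = rng.binomial(1, 0.6, n)                         # latent complier status
D = np.where(complier == 1, Z, rng.binomial(1, 0.3, n))    # treatment actually received
Y = 2.0 * D * complier + rng.normal(size=n)                # effect of 2.0 among compliers only

# Moment (IV) estimator: ITT effect on the outcome divided by ITT effect on receipt.
itt_y = Y[Z == 1].mean() - Y[Z == 0].mean()
itt_d = D[Z == 1].mean() - D[Z == 0].mean()
cace = itt_y / itt_d
print(f"CACE estimate: {cace:.3f}")   # should be close to 2.0
```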

7.
Hongwei Zhao, Lili Tian. Biometrics 2001; 57(4): 1002-1008.
Medical cost estimation is very important to health care organizations and health policy makers. We consider cost-effectiveness analysis for competing treatments in a staggered-entry, survival-analysis-based clinical trial. We propose a method for estimating mean medical cost over patients in such settings. The proposed estimator is shown to be consistent and asymptotically normal, and its asymptotic variance can be obtained. In addition, we propose a method for estimating the incremental cost-effectiveness ratio and for obtaining a confidence interval for it. Simulation experiments are conducted to evaluate our proposed methods. Finally, we apply our methods to a clinical trial comparing the cost-effectiveness of implanted cardiac defibrillators with conventional therapy for individuals at high risk for ventricular arrhythmias.
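The paper's estimator is built for censored, staggered-entry cost data; the sketch below only illustrates the basic ICER definition and a percentile-bootstrap confidence interval on uncensored synthetic data with made-up cost and effectiveness values, not the variance formula derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-patient (cost, effectiveness) values for two arms -- illustrative only.
cost_new, eff_new = rng.normal(40_000, 8_000, 200), rng.normal(3.2, 1.0, 200)
cost_std, eff_std = rng.normal(25_000, 6_000, 200), rng.normal(2.5, 1.0, 200)

def icer(c1, e1, c0, e0):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effectiveness."""
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

point = icer(cost_new, eff_new, cost_std, eff_std)

# Percentile bootstrap CI, resampling patients within each arm.
boot = []
for _ in range(2000):
    i1 = rng.integers(0, len(cost_new), len(cost_new))
    i0 = rng.integers(0, len(cost_std), len(cost_std))
    boot.append(icer(cost_new[i1], eff_new[i1], cost_std[i0], eff_std[i0]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"ICER = {point:.0f} per unit of effectiveness, 95% bootstrap CI ({lo:.0f}, {hi:.0f})")
```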

8.
9.
Recently, in order to accelerate drug development, trials that use adaptive seamless designs, such as phase II/III clinical trials, have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and, after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control without an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent, and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.

10.
Identification of large proteomics data sets is routinely performed using sophisticated software tools called search engines. Yet despite the importance of the identification process, its configuration and execution are often performed according to established lab habits and are mostly unsupervised by detailed quality control. In order to establish easily obtainable quality control criteria that can be broadly applied to the identification process, we here introduce several simple quality control methods. An unbiased quality control of the identification parameters is conducted using target/decoy searches, providing significant improvement over identification standards. MASCOT identifications were, for instance, increased by 13% at a constant level of confidence. The target/decoy approach cannot, however, be universally applied. We therefore also quality control the application of this strategy itself, providing useful and intuitive metrics for evaluating the precision and robustness of the obtained false discovery rate.
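The abstract does not spell out the proposed quality-control metrics, so the sketch below only shows the baseline target/decoy false-discovery-rate estimate that the strategy builds on, under one common convention (separate decoy search, FDR estimated as decoys over targets above a score threshold); the scores are simulated, not real search-engine output.

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative search-engine scores: target matches are a mix of true and false hits;
# decoy matches approximate the false-hit score distribution.
target_scores = np.concatenate([rng.normal(40, 8, 3000), rng.normal(20, 6, 2000)])
decoy_scores = rng.normal(20, 6, 2000)

def fdr_at_threshold(t):
    """Estimated FDR among target matches scoring >= t: #decoys / #targets above t."""
    n_targets = (target_scores >= t).sum()
    n_decoys = (decoy_scores >= t).sum()
    return n_decoys / max(n_targets, 1)

# Lowest score cut-off on a coarse grid that keeps the estimated FDR at or below 1%.
grid = np.linspace(decoy_scores.min(), target_scores.max(), 500)
qualifying = [t for t in grid if fdr_at_threshold(t) <= 0.01]
if qualifying:
    print(f"score cut-off for an estimated 1% FDR: {min(qualifying):.1f}")
else:
    print("no threshold reaches an estimated 1% FDR")
```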

11.
Wang H, Zhao H. Biometrics 2006; 62(2): 570-575.
With medical costs escalating over recent years, cost analysis is increasingly being conducted to assess the economic impact of new treatment options. An incremental cost-effectiveness ratio (ICER) is a measure that assesses the additional cost of a new treatment for each additional unit of effectiveness, such as saving one year of life. In this article, we consider cost-effectiveness analysis for new treatments evaluated in a randomized clinical trial setting with staggered entries. In particular, the censoring times are different for cost and survival data. We propose a method for estimating the ICER and obtaining its confidence interval when differential censoring exists. Simulation experiments are conducted to evaluate our proposed method. We also apply our methods to a clinical trial example comparing the cost-effectiveness of implanted defibrillators with conventional therapy for individuals with reduced left ventricular function after myocardial infarction.

12.
Meyer HE, Stühler K. Proteomics 2007; 7(Z1): 18-26.
Biomarkers allowing early detection of disease or therapy monitoring have a major influence on curing a disease. A wide variety of methods has been applied to find new biomarkers. In contrast to methods focused on DNA or mRNA techniques, approaches considering proteins as potential biomarker candidates have the advantage that proteins are more diverse than DNA or RNA and are more reflective of a biological system. Here, we present an approach to the identification of new biomarkers relying on our experience from the past 10 years of proteomics, outlining a concept of "high-performance proteomics." This approach is based on quantitative proteome analysis using a sufficient number of clinical samples and statistical validation of the proteomics data by independent methods, such as Western blot analysis or immunohistochemistry.

13.
Leichert LI. Proteomics 2011; 11(15): 3023-3035.
Protein quality control is an essential process in all living organisms. A network of folding helper proteins and proteases ushers proteins into their native conformation, safeguards their structure under adverse environmental conditions, and, if all else fails, degrades proteins at the end of their lifetime. Escherichia coli is a versatile model organism used in the analysis of fundamental cellular processes. Much of what we know about protein quality control has been discovered in this microorganism. In the investigation of the mode of action, regulation and substrate specificity of chaperones, thiol-disulfide isomerases and proteases, proteomic methods have played a key role. Here, we provide a condensed overview of the protein quality control network in E. coli and the remarkable contributions of proteomics to our current knowledge.

14.
BACKGROUND: JACIE Standards (FACT Standards in the USA) have been implemented in Europe since 1999. An on-site accreditation inspection took place at our center in January 2004. The purpose of this work was to develop a real-time process/quality control system meeting the JACIE Standards for HPC release. METHODS: Data from 194 HPC processing procedures for autologous transplantation performed over a 5-year period were analyzed. The results of the different processing methods applied at our facility were compared: (1) cryopreservation without washing cells (n=50), (2) washing cells (n=87), (3) cell-density separation (n=12) and (4) positive CD34 selection (n=45). RESULTS: Four critical control points were set for the validation of HPC processing: (a) the number of CD34(+) cells lost during processing, (b) contamination, (c) viability of the cells after thawing and (d) ability to reconstitute hematopoiesis after transplantation. On the basis of statistical analysis, ranges of acceptable values were defined for each critical control point and each processing method. Those acceptable values were used for cell release and real-time quality control. DISCUSSION: This study describes a model for the validation of HPC processing and for a real-time process/quality control system for HPC release. Optimization of processing techniques, standardization of methods and comparison between facilities will open the way towards external quality controls and quality improvement.
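The abstract does not state which statistical rule defined the acceptable ranges. Purely as an illustration of real-time release checking, the sketch below assumes a simple mean ± 2 SD tolerance band per critical control point and processing method; the viability numbers are made up.

```python
import numpy as np

# Hypothetical historical values of one critical control point
# (e.g. post-thaw viability in %) for one processing method -- made-up numbers.
historical_viability = np.array([92, 88, 95, 90, 87, 93, 91, 89, 94, 86], dtype=float)

mean, sd = historical_viability.mean(), historical_viability.std(ddof=1)
lower, upper = mean - 2 * sd, mean + 2 * sd      # assumed mean +/- 2 SD acceptance range

def release_check(new_value):
    """Flag a procedure whose control-point value falls outside the validated range."""
    ok = lower <= new_value <= upper
    return "pass" if ok else "investigate before release"

print(f"acceptance range: {lower:.1f} - {upper:.1f} %")
print("new procedure at 78 %:", release_check(78.0))
```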

15.
With the Quality Control Service (QCS) for blood coagulation, a system for the statistical quality control of blood coagulation methods is presented. The system is based on the universal control plasma PreciClot, which carries target values in the normal and abnormal range. Because the control plasma is used daily by the participants for quality control exercises and the data are statistically analyzed each month, this quality assessment programme can be compared with a monthly ring trial. For the prothrombin time (PT/Quick), activated partial thromboplastin time (APTT), fibrinogen assay and thrombin time, data from a survey period (January-December 1985) covering 75 laboratories were evaluated. Calculated results for the methods are given, and their accuracy and precision are compared with the results of former ring trials. Based on these results, the interlaboratory reliability of the methods is discussed, and the advantages of the QCS for blood coagulation in providing better information about the quality of coagulation tests are presented.

16.
17.
18.
EST expression profiling provides an attractive tool for studying differential gene expression, but cDNA libraries' origins and EST data quality are not always known or reported. Libraries may originate from pooled or mixed tissues; EST clustering, EST counts, library annotations and analysis algorithms may contain errors. Traditional data analysis methods, including research into tissue-specific gene expression, assume EST counts to be correct and libraries to be correctly annotated, which is not always the case. Therefore, a method capable of assessing the quality of expression data based on those data alone would be invaluable for assessing the quality of EST data and determining their suitability for mRNA expression analysis. Here we report an approach to the selection of a small generic subset of 244 UniGene clusters suitable for identifying the tissue of origin of EST libraries and for quality control of the expression data using EST expression information alone. We created a small expression matrix of UniGene IDs using two rounds of selection followed by two rounds of optimisation. Our selection procedures differ from traditional approaches to finding "tissue-specific" genes, and our matrix yields consistently high positive correlation values for libraries with confirmed tissues of origin and can be applied for tissue typing and quality control of libraries as small as just a few hundred total ESTs. Furthermore, we can pick up correlations between related tissues, e.g. brain and peripheral nervous tissue, or heart and muscle tissue, and identify tissue origins for a few libraries of uncharacterised tissue identity. It was possible to confirm tissue identity for some libraries which had been derived from cancer tissues or had been normalised. Tissue matching is strongly affected by cancer progression or library normalisation, and our approach may potentially be applied for elucidating the stage of normalisation in normalised libraries or for cancer staging.
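The 244-cluster matrix itself is not given in the abstract; the sketch below only illustrates the general idea of typing a library by correlating its expression profile over a small marker set against reference tissue profiles, with entirely hypothetical clusters, tissues, and counts.

```python
import numpy as np

# Hypothetical reference matrix: rows = marker UniGene clusters, columns = tissues.
tissues = ["brain", "heart", "liver"]
reference = np.array([
    [120,   5,  10],   # cluster enriched in brain
    [  8, 150,  12],   # cluster enriched in heart
    [  6,  90,  20],   # cluster shared by heart-like tissue
    [  4,  10, 200],   # cluster enriched in liver
], dtype=float)

# Hypothetical EST counts for a library of unknown origin, over the same clusters.
library_counts = np.array([100, 4, 9, 12], dtype=float)

# Normalize to relative frequencies, then use Pearson correlation for tissue matching.
ref_freq = reference / reference.sum(axis=0)
lib_freq = library_counts / library_counts.sum()
correlations = [np.corrcoef(lib_freq, ref_freq[:, j])[0, 1] for j in range(len(tissues))]

best = tissues[int(np.argmax(correlations))]
print(dict(zip(tissues, np.round(correlations, 2))), "-> predicted origin:", best)
```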

19.
Data preprocessing, including proper normalization and adequate quality control before complex data mining, is crucial for studies using cDNA microarray technology. We have developed a simple procedure that integrates data filtering and normalization with quantitative quality control of microarray experiments. Previously, we have shown that data variability in a microarray experiment can be captured very well by a quality score q(com) defined for every spot, and that the ratio distribution depends on q(com). Utilizing this knowledge, our data-filtering scheme allows the investigator to decide on the filtering stringency according to the desired data variability, and our normalization procedure corrects the q(com)-dependent dye biases in terms of both the location and the spread of the ratio distribution. In addition, we propose a statistical model for false positive rate determination based on the design and the quality of a microarray experiment. The model predicts that a lower limit of 0.5 for the replicate concordance rate is needed in order to be certain of true positives. Our work demonstrates the importance and advantages of having a quantitative quality control scheme for microarrays.

20.
In recent years, the idea of "cancer big data" has emerged as a result of the significant expansion of fields such as clinical research, genomics, proteomics and public health records. Advances in omics technologies are making a significant contribution to cancer big data in biomedicine and disease diagnosis. The increasing availability of extensive cancer big data has set the stage for the development of multimodal artificial intelligence (AI) frameworks. These frameworks aim to analyze high-dimensional multi-omics data, extracting meaningful information that is challenging to obtain manually. Although interpretability and data quality remain critical challenges, these methods hold great promise for advancing our understanding of cancer biology and improving patient care and clinical outcomes. Here, we provide an overview of cancer big data and explore the applications of both traditional machine learning and deep learning approaches in cancer genomic and proteomic studies. We briefly discuss the challenges and potential of AI techniques in the integrated analysis of omics data, as well as future directions for personalized treatment options in cancer.
