Similar Documents
20 similar documents found
1.
High-throughput screening (HTS) is used in modern drug discovery to screen hundreds of thousands to millions of compounds on selected protein targets. It is an industrial-scale process relying on sophisticated automation and state-of-the-art detection technologies. Quality control (QC) is an integral part of the process and is used to ensure good-quality data and minimize assay variability while maintaining assay sensitivity. The authors describe new QC methods and show numerous real examples from their biologist-friendly Stat Server HTS application, a custom-developed software tool built from the commercially available S-PLUS and Stat Server statistical analysis and server software. This system remotely processes HTS data using powerful and sophisticated statistical methodology but insulates users from the technical details by outputting results in a variety of readily interpretable graphs and tables. It allows users to visualize HTS data and examine assay performance during the HTS campaign to quickly react to or avoid quality problems.

2.
MOTIVATION: High-throughput screening (HTS) plays a central role in modern drug discovery, allowing for testing of >100,000 compounds per screen. The aim of our work was to develop and implement methods for minimizing the impact of systematic error in the analysis of HTS data. To the best of our knowledge, two new data correction methods included in HTS-Corrector are not available in any existing commercial software or freeware. RESULTS: This paper describes HTS-Corrector, a software application for the analysis of HTS data, detection and visualization of systematic error, and corresponding correction of HTS signals. Three new methods for the statistical analysis and correction of raw HTS data are included in HTS-Corrector: background evaluation, well correction and hit-sigma distribution procedures intended to minimize the impact of systematic errors. We discuss the main features of HTS-Corrector and demonstrate the benefits of the algorithms.

3.
The stochastic nature of high-throughput screening (HTS) data indicates that information may be gleaned by applying statistical methods to HTS data. A foundation of parametric statistics is the study and elucidation of population distributions, which can be modeled using modern spreadsheet software. The methods and results described here use fundamental concepts of statistical population distributions analyzed using a spreadsheet to provide tools in a developing armamentarium for extracting information from HTS data. Specific examples using two HTS kinase assays are analyzed. The analyses use normal and gamma distributions, which combine to form mixture distributions. HTS data were found to be described well using such mixture distributions, and deconvolution of the mixtures to the constituent gamma and normal parts provided insight into how the assays performed. In particular, the proportion of hits confirmed was predicted from the original HTS data and used to assess screening assay performance. The analyses also provide a method for determining how hit thresholds (values used to separate active from inactive compounds) affect the proportion of compounds verified as active and how the threshold can be chosen to optimize the selection process.
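The mixture idea above can be sketched numerically. The following is a minimal Monte Carlo illustration (my own, with invented parameters, not the assays from the paper): inactive compounds are drawn from a normal background, actives from a gamma component, and the expected confirmation rate among hits above a given threshold is estimated.

```python
import random

def confirmation_rate(threshold, n=100_000, p_active=0.01, seed=0):
    """Estimate the fraction of above-threshold hits that are truly active.

    Assumed, illustrative parameters: actives ~ Gamma(shape=4, scale=15)
    (mean 60), inactives ~ Normal(0, 10); 1% of the library is active.
    """
    rng = random.Random(seed)
    hits = active_hits = 0
    for _ in range(n):
        if rng.random() < p_active:
            signal = rng.gammavariate(4.0, 15.0)   # active component
            is_active = True
        else:
            signal = rng.gauss(0.0, 10.0)          # inactive background
            is_active = False
        if signal > threshold:
            hits += 1
            active_hits += is_active               # bool counts as 0/1
    return active_hits / hits if hits else 0.0
```

Raising the threshold trades hit count for purity: under these assumed parameters, most compounds above a high threshold come from the gamma (active) component, mirroring the paper's point that the confirmed-hit proportion can be predicted from the fitted mixture.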

4.
High-throughput screening (HTS) has achieved a dominant role in drug discovery over the past 2 decades. The goal of HTS is to identify active compounds (hits) by screening large numbers of diverse chemical compounds against selected targets and/or cellular phenotypes. The HTS process consists of multiple automated steps involving compound handling, liquid transfers, and assay signal capture, all of which unavoidably contribute to systematic variation in the screening data. The challenge is to distinguish biologically active compounds from assay variability. Traditional plate controls-based and non-controls-based statistical methods have been widely used for HTS data processing and active identification by both the pharmaceutical industry and academic sectors. More recently, improved robust statistical methods have been introduced, reducing the impact of systematic row/column effects in HTS data. To apply such robust methods effectively and properly, we need to understand their necessity and functionality. Data from 6 HTS case histories are presented to illustrate that robust statistical methods may sometimes be misleading and can result in more, rather than fewer, false positives or false negatives. In practice, no single method is the best hit detection method for every HTS data set. However, to aid the selection of the most appropriate HTS data-processing and active identification methods, the authors developed a 3-step statistical decision methodology. Step 1 is to determine the most appropriate HTS data-processing method and establish criteria for quality control review and active identification from 3-day assay signal window and DMSO validation tests. Step 2 is to perform a multilevel statistical and graphical review of the screening data to exclude data that fall outside the quality control criteria. Step 3 is to apply the established active criterion to the quality-assured data to identify the active compounds.
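One widely used robust method of the kind discussed above is the B-score, whose core step is Tukey's two-way median polish to remove row/column effects from a plate. A minimal sketch (my own implementation, not the authors' code; the full B-score additionally divides the residuals by their MAD):

```python
from statistics import median

def median_polish(plate, n_iter=10):
    """Return residuals after iteratively removing median row and column effects.

    plate: list of rows, each a list of well measurements.
    """
    resid = [row[:] for row in plate]          # work on a copy
    for _ in range(n_iter):
        for r in resid:                        # sweep row medians
            m = median(r)
            for j in range(len(r)):
                r[j] -= m
        for j in range(len(resid[0])):         # sweep column medians
            m = median(row[j] for row in resid)
            for row in resid:
                row[j] -= m
    return resid
```

On a plate whose signal is purely additive in row and column effects, the residuals go to zero; compounds are then flagged from residuals that stand out after this correction. This robustness is exactly what the case histories caution about: when no row/column trend exists, polishing can distort genuine activity patterns.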

5.
High throughput screening (HTS) is a widely used effective approach in genome-wide association and large scale protein expression studies, drug discovery, and biomedical imaging research. The question of how to accurately identify candidate 'targets' or biologically meaningful features with a high degree of confidence has motivated extensive statistical research aimed at minimizing both false-positive and false-negative rates. A large body of literature on this topic with in-depth statistical content is available. We examine currently available statistical methods for HTS and aim to summarize some selected methods into a concise, easy-to-follow introduction for experimental biologists.

6.
This work describes a novel semi-sequential technique for in silico enhancement of high-throughput screening (HTS) experiments now employed at Novartis. It is used in situations in which the size of the screen is limited by the readout (e.g., high-content screens) or the amount of reagents or tools (proteins or cells) available. By performing computational chemical diversity selection on a per plate basis (instead of a per compound basis), a 25% subset of the 1,000,000-compound collection was selected for the general initial HTS. Statistical models are then generated from target-specific primary results (percentage inhibition data) to drive the cherry picking and testing from the entire collection. Using retrospective analysis of 11 HTS campaigns, the authors show that this method would have captured on average two thirds of the active compounds (IC50 < 10 μM) and three fourths of the active Murcko scaffolds while decreasing screening expenditure by nearly 75%. This result is true for a wide variety of targets, including G-protein-coupled receptors, chemokine receptors, kinases, metalloproteinases, pathway screens, and protein-protein interactions. Unlike time-consuming "classic" sequential approaches that require multiple iterations of cherry picking, testing, and building statistical models, here individual compounds are cherry picked just once, based directly on primary screening data. Strikingly, the authors demonstrate that models built from primary data are as robust as models built from IC50 data. This is true for all HTS campaigns analyzed, which represent a wide variety of target classes and assay types.

7.
We designed and developed NEXUS, a new natural products screening database and related suite of software applications, to utilize the spectacular increases in assay capacity of the modern high throughput screening (HTS) environment. NEXUS not only supports seamless integration with separate HTS systems, but supports user-customized integration with external laboratory automation, particularly sample preparation systems. Designed and developed based on a detailed process model for natural products drug discovery, NEXUS comprises two integrated parts: (1) a single schema of Oracle tables and callable procedures and functions, and (2) software "front-ends" to the database developed using Microsoft Excel and Oracle Discovery/2000. Many of the back-end processing functions were written in Programming Language/Structured Query Language (PL/SQL) to provide an Application Programmer's Interface, which allows end users to create custom applications with little input from information technology professionals.

8.
Following the success of small-molecule high-throughput screening (HTS) in drug discovery, other large-scale screening techniques are currently revolutionizing the biological sciences. Powerful new statistical tools have been developed to analyze the vast amounts of data in DNA chip studies, but have not yet found their way into compound screening. In HTS, characterization of single-point hit lists is often done only in retrospect after the results of confirmation experiments are available. However, for prioritization, for optimal use of resources, for quality control, and for comparison of screens it would be extremely valuable to predict the rates of false positives and false negatives directly from the primary screening results. Making full use of the available information about compounds and controls contained in HTS results and replicated pilot runs, the Z score and from it the p value can be estimated for each measurement. Based on this consideration, we have applied the concept of p-value distribution analysis (PVDA), which was originally developed for gene expression studies, to HTS data. PVDA allowed prediction of all relevant error rates as well as the rate of true inactives, and excellent agreement with confirmation experiments was found.
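The per-measurement Z score and p value that feed PVDA can be sketched against neutral control wells. This is a minimal illustration assuming normally distributed controls; the function name and control-based framing are mine, not the paper's exact estimator:

```python
from statistics import NormalDist, mean, stdev

def z_and_p(value, controls):
    """Z score of one measurement vs. neutral controls, with two-sided p value."""
    mu, sd = mean(controls), stdev(controls)
    z = (value - mu) / sd
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))   # two-sided normal tail
    return z, p
```

Applying this to every well yields the collection of p values whose distribution PVDA analyzes: under the null (all compounds inactive) the p values are uniform, so an excess near zero estimates the true-active rate and, from it, the expected false-positive and false-negative rates.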

9.

Background  

High-throughput screening (HTS) is a key part of the drug discovery process during which thousands of chemical compounds are screened and their activity levels measured in order to identify potential drug candidates (i.e., hits). Many technical, procedural or environmental factors can cause systematic measurement error or inequalities in the conditions in which the measurements are taken. Such systematic error has the potential to critically affect the hit selection process. Several error correction methods and software have been developed to address this issue in the context of experimental HTS [17]. Despite their power to reduce the impact of systematic error when applied to error-perturbed datasets, those methods also have one disadvantage: they introduce a bias when applied to data not containing any systematic error [6]. Hence, we first need to assess the presence of systematic error in a given HTS assay and then apply a systematic error correction method only if its presence has been confirmed by statistical tests.
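The "test before correcting" step above can be sketched with a simple permutation test for row effects on a single plate (my own minimal illustration, not the statistical tests used in the paper): the observed variance of row means is compared against its null distribution under random reshuffling of wells.

```python
import random
from statistics import mean, pvariance

def row_effect_pvalue(plate, n_perm=2000, seed=1):
    """Permutation p-value for a row effect; plate is a list of rows of wells."""
    rng = random.Random(seed)
    stat = pvariance([mean(r) for r in plate])   # observed spread of row means
    flat = [v for row in plate for v in row]
    k = len(plate[0])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(flat)                        # break any row structure
        rows = [flat[i * k:(i + 1) * k] for i in range(len(plate))]
        if pvariance([mean(r) for r in rows]) >= stat:
            hits += 1
    return hits / n_perm
```

A small p-value supports running a correction method; a large one suggests the plate is free of row effects, in which case correction would only introduce the bias the paper warns about.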

10.
The identification of novel therapeutic targets and characterization of their 3D structures is increasing at a dramatic rate. Computational screening methods continue to be developed and improved as credible and complementary alternatives to high-throughput biochemical compound screening (HTS). While the majority of drug candidates currently being developed have been found using HTS methods, high-throughput docking and pharmacophore-based searching algorithms are gaining acceptance and becoming a major source of lead molecules in drug discovery. Refinements and optimization of high-throughput docking methods have led to improvements in reproducing experimental data and in hit rates obtained, validating their use in hit identification. In parallel with virtual screening methods, concomitant developments in cheminformatics, including identification, design and manipulation of drug-like small molecule libraries, have been achieved. Herein, currently used in silico screening techniques and their utility on a comparative and target-dependent basis are discussed.

11.
Within the pharmaceutical industry, significant resources have been applied to the identification of new drug compound leads through the use of high-throughput screening (HTS). To meet the demand for rapid analytical characterization of biologically active samples identified by HTS, the technique of high-performance liquid chromatography–electrospray ionization mass spectrometry (HPLC–ESI-MS) has been utilized, and the application of this technique specifically for the integration of natural product sample mixtures into modern HTS is reviewed. The high resolution provided by reversed-phase HPLC coupled with the gentle and relatively universal ionization facilitated by the electrospray process has had significant impact upon a variety of procedures associated with the HTS of natural products, including extract sample diversity evaluation, dereplication, structure elucidation, preparative isolation, and affinity-based biological activity evaluation.

12.
The drug discovery process pursued by major pharmaceutical companies for many years starts with target identification followed by high-throughput screening (HTS) with the goal of identifying lead compounds. To accomplish this goal, significant resources are invested into automation of the screening process or HTS. Robotic systems capable of handling thousands of data points per day are implemented across the pharmaceutical sector. Many of these systems are amenable to handling cell-based screening protocols as well. On the other hand, as companies strive to develop innovative products based on novel mechanisms of action, one of the current bottlenecks of the industry is the target validation process. Traditionally, bioinformatics and HTS groups operate separately at different stages of the drug discovery process. The authors describe the convergence and integration of HTS and bioinformatics to perform high-throughput target functional identification and validation. As an example of this approach, they initiated a project with a functional cell-based screen for a biological process of interest using libraries of small interfering RNA (siRNA) molecules. In this protocol, siRNAs function as potent gene-specific inhibitors. siRNA-mediated knockdown of the target genes is confirmed by TaqMan analysis, and genes with impacts on biological functions of interest are selected for further analysis. Once the genes are confirmed and further validated, they may be used for HTS to yield lead compounds.

13.
The zebrafish, Danio rerio, a small, tropical freshwater species native to Pakistan and India, has become a National Institutes of Health-sanctioned model organism and, due to its many advantages as an experimental vertebrate, it has garnered intense interest from the world's scientific community. Some have labeled the zebrafish the "vertebrate Drosophila" due to its genetic tractability, small size, low cost, and rapid development. The transparency of the embryo, external development, and the many hundreds of mutant and transgenic lines available add to the allure. It now appears that the zebrafish can be used for high-throughput screening (HTS) of drug libraries in the discovery process of promising new therapeutics. In this review, various types of screening methods are briefly outlined, as are a variety of screens for different disease models, to highlight the range of zebrafish HTS possibilities. High-content screening (HCS) has been available for cell-based screens for some time and, very recently, HCS is being adapted for the zebrafish. This will allow analysis, at high resolution, of drug effects on whole vertebrates; thus, whole body effects as well as those on specific organs and tissues may be determined.

14.
A functional cell-based assay was developed using a generic proprietary assay protocol, based on a membrane-potential sensitive dye, for the identification of small-molecule antagonists against the Kv1.3 potassium ion channel. A high-throughput screen (HTS) was subsequently performed with 20,000 compounds from the Evotec library, preselected using known small molecule antagonists for both sodium and potassium ion channels. Following data analysis, the hit rate was measured at 1.72%, and subsequent dose-response analysis of selected hits showed a high hit confirmation rate, yielding approximately 50 compounds with an apparent IC50 value lower than 10 μM. Subsequent electrophysiological characterization of selected hits confirmed the initial activity and potency of the identified hits on the Kv1.3 target and also selectivity toward Kv1.3 through measurements on HERG as well as Kv1.3-expressing cell lines. Follow-up structure-activity relationship analysis revealed a variety of different clusters distributed throughout the library as well as several singlicates. In comparison to known Kv1.3 blockers, new chemical entities and scaffolds showing potency and selectivity against the Kv1.3 ion channel were detected. In addition, a screening strategy for ion channel drug discovery HTS, medicinal chemistry, and electrophysiology is presented.

15.
The ability to identify active compounds ("hits") from large chemical libraries accurately and rapidly has been the ultimate goal in developing high-throughput screening (HTS) assays. The ability to identify hits from a particular HTS assay depends largely on the suitability or quality of the assay used in the screening. The criteria or parameters for evaluating the "suitability" of an HTS assay for hit identification are not well defined and hence it still remains difficult to compare the quality of assays directly. In this report, a screening window coefficient, called the "Z-factor," is defined. This coefficient is reflective of both the assay signal dynamic range and the data variation associated with the signal measurements, and therefore is suitable for assay quality assessment. The Z-factor is a dimensionless, simple statistical characteristic for each HTS assay. The Z-factor provides a useful tool for comparison and evaluation of the quality of assays, and can be utilized in assay optimization and validation.
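The Z-factor described above is computed from positive and negative control wells as Z = 1 - 3(sd_p + sd_n) / |mean_p - mean_n|; a minimal sketch:

```python
from statistics import mean, stdev

def z_factor(positives, negatives):
    """Screening-window coefficient from positive/negative control well values."""
    mu_p, mu_n = mean(positives), mean(negatives)
    sd_p, sd_n = stdev(positives), stdev(negatives)
    # 1 minus three standard deviations of each control band,
    # relative to the separation between the control means.
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)
```

Z approaches 1 as the control bands tighten relative to their separation; by the commonly used interpretation, values between 0.5 and 1 indicate an assay with a large enough separation band for reliable single-point hit identification.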

16.
MOTIVATION: High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect the hit selection in HTS. The introduced method, called a well correction, proceeds by correcting the distribution of measurements within wells of a given HTS assay. We use simulated and experimental data to illustrate the advantages of the new method compared to other widely used methods of data correction and hit selection in HTS. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
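The well correction idea (adjusting the distribution of measurements at each well position across all plates of an assay) can be sketched as a per-well z-normalization. This is a simplified stand-in for the paper's procedure, not its actual algorithm; function and variable names are mine:

```python
from statistics import mean, stdev

def well_correct(plates):
    """Normalize each well position across plates to zero mean, unit sd.

    plates: list of plates, each a flat list of well values in the same layout.
    """
    n_wells = len(plates[0])
    corrected = [[0.0] * n_wells for _ in plates]
    for w in range(n_wells):
        series = [p[w] for p in plates]          # one well across all plates
        mu, sd = mean(series), stdev(series)
        for i, v in enumerate(series):
            corrected[i][w] = (v - mu) / sd if sd else 0.0
    return corrected
```

After this step, a well that is systematically high or low on every plate (e.g., an edge well) no longer biases hit selection, because each well position is judged against its own across-plate distribution.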

17.
High throughput drug screening has become a critical component of the drug discovery process. The screening of libraries containing hundreds of thousands of compounds has resulted in a requirement for assays and instrumentation that are amenable to nonradioactive formats and that can be miniaturized. Homogeneous assays that minimize upstream automation of the individual assays are also preferable. Fluorometric microvolume assay technology (FMAT) is a fluorescence-based platform for the development of nonradioactive cell- and bead-based assays for HTS. This technology is plate format-independent, and while it was designed specifically for homogeneous ligand binding and immunological assays, it is amenable to any assay utilizing a fluorescent cell or bead. The instrument fits on a standard laboratory bench and consists of a laser scanner that generates a 1 mm² digitized image of a 100-μm-deep section of the bottom of a microwell plate. The instrument is directly compatible with a Zymark Twister™ (Zymark Corp., Hopkinton, MA) for robotic loading of the scanner and unattended operation in HTS mode. Fluorescent cells or beads at the bottom of the well are detected as localized areas of concentrated fluorescence using data processing. Unbound fluorophore comprising the background signal is ignored, allowing for the development of a wide variety of homogeneous assays. The use of FMAT for peptide ligand binding assays, immunofluorescence, apoptosis and cytotoxicity, and bead-based immunocapture assays is described here, along with a general overview of the instrument and software.

18.
High-throughput screening (HTS) of large chemical libraries has become the main source of new lead compounds for drug development. Several specialized detection technologies have been developed to facilitate the cost- and time-efficient screening of millions of compounds. However, concerns have been raised, claiming that different HTS technologies may produce different hits, thus limiting trust in the reliability of HTS data. This study aimed to investigate the reliability of the authors' most frequently used assay techniques: scintillation proximity assay (SPA) and homogeneous time-resolved fluorescence resonance energy transfer (TR-FRET). To investigate the data concordance between these 2 detection technologies, the authors screened a large subset of the Schering compound library consisting of 300,000 compounds for inhibitors of a nonreceptor tyrosine kinase. They chose to set up this study in realistic HTS scale to ensure statistical significance of the results. The findings clearly demonstrate that the choice of detection technology has no significant impact on hit finding, provided that assays are biochemically equivalent. Data concordance is up to 90%. The small differences in hit findings are caused by threshold setting, not by systematic differences between the technologies. The most significant difference between the compared techniques is that in the SPA format, more false-positive primary hits were obtained.

19.
The advent of high-throughput sequencing (HTS) methods has enabled direct approaches to quantitatively profile small RNA populations. However, these methods have been limited by several factors, including representational artifacts and lack of established statistical methods of analysis. Furthermore, massive HTS data sets present new problems related to data processing and mapping to a reference genome. Here, we show that cluster-based sequencing-by-synthesis technology is highly reproducible as a quantitative profiling tool for several classes of small RNA from Arabidopsis thaliana. We introduce the use of synthetic RNA oligoribonucleotide standards to facilitate objective normalization between HTS data sets, and adapt microarray-type methods for statistical analysis of multiple samples. These methods were tested successfully using mutants with small RNA biogenesis (miRNA-defective dcl1 mutant and siRNA-defective dcl2 dcl3 dcl4 triple mutant) or effector protein (ago1 mutant) deficiencies. Computational methods were also developed to rapidly and accurately parse, quantify, and map small RNA data.
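The spike-in normalization step described above can be sketched as scaling each library by its synthetic-standard recovery, so that counts become comparable across data sets. This is a minimal illustration with invented numbers, not the authors' pipeline:

```python
def normalize_to_spikein(counts, spike_count, ref_spike_count):
    """Scale raw small-RNA counts so the synthetic oligo standard
    matches a chosen reference library's standard count."""
    factor = ref_spike_count / spike_count
    return [c * factor for c in counts]
```

Because every library received the same known amount of synthetic oligoribonucleotide, differences in its recovered count reflect technical depth/efficiency differences, and dividing them out puts biological counts on a common scale before microarray-style statistics are applied.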

20.
Oncology drug discovery is, by definition, a target-rich enterprise. High-throughput screening (HTS) laboratories have supported a wide array of molecularly targeted and chemical genomic approaches for anticancer lead generation, and the number of hits emerging from such campaigns has increased dramatically. Although automation of HTS processes has eliminated primary screening as a bottleneck, the demands on secondary screening in appropriate cell-based assays have increased concomitantly with the numbers of hits delivered to therapeutic area laboratories. The authors describe herein the implementation of a novel platform using off-the-shelf solutions that have allowed them to efficiently characterize hundreds of HTS hits using a palette of Western blot-based pharmacodynamic assays. The platform employs a combination of a flatbed bufferless SDS-PAGE system, a dry ultra-rapid electroblotting apparatus, and a highly sensitive and quantitative infrared imaging system. Cumulatively, this platform has significantly reduced the cycle time for HTS hit evaluation. In addition, the routine use of this platform has resulted in higher quality data that have allowed the development of structure-activity databases that have tangibly improved lead optimization. The authors describe in detail the application of this platform, designated the Accelerated Pharmaco-Dynamic Profiler (APDP), to the annotation of inhibitors of 2 attractive oncology targets, BRAF kinase and Hsp90.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号