Similar Articles
20 similar articles found.
1.
High-throughput screening (HTS) is an efficient technology for drug discovery. It allows for screening of more than 100,000 compounds a day per screen and requires effective procedures for quality control. The authors have developed a method for evaluating the background surface of an HTS assay; it can be used to correct raw HTS data. This correction is necessary to take into account systematic errors that may affect the hit selection procedure. The described method allows one to analyze experimental HTS data and determine trends and local fluctuations of the corresponding background surfaces. For an assay with a large number of plates, the deviations of the background surface from a plane are caused by systematic errors, and their influence can be minimized by subtracting the systematic background from the raw data. Two experimental HTS assays from the ChemBank database are examined in this article. The systematic error present in these data was estimated and removed, which enabled the authors to correct the hit selection procedure for both assays.
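A minimal sketch of the background-correction idea described in this abstract (an illustration of the general approach, not the authors' published algorithm): estimate the systematic surface from per-well medians across the normalized plates of an assay, then subtract it from each plate.

```python
import numpy as np

def correct_background(plates):
    """plates: array of shape (n_plates, n_rows, n_cols) of raw HTS signals."""
    # Normalize each plate (z-score) so plate-to-plate scale differences
    # do not leak into the background estimate.
    mu = plates.mean(axis=(1, 2), keepdims=True)
    sd = plates.std(axis=(1, 2), keepdims=True)
    z = (plates - mu) / sd
    # The median across plates at each well position approximates the
    # systematic background surface; random noise averages out.
    background = np.median(z, axis=0)
    # Subtracting the surface leaves residuals dominated by true activity.
    return z - background

rng = np.random.default_rng(0)
raw = rng.normal(100, 10, size=(50, 16, 24)) + np.linspace(0, 15, 24)  # column drift
corrected = correct_background(raw)
print(corrected.mean(axis=(0, 1)).round(2))  # column trend largely removed
```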

2.
High-throughput screening (HTS) of large-scale RNA interference (RNAi) libraries has become an increasingly popular method of functional genomics in recent years. Cell-based assays used for RNAi screening often produce small dynamic ranges and significant variability because of the combination of cellular heterogeneity, transfection efficiency, and the intrinsic nature of the genes being targeted. These properties make reliable hit selection in an RNAi screen a difficult task. The use of robust methods based on the median and the median absolute deviation (MAD) has been suggested to improve hit selection in such cases, but mean and standard deviation (SD)-based methods still predominate in many RNAi HTS campaigns. In an experimental comparison of these 2 methods, a genome-scale small interfering RNA (siRNA) screen was performed to identify novel targets that increase the therapeutic index of the chemotherapeutic agent mitomycin C (MMC). MAD values were resistant to the presence of outliers, and the hits selected by the MAD-based method included all the hits that would be selected by the SD-based method as well as a significant number of additional hits. When retested in triplicate, a similar percentage of these siRNAs were shown to genuinely sensitize cells to MMC compared with the hits shared between the SD- and MAD-based methods. Confirmed hits were enriched for genes involved in the DNA damage response and cell cycle regulation, validating the overall hit selection strategy. Finally, computer simulations showed the superiority and generality of the MAD-based method across various RNAi HTS data models. In conclusion, the authors demonstrate that the MAD-based hit selection method rescued physiologically relevant false negatives that would have been missed by the SD-based method, and they propose it as the preferred first-choice hit selection method for RNAi screen results.
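The contrast between the two estimators can be sketched in a few lines (illustrative data and a conventional 3-fold cutoff, not the paper's exact parameters); a handful of strong outliers inflates the mean and SD enough to mask genuine hits that the median/MAD rule still catches.

```python
import numpy as np

def hits_sd(x, k=3.0):
    return np.where(x < x.mean() - k * x.std())[0]    # low signal = sensitizer

def hits_mad(x, k=3.0):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))         # scaled to match SD under normality
    return np.where(x < med - k * mad)[0]

rng = np.random.default_rng(1)
x = rng.normal(1.0, 0.1, 384)                         # one 384-well plate
x[[5, 50, 200]] = 0.6                                 # true sensitizing siRNAs
x[[10, 11, 12]] = 2.5                                 # outliers inflating mean and SD
print("SD-based: ", hits_sd(x))                       # misses the true hits
print("MAD-based:", hits_mad(x))                      # recovers them
```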

3.
MOTIVATION: High-throughput screening (HTS) is an early-stage process in drug discovery which allows thousands of chemical compounds to be tested in a single study. We report a method for correcting HTS data prior to the hit selection process (i.e. selection of active compounds). The proposed correction minimizes the impact of systematic errors which may affect the hit selection in HTS. The introduced method, called a well correction, proceeds by correcting the distribution of measurements within wells of a given HTS assay. We use simulated and experimental data to illustrate the advantages of the new method compared to other widely used methods of data correction and hit selection in HTS. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
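A sketch of what a well correction can look like, assuming the procedure amounts to standardizing the distribution of measurements at each well position across all plates of an assay (the published method may differ in detail):

```python
import numpy as np

def well_correction(plates):
    """plates: (n_plates, n_rows, n_cols). Z-score each well position across
    plates, removing well-specific location and scale biases."""
    well_mean = plates.mean(axis=0)
    well_sd = plates.std(axis=0)
    return (plates - well_mean) / well_sd

rng = np.random.default_rng(0)
bias = rng.normal(0, 0.5, size=(16, 24))              # fixed per-well bias
raw_plates = rng.normal(0, 1, size=(100, 16, 24)) + bias
corrected = well_correction(raw_plates)               # hits: e.g., corrected < -3
```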

4.
The synthesis and structure-activity relationships (SAR) of a series of indane and tetralin inhibitors of the type 1 glycine transporter, derived from a high-throughput screening (HTS) hit, are described. Key modifications that reduced the 5HT1B receptor affinity of the HTS hit and the P450 2D6 inhibition of subsequent analogues are delineated. While these modifications led to potent and selective GlyT1 inhibitors, hERG affinity and human microsomal clearance remained issues for this series of compounds.

5.
Following the success of small-molecule high-throughput screening (HTS) in drug discovery, other large-scale screening techniques are currently revolutionizing the biological sciences. Powerful new statistical tools have been developed to analyze the vast amounts of data in DNA chip studies but have not yet found their way into compound screening. In HTS, characterization of single-point hit lists is often done only in retrospect, after the results of confirmation experiments are available. However, for prioritization, for optimal use of resources, for quality control, and for comparison of screens, it would be extremely valuable to predict the rates of false positives and false negatives directly from the primary screening results. Making full use of the available information about compounds and controls contained in HTS results and replicated pilot runs, the Z score, and from it the p value, can be estimated for each measurement. Based on this consideration, we have applied the concept of p-value distribution analysis (PVDA), which was originally developed for gene expression studies, to HTS data. PVDA allowed prediction of all relevant error rates as well as the rate of true inactives, and excellent agreement with confirmation experiments was found.
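A compact sketch of the PVDA idea on one-sided z-score p values, using a Storey-style estimator for the rate of true inactives (the paper's estimator may differ in detail):

```python
import numpy as np
from scipy import stats

def pvda(values, neutral_mean, neutral_sd, lam=0.5):
    z = (values - neutral_mean) / neutral_sd
    p = stats.norm.sf(z)                       # one-sided: large signal = active
    # Null p-values are uniform; the density above `lam` estimates the
    # proportion of true inactives (pi0), from which error rates follow.
    pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))
    return p, pi0

rng = np.random.default_rng(2)
inactives = rng.normal(0, 1, 9500)
actives = rng.normal(4, 1, 500)
p, pi0 = pvda(np.concatenate([inactives, actives]), 0.0, 1.0)
print(f"estimated rate of true inactives: {pi0:.3f}")  # close to 0.95
```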

6.
7.
Human African trypanosomiasis (HAT) is caused by two trypanosome subspecies, Trypanosoma brucei rhodesiense and Trypanosoma brucei gambiense. Drugs available for the treatment of HAT have significant issues related to difficult administration regimens and limited efficacy across species and disease stages. Hence, there is a considerable need for new, less toxic alternatives. One approach to identifying starting points for new drug candidates is high-throughput screening (HTS) of large compound library collections. We describe the application of an Alamar Blue-based, 384-well HTS assay to screen a library of 87,296 compounds against the related subspecies Trypanosoma brucei brucei bloodstream form lister 427. Primary hits identified against T.b. brucei were retested, and IC50 values were estimated for T.b. brucei and the mammalian cell line HEK293 to determine a selectivity index for each compound. The screening campaign identified 205 compounds with greater than 10-fold selectivity for T.b. brucei. Cluster analysis of these compounds, taking into account chemical and structural properties required for drug-like compounds, afforded a panel of eight compounds for further biological analysis. These compounds had IC50 values ranging from 0.22 µM to 4 µM, with associated selectivity indices ranging from 19 to greater than 345. Further testing against T.b. rhodesiense led to the selection of 6 compounds from 5 new chemical classes with activity against the causative species of HAT, which can be considered potential candidates for early HAT drug discovery. Structure-activity relationship (SAR) mining revealed components of these hit compound structures that may be important for biological activity. Four of these compounds have undergone further testing to 1) determine whether they are cidal or static in vitro at the minimum inhibitory concentration (MIC) and 2) estimate the time to kill.

8.
A severe drawback of the high-throughput screening (HTS) process is the unintentional (random) occurrence of false positives and false negatives. Their rates depend, among other factors, on the screening process being applied and the target class. Although false positives can be sorted out in subsequent process steps, their occurrence can lead to increased project cost. More fundamentally, it is not possible to rescue false nonhits. In this article, we investigate the prediction of the primary hit rate, the hit confirmation rate, and the false-positive and false-negative rates. Results are presented for approximately 2800 compounds tested in a pilot screen run ahead of the primary screening work; this pilot screen is done at several concentrations and in replicates. The rates are predicted as a function of the proposed hit threshold by having the replicates serve as each other's confirmers, and confidence limits are attached to the predictions by means of a resampling scheme. A comparison of the resampled rates with the primary hit rate and the confirmation rates obtained during the screening campaign shows how accurate this method is. Hence, the "optimal" compound concentration for the screen, as well as the optimal hit threshold corresponding to low false rates, can be determined before starting the subsequent screening campaign.
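A sketch of the replicates-as-confirmers idea with bootstrap confidence limits (illustrative; the authors' exact resampling scheme is not reproduced here):

```python
import numpy as np

def rates_vs_threshold(rep1, rep2, threshold):
    hit1, hit2 = rep1 >= threshold, rep2 >= threshold
    primary = hit1.mean()                                  # primary hit rate
    confirm = hit2[hit1].mean() if hit1.any() else np.nan  # confirmation rate
    return primary, confirm

def bootstrap_ci(rep1, rep2, threshold, n_boot=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(rep1)
    draws = [rates_vs_threshold(rep1[i], rep2[i], threshold)
             for i in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.nanpercentile(np.array(draws), [2.5, 97.5], axis=0)

rng = np.random.default_rng(5)
r1 = rng.normal(0, 1, 2800)                        # pilot screen, replicate 1
r2 = 0.7 * r1 + rng.normal(0, 0.7, 2800)           # replicate 2, correlated
lo, hi = bootstrap_ci(r1, r2, threshold=2.0)
print("95% CI for (primary, confirmation) rates:", lo.round(3), hi.round(3))
```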

9.
High-throughput screening (HTS) has become an essential part of the drug discovery process. Because of rising requirements for both data quality and quantity, along with increased screening cost and the demand to shorten the time for lead identification, increasing throughput and cost-effectiveness has become a necessity in the hit identification process. The authors present a multiplexed HTS for 2 nuclear receptors, the farnesoid X-activated receptor and the peroxisome proliferator-activated receptor delta, in a viable cell-based reporter gene assay. The 2 nuclear receptors were individually transfected into human hepatoma cells, and the transiently transfected cell lines were pooled for the multiplexed screen. Hits identified by the multiplexed screen are similar to those identified by the individual receptor screens. Furthermore, the multiplexed screen provides selectivity information if ligands selective for one and not the other receptor are among the hit criteria. The data demonstrate that multiplexing nuclear receptors can be a simple, efficient, cost-effective, and reliable alternative to traditional HTS of individual targets without compromising data quality.

10.
High-throughput screening (HTS) has achieved a dominant role in drug discovery over the past 2 decades. The goal of HTS is to identify active compounds (hits) by screening large numbers of diverse chemical compounds against selected targets and/or cellular phenotypes. The HTS process consists of multiple automated steps involving compound handling, liquid transfers, and assay signal capture, all of which unavoidably contribute to systematic variation in the screening data. The challenge is to distinguish biologically active compounds from assay variability. Traditional plate controls-based and non-controls-based statistical methods have been widely used for HTS data processing and active identification by both the pharmaceutical industry and the academic sector. More recently, improved robust statistical methods have been introduced, reducing the impact of systematic row/column effects in HTS data. To apply such robust methods effectively and properly, we need to understand their necessity and functionality. Data from 6 HTS case histories are presented to illustrate that robust statistical methods may sometimes be misleading and can result in more, rather than fewer, false positives or false negatives. In practice, no single method is the best hit detection method for every HTS data set. However, to aid the selection of the most appropriate HTS data-processing and active identification methods, the authors developed a 3-step statistical decision methodology. Step 1 is to determine the most appropriate HTS data-processing method and establish criteria for quality control review and active identification from 3-day assay signal window and DMSO validation tests. Step 2 is to perform a multilevel statistical and graphical review of the screening data to exclude data that fall outside the quality control criteria. Step 3 is to apply the established active criterion to the quality-assured data to identify the active compounds.
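The row/column adjustment these case histories scrutinize is commonly implemented as a median polish, the core of the B-score; a minimal version is sketched below. As the abstract warns, applying it blindly can backfire when true actives cluster in particular rows or columns.

```python
import numpy as np

def median_polish(plate, n_iter=10):
    """Iteratively remove row and column medians, leaving residuals that are
    free of additive row/column effects."""
    r = plate.astype(float).copy()
    for _ in range(n_iter):
        r -= np.median(r, axis=1, keepdims=True)   # row effects
        r -= np.median(r, axis=0, keepdims=True)   # column effects
    return r

rng = np.random.default_rng(6)
plate = np.arange(16)[:, None] * 0.5 + rng.normal(0, 1, (16, 24))  # row gradient
residuals = median_polish(plate)                   # gradient removed
```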

11.
Lead discovery in the pharmaceutical environment is largely an industrial-scale process in which it is typical to screen 1-5 million compounds in a matter of weeks using high-throughput screening (HTS). This process is a very costly endeavor: an HTS campaign of 1 million compounds will typically cost anywhere from $500,000 to $1,000,000. There is consequently a great deal of pressure to maximize the return on investment by finding faster and more effective ways to screen. A panacea that has emerged over the past few years to help address this issue is in silico screening, which is now incorporated in all areas of lead discovery, from target identification and library design to hit analysis and compound profiling. However, as lead discovery has evolved over the past few years, so has the role of in silico screening.

12.
Surface plasmon resonance (SPR) is rarely used as a primary high-throughput screening (HTS) tool in fragment-based approaches. With SPR instruments becoming increasingly high throughput, it is now possible to use SPR as a primary tool for fragment finding. SPR therefore becomes a valuable tool for screening difficult targets such as the ubiquitin E3 ligase Parkin. As a prerequisite for the screen, a large number of SPR tests were performed to characterize and validate the active form of Parkin. A set of compounds was designed and used to define optimal SPR assay conditions for this fragment screen. Under these conditions, more than 5000 preselected fragments from our in-house library were screened for binding to Parkin. Additionally, all fragments were simultaneously screened for binding to two off-target proteins to exclude promiscuous binders. A low hit rate was observed, in line with hit rates usually obtained by other HTS assays. All hits were further tested in dose-response experiments on the target protein by SPR for confirmation before being channeled into nuclear magnetic resonance (NMR) and other hit-confirmation assays.

13.
The process of identifying active compounds (hits) in high-throughput screening (HTS) usually involves 2 steps: first, removing or adjusting for systematic variation in the measurement process, so that extreme values represent strong biological activity rather than systematic biases such as plate or edge effects, and second, choosing a meaningful cutoff on the calculated statistic to declare positive compounds. Both false-positive and false-negative errors are inevitable in this process. Common control or estimation of error rates is often based on an assumption of normally distributed noise. The error rates in hit detection, especially false-negative rates, are hard to verify because in most assays, only compounds selected in primary screening are followed up in confirmation experiments. In this article, the authors take advantage of a quantitative HTS experiment in which all compounds were tested 42 times over a wide range of 14 concentrations, so that true positives can be identified through a dose-response curve. Using the activity status defined by the dose curve, the authors analyzed the effect of various data-processing procedures on the sensitivity and specificity of hit detection, the control of error rates, and hit confirmation. A new summary score is proposed and demonstrated to perform well in hit detection and to be useful in confirmation rate estimation. In general, adjusting for positional effects is beneficial, but a robust test can prevent overadjustment. Error rates estimated under the normal assumption do not agree with actual error rates because the tails of the noise distribution deviate from normality. However, the false discovery rate based on an empirically estimated null distribution is very close to the observed false discovery proportion.
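The closing observation can be illustrated with an empirical-null FDR estimate, reading the null tail area from control measurements rather than from N(0,1) (an illustrative sketch, not the authors' exact procedure):

```python
import numpy as np

def empirical_fdr(scores, null_scores, threshold):
    """FDR at a threshold = expected false hits / observed hits, with the
    false-hit rate taken from the empirical null rather than normal tails."""
    tail = np.mean(null_scores >= threshold)       # empirical null tail area
    n_hits = np.sum(scores >= threshold)
    return min(1.0, tail * len(scores) / max(n_hits, 1))

rng = np.random.default_rng(7)
null = rng.standard_t(5, 50000)                    # heavier tails than N(0,1)
obs = np.concatenate([rng.standard_t(5, 9800), rng.normal(6, 1, 200)])
# A normal assumption (tail ~ 3e-5 at z=4) would grossly understate the FDR.
print(f"empirical FDR at z=4: {empirical_fdr(obs, null, 4.0):.2f}")
```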

14.
Jenkins JL, Kao RY, Shapiro R. Proteins. 2003;50(1):81-93.
"Hit lists" generated by high-throughput screening (HTS) typically contain a large percentage of false positives, making follow-up assays necessary to distinguish active from inactive substances. Here we present a method for improving the accuracy of HTS hit lists by computationally based virtual screening (VS) of the corresponding chemical libraries and selecting hits by HTS/VS consensus. This approach was applied in a case study on the target-enzyme angiogenin, a potent inducer of angiogenesis. In conjunction with HTS of the National Cancer Institute Diversity Set and ChemBridge DIVERSet E (approximately 18,000 compounds total), VS was performed with two flexible library docking/scoring methods, DockVision/Ludi and GOLD. Analysis of the results reveals that dramatic enrichment of the HTS hit rate can be achieved by selecting compounds in consensus with one or both of the VS functions. For example, HTS hits ranked in the top 2% by GOLD included 42% of the true hits, but only 8% of the false positives; this represents a sixfold enrichment over the HTS hit rate. Notably, the HTS/VS method was effective in selecting out inhibitors with midmicromolar dissociation constants typical of leads commonly obtained in primary screens.  相似文献   

15.
Exploring various cyclization strategies, using submicromolar pyrazole HTS hit 6 as a starting point, a novel indazole-based CCR1 antagonist core was discovered. This report presents the design and SAR of CCR1 indazole and azaindazole antagonists, leading to the identification of three development compounds, including 19e, which was advanced to early clinical trials.

16.
Frequent hitters are compounds that are detected as a "hit" in multiple high-throughput screening (HTS) assays. Such behavior can be specific (e.g., related to a target family) or nonspecific (e.g., reactive compounds), or can result from a combination of the two. Detecting such hits while predicting the underlying reason for their promiscuous behavior is desirable because it provides valuable information not only about the compounds themselves but also about the assay methodology and target classes at hand. This information can also greatly reduce the cost and time of HTS hit profiling. The present study exemplifies how to mine large HTS data repositories, such as the one at Boehringer Ingelheim, to identify frequent hitters, gain further insight into the causes of promiscuous behavior, and generate models for predicting promiscuous compounds. Applications of this approach are demonstrated using two recent large-scale HTS assays. The authors believe this analysis and its concrete applications are valuable tools for streamlining and accelerating decision making during hit discovery.
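A minimal way to flag frequent hitters from an HTS data warehouse is to count how often each compound scores as a hit across historical assays and flag implausibly high hit frequencies; the column names and cutoffs below are hypothetical.

```python
import pandas as pd

def flag_frequent_hitters(results: pd.DataFrame, min_assays=10, max_rate=0.25):
    """results: one row per (compound_id, assay_id) with a boolean is_hit column."""
    g = results.groupby("compound_id")["is_hit"].agg(["count", "mean"])
    # Require enough assays for the rate to be meaningful, then flag outliers.
    flagged = g[(g["count"] >= min_assays) & (g["mean"] > max_rate)]
    return flagged.sort_values("mean", ascending=False)
```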

17.
The stochastic nature of high-throughput screening (HTS) data indicates that information may be gleaned by applying statistical methods to HTS data. A foundation of parametric statistics is the study and elucidation of population distributions, which can be modeled using modern spreadsheet software. The methods and results described here use fundamental concepts of statistical population distributions, analyzed with a spreadsheet, to provide tools in a developing armamentarium for extracting information from HTS data. Specific examples using two HTS kinase assays are analyzed. The analyses use normal and gamma distributions, which combine to form mixture distributions. HTS data were found to be described well by such mixture distributions, and deconvolution of the mixtures into their constituent gamma and normal parts provided insight into how the assays performed. In particular, the proportion of hits confirmed was predicted from the original HTS data and used to assess screening assay performance. The analyses also provide a method for determining how hit thresholds (values used to separate active from inactive compounds) affect the proportion of compounds verified as active and how the threshold can be chosen to optimize the selection process.
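A sketch of the mixture idea in code rather than a spreadsheet: fit a normal (inactive) plus gamma (active) mixture by maximum likelihood, then read off the expected confirmed fraction above a candidate hit threshold. Starting values, parameterization, and data are illustrative.

```python
import numpy as np
from scipy import stats, optimize

def _unpack(params):
    w, mu, sd, a, scale = params
    # Keep the optimizer inside a valid parameter region.
    return min(max(w, 1e-3), 0.5), mu, abs(sd) + 1e-6, abs(a) + 1e-6, abs(scale) + 1e-6

def neg_log_lik(params, x):
    w, mu, sd, a, scale = _unpack(params)
    pdf = (1 - w) * stats.norm.pdf(x, mu, sd) + w * stats.gamma.pdf(x, a, scale=scale)
    return -np.sum(np.log(pdf + 1e-300))

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0, 8, 9000),        # inactive (noise) component
                    rng.gamma(3.0, 15.0, 1000)])   # active (right-tail) component
res = optimize.minimize(neg_log_lik, x0=[0.05, 0.0, 5.0, 2.0, 10.0], args=(x,),
                        method="Nelder-Mead", options={"maxiter": 5000})
w, mu, sd, a, scale = _unpack(res.x)
thr = 15.0                                         # candidate hit threshold (% inhibition)
p_active = w * stats.gamma.sf(thr, a, scale=scale)           # true actives above cut
p_hit = p_active + (1 - w) * stats.norm.sf(thr, mu, sd)      # all compounds above cut
print(f"predicted confirmed fraction above {thr}: {p_active / p_hit:.2f}")
```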

18.
This article reports a successful application of support vector machines (SVMs) in mining high-throughput screening (HTS) data from a type I methionine aminopeptidase (MetAP) inhibition study. A library of 43,736 small organic molecules was used in the study, and the 1355 compounds in the library with 40% or higher inhibition activity were considered active. The data set was randomly split into a training set and a test set (3:1 ratio). The authors were able to rank compounds in the test set using decision values predicted by SVM models built on the training set. They defined a novel score, PT50, the percentage of the test set that must be screened to recover 50% of the actives, to measure the performance of the models. With carefully selected parameters, SVM models increased the hit rates significantly, and 50% of the active compounds could be recovered by screening just 7% of the test set. The authors found that the size of the training set played a significant role in the performance of the models; a training set of 10,000 compounds is likely the minimum required to build a model with reasonable predictive power.
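The evaluation is easy to reproduce in outline: rank test-set compounds by SVM decision values and compute PT50 as the fraction of the ranked list needed to recover half the actives. The fingerprints below are random stand-ins for real chemical descriptors.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def pt50(y_true, decision_values):
    order = np.argsort(-decision_values)            # best-ranked compounds first
    recall = np.cumsum(y_true[order]) / y_true.sum()
    return (np.argmax(recall >= 0.5) + 1) / len(y_true)

rng = np.random.default_rng(4)
X = rng.normal(size=(4000, 64))                     # surrogate descriptors
y = (X[:, :4].sum(axis=1) + rng.normal(0, 1, 4000) > 3).astype(int)  # ~9% active
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)
score = pt50(y_te, clf.decision_function(X_te))
print(f"PT50 = {score:.1%} of the test set")        # random ranking would give ~50%
```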

19.
The drug discovery process pursued by major pharmaceutical companies for many years starts with target identification, followed by high-throughput screening (HTS) with the goal of identifying lead compounds. To accomplish this goal, significant resources are invested in automation of the screening process. Robotic systems capable of handling thousands of data points per day have been implemented across the pharmaceutical sector, and many of these systems are amenable to cell-based screening protocols as well. On the other hand, as companies strive to develop innovative products based on novel mechanisms of action, one of the industry's current bottlenecks is the target validation process. Traditionally, bioinformatics and HTS groups operate separately at different stages of the drug discovery process. The authors describe the convergence and integration of HTS and bioinformatics to perform high-throughput target functional identification and validation. As an example of this approach, they initiated a project with a functional cell-based screen for a biological process of interest using libraries of small interfering RNA (siRNA) molecules. In this protocol, siRNAs function as potent gene-specific inhibitors. siRNA-mediated knockdown of the target genes is confirmed by TaqMan analysis, and genes with impacts on biological functions of interest are selected for further analysis. Once the genes are confirmed and further validated, they may be used for HTS to yield lead compounds.

20.
High-throughput screening (HTS) of large chemical libraries has become the main source of new lead compounds for drug development. Several specialized detection technologies have been developed to facilitate cost- and time-efficient screening of millions of compounds. However, concerns have been raised that different HTS technologies may produce different hits, limiting trust in the reliability of HTS data. This study investigated the reliability of the authors' most frequently used assay techniques: the scintillation proximity assay (SPA) and homogeneous time-resolved fluorescence resonance energy transfer (TR-FRET). To investigate the data concordance between these 2 detection technologies, the authors screened a large subset of the Schering compound library, consisting of 300,000 compounds, for inhibitors of a nonreceptor tyrosine kinase. The study was set up at realistic HTS scale to ensure statistical significance of the results. The findings clearly demonstrate that the choice of detection technology has no significant impact on hit finding, provided that the assays are biochemically equivalent; data concordance is up to 90%. The small differences in hit finding are caused by threshold setting, not by systematic differences between the technologies. The most significant difference between the compared techniques is that the SPA format yielded more false-positive primary hits.
