A simple strategy for mitigating the effect of data variability on the identification of active chemotypes from high-throughput screening data
Authors: Stephen R. Johnson, Ramesh Padmanabha, Wayne Vaccaro, Mark Hermsmeier, Angela Cacace, Mike Lawrence, Joyce Dickey, Kim Esposito, Kristen Pike, Victoria Wong, Michael Poss, Deborah Loughney, Andrew Tebben
Institution: Pharmaceutical Research Institute, Bristol-Myers Squibb, Princeton, NJ 08543-4000, USA. stephen.johnson@bms.com
Abstract: Among the several goals of a high-throughput screening campaign is the identification of as many active chemotypes as possible for further evaluation. Often, however, the number of concentration-response curves (e.g., IC50 or Ki determinations) that can be collected following a primary screen is limited by practical constraints such as protein supply, screening workload, and so forth. One possible approach to this dilemma is to cluster the hits from the primary screen and sample only a few compounds from each cluster. This raises the question of how many compounds must be selected from a cluster to ensure that an active compound is identified, if one exists at all. This article addresses this question using a Monte Carlo simulation in which the success of sampling is directly linked to the variability of the screening data. Furthermore, the authors demonstrate that replicated compounds in the screening collection allow this variability to be assessed easily and provide a priori guidance to the screener and chemist on the extent of sampling required to maximize chemotype identification during the triage process. The individual steps of the Monte Carlo simulation provide insight into the correspondence between percentage inhibition and the eventual IC50 curves.
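As an illustration of the kind of Monte Carlo simulation the abstract describes (a minimal sketch, not the authors' code), the Python snippet below estimates the probability that following up the k apparently most potent compounds from a cluster recovers at least one true active, given a specified level of percent-inhibition variability. The cluster size, the assumed inhibition values for actives and inactives, the noise level sigma, and the activity threshold are all illustrative placeholders; in practice, sigma would be estimated from the replicated compounds in the screening collection.

import numpy as np

rng = np.random.default_rng(0)

def p_recover_active(n_cluster=20, n_active=5, sigma=10.0,
                     threshold=50.0, k=3, n_trials=10000):
    """Probability that sampling the top-k compounds (ranked by noisy
    percent inhibition) from a cluster captures at least one true active.
    All parameter values are assumptions for illustration only."""
    # Assumed "true" percent-inhibition values: actives well above the
    # threshold, inactives well below it.
    true_inh = np.concatenate([np.full(n_active, 70.0),
                               np.full(n_cluster - n_active, 20.0)])
    hits = 0
    for _ in range(n_trials):
        # Add assay noise to the true values to mimic screening variability.
        observed = true_inh + rng.normal(0.0, sigma, size=n_cluster)
        top_k = np.argsort(observed)[-k:]          # compounds selected for follow-up
        if np.any(true_inh[top_k] > threshold):    # did we catch a real active?
            hits += 1
    return hits / n_trials

# Example: how the recovery probability grows with the number sampled per cluster.
for k in (1, 2, 3, 5):
    print(k, p_recover_active(k=k))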
Indexed in PubMed and other databases.