Similar Literature
20 similar documents found (search time: 31 ms)
1.
Tools for target identification and validation   (cited by 3: 0 self-citations, 3 by others)

2.
The advent of high-throughput DNA sequencing techniques, array technology and protein analysis has increased the efficiency of research in bovine muscle physiology, with the ultimate objective of improving beef quality through either breeding or rearing factors. For genetic purposes, polymorphisms in several key genes have been reported to be associated with beef quality traits. The sequencing of the bovine genome has dramatically increased the number of available gene polymorphisms. Associating these new polymorphisms with variability in beef quality (e.g. tenderness, marbling) across breeds and rearing systems will be a very important issue. For rearing purposes, global gene expression profiling at the mRNA or protein level has already shown that previously unsuspected genes may be associated with muscle development or growth, and may lead to new molecular indicators of tenderness or marbling. Some of these genes are specifically regulated by genetic and nutritional factors, or differ between beef cuts. In recognition of the potential economic benefits of genomics, public institutions, in association with the beef industry, are developing livestock genomics projects around the world. From the scientific, technical and economic points of view, genomics is thus reshaping research on beef quality.
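As a hedged illustration of the marker-trait association analysis this abstract describes, the Python sketch below runs a one-way ANOVA of a tenderness proxy (Warner-Bratzler shear force) across the genotype classes of a hypothetical SNP; the marker, trait values and sample sizes are invented for illustration only.

```python
# Minimal sketch of a single-marker association test: one-way ANOVA of a
# beef-quality trait across the three genotype classes of a hypothetical SNP.
from scipy.stats import f_oneway

# Hypothetical shear-force measurements (kg) grouped by genotype.
shear_force = {
    "AA": [4.1, 3.8, 4.5, 4.0, 3.9],
    "AG": [4.6, 4.9, 4.4, 5.0, 4.7],
    "GG": [5.4, 5.1, 5.6, 5.2, 5.5],
}

f_stat, p_value = f_oneway(*shear_force.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
# A small p-value suggests the marker is associated with tenderness; real
# studies must also adjust for breed, rearing system and multiple testing.
```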

3.
Functional genomics is inundating the pharmaceutical industry with large numbers of potential gene targets from sources such as gene expression profiling experiments (DNA microchips, proteomics) and database mining. Oligonucleotide-based RNA knockdown technologies such as antisense or RNA interference can aid in filtering and prioritizing target candidates in the drug discovery process.

4.
Proteomics has become an important approach for investigating cellular processes and network functions. Significant improvements have been made during the last few years in technologies for high-throughput proteomics, both in data analysis software and in mass spectrometry hardware. As proteomics technologies advance and become more widely accessible, efforts to catalogue and quantify full proteomes are underway to complement other genomics approaches, such as RNA and metabolite profiling. Of particular interest is the application of proteome data to improve genome annotation and to attach information on post-translational protein modifications to the annotation of the corresponding gene. This type of analysis requires a paradigm shift, because amino acid sequences must be assigned to peptides without relying on existing protein databases. In this review, advances and current limitations of full proteome analysis are briefly highlighted using the model plant Arabidopsis thaliana as an example, and strategies for identifying peptides from MS/MS data in a protein database-independent approach are discussed.
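To make the database-independent idea concrete, here is a minimal Python sketch of the arithmetic de novo approaches rely on: computing a peptide's monoisotopic mass and m/z from its bare amino acid sequence, using standard residue masses and assuming no modifications. The example peptide is hypothetical.

```python
# Monoisotopic residue masses (Da); a peptide's neutral mass is the sum of
# its residue masses plus one water molecule for the termini.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
    "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
    "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
    "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER, PROTON = 18.01056, 1.00728

def peptide_mz(sequence: str, charge: int = 1) -> float:
    """m/z of a peptide ion, assuming no post-translational modifications."""
    neutral = sum(RESIDUE_MASS[aa] for aa in sequence) + WATER
    return (neutral + charge * PROTON) / charge

# De novo tools match such computed masses against measured MS/MS precursors.
print(f"SAMPLER [M+2H]2+ = {peptide_mz('SAMPLER', charge=2):.4f}")
```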

5.
The functioning of even a simple biological system is much more complicated than the sum of its genes, proteins and metabolites. A premise of systems biology is that molecular profiling will facilitate the discovery and characterization of important disease pathways. However, as multiple levels of effector pathway regulation appear to be the norm rather than the exception, a significant challenge presented by high-throughput genomics and proteomics technologies is extracting the biological implications of complex data. Integration of the heterogeneous data generated by diverse global technology platforms is therefore the first challenge in building the foundational databases needed for predictive modelling of cell and tissue responses. Given the apparent difficulty of defining the correspondence between gene expression and protein abundance in the systems measured to date, how do we make sense of these data and design the next experiment? In this review, we highlight current approaches to, and challenges in, the integration and analysis of heterogeneous data sets, focusing on global analyses obtained from high-throughput technologies.
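As a small, hedged illustration of the mRNA-protein correspondence problem raised above, the sketch below rank-correlates matched transcript and protein abundances; the gene identifiers and values are hypothetical toy data, not measurements from any cited study.

```python
# Minimal sketch of one integration step: quantifying how well mRNA levels
# predict protein abundance for the same genes via rank correlation.
from scipy.stats import spearmanr

genes   = ["g1", "g2", "g3", "g4", "g5", "g6"]
mrna    = [12.0, 3.5, 8.1, 0.9, 15.2, 6.7]   # e.g. normalized array signal
protein = [ 9.4, 1.2, 7.8, 2.0, 11.1, 3.3]   # e.g. spectral counts

rho, p = spearmanr(mrna, protein)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
# Modest correlations, as often reported, imply that neither data type
# alone suffices for predictive modelling of cell and tissue responses.
```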

6.

7.
A significant difficulty faced by the pharmaceutical industry is the initial identification and selection of macromolecular targets upon which de novo drug discovery programs can be initiated. A drug target should have several characteristics: known biological function; robust assay systems for in vitro characterization and high-throughput screening; and be specifically modified by and accessible to small molecular weight compounds in vivo. Ion channels have many of these attributes and can be viewed as suitable targets for small molecule drugs. Potassium (K+) ion channels form a large and diverse gene family responsible for critical functions in numerous cell types, tissues and organs. Recent discoveries, facilitated by genomics technologies combined with advanced biophysical characterization methods, have identified novel K+ channels that are involved in important physiologic processes, or mutated in human inherited disease. These findings, coupled with a rapidly growing body of information regarding modulatory channel subunits and high resolution channel structures, are providing the critical information necessary for validation of K+ channels as drug targets.

8.
9.
Microarrays: handling the deluge of data and extracting reliable information   (cited by 13: 0 self-citations, 13 by others)
Application of powerful, high-throughput genomics technologies is becoming more common, and these technologies are evolving at a rapid pace. Genomics facilities are being established in major research institutions to produce inexpensive, customized cDNA microarrays that are accessible to researchers in a broad range of fields. These high-throughput platforms have generated a massive onslaught of data that threatens to overwhelm researchers. Although microarrays show great promise, the technology has not matured to the point of consistently generating robust and reliable data in the average laboratory. This article addresses the handling of this deluge of microarray data and the extraction of reliable information from it. We review the essential elements of data acquisition, processing and analysis, and briefly discuss issues of data quality, validation and storage. Our goal is to point out some of the problems that must be overcome before this promising technology can achieve its full potential.
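For concreteness, the following Python sketch shows one elementary processing step for a two-colour cDNA array of the kind discussed here: background correction, log2 ratio formation and median centring. It is a simplified illustration with invented spot intensities, not any facility's actual pipeline.

```python
# Minimal sketch: background-correct both channels of each spot, form
# log2(Cy5/Cy3) ratios, then median-centre them so that the bulk of genes
# show no change (a common simple normalization assumption).
import math
import statistics

spots = [  # (cy5_fg, cy5_bg, cy3_fg, cy3_bg) per spot, invented values
    (1200, 110, 980, 95),
    (450, 100, 900, 90),
    (5100, 120, 1300, 105),
    (800, 105, 760, 100),
]

ratios = []
for cy5_fg, cy5_bg, cy3_fg, cy3_bg in spots:
    cy5 = max(cy5_fg - cy5_bg, 1)  # background-corrected, floored at 1
    cy3 = max(cy3_fg - cy3_bg, 1)
    ratios.append(math.log2(cy5 / cy3))

centre = statistics.median(ratios)
normalized = [r - centre for r in ratios]
print([f"{r:+.2f}" for r in normalized])
```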

10.
Recent advances in genomics and proteomics have shifted the emphasis from hypothesis-based to discovery-based investigations. Genomic and proteomic studies based on differential expression microarrays or comparative proteomics often yield many candidates for functionally important roles in normal and diseased cells. High-throughput technologies that address protein and gene function in situ are still needed to exploit these advances in gene and protein discovery and to validate the identified targets. The pharmaceutical industry is particularly interested in target validation, and has identified it as the critical early step in drug discovery. An especially powerful approach to target validation is a direct protein knockdown strategy called chromophore-assisted laser inactivation (CALI), a means of testing the role of specific proteins in particular cellular processes. Recent developments allow CALI to be applied in high throughput to many proteins in tandem. Thus, CALI may have applications in high-throughput hypothesis testing, target validation and proteome-wide screening.

11.
One of the major challenges facing the pharmaceutical industry today is finding new ways to increase productivity and decrease costs while still developing new therapies that enhance human health. To help address these challenges, analytical technologies and high-throughput automated platforms have been employed to perform more experiments in a shorter time frame with increased data quality. One of the main in vitro techniques for assessing new chemical entities in a discovery setting has been the use of recombinant liver enzymes, microsomes and hepatocytes. These techniques help predict the in vivo metabolism and clearance of new compounds, and their potential for drug–drug interactions, mediated by the cytochrome P450s (the major drug-metabolising enzymes). This in vitro methodology has been transformed in recent times by automated liquid handling and HPLC tandem mass spectrometry detection (LC–MS/MS). This review looks at recent advances in the methodology used to investigate drug metabolism by cytochrome P450s, including an up-to-date summary of high-throughput platforms that use automation and LC–MS/MS to deliver greater throughput, chromatographic resolution and data quality.
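A worked example of the calculation these microsomal stability assays feed: the sketch below converts a hypothetical substrate-depletion time course into a half-life and an intrinsic clearance (CLint) using the standard first-order depletion formula. The time points, peak areas, incubation volume and protein amount are all assumed values.

```python
# Minimal sketch: fit ln(% substrate remaining) vs time, convert the slope
# to a depletion half-life, then to intrinsic clearance.
import math
import numpy as np

time_min = np.array([0, 5, 10, 20, 30])          # sampling times (min)
pct_remaining = np.array([100, 78, 62, 38, 24])  # LC-MS/MS peak areas, % of t0

slope, _ = np.polyfit(time_min, np.log(pct_remaining), 1)
k = -slope                    # first-order depletion rate constant (1/min)
t_half = math.log(2) / k      # half-life (min)

# CLint (uL/min/mg) = k * incubation volume / microsomal protein amount
incubation_ul, protein_mg = 500, 0.25
cl_int = k * incubation_ul / protein_mg
print(f"t1/2 = {t_half:.1f} min, CLint = {cl_int:.0f} uL/min/mg protein")
```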

12.
The last fifteen years have witnessed a major strategic shift in drug discovery away from an empiric approach based on incremental improvements of proven therapies, to a more theoretical, target-based approach. This arose as a consequence of three technical advances: (1) generation and interpretation of genome sequences, which facilitated identification and characterization of potential drug targets; (2) efficient production of candidate ligands for these putative targets through combinatorial chemistry or generation of monoclonal antibodies; and (3) high-throughput screening for rapid evaluation of interactions of these putative ligands with the selected targets. The basic idea underlying all three of these technologies is in keeping with Marshall Nirenberg’s dictum that science progresses best when there are simple assays capable of generating large data sets rapidly. Furthermore, practical implementation of target-based drug discovery was enabled directly by technologies that either were originated or nurtured by Marshall, his post-docs and fellows. Chief among these was the genetic code. Also important was adoption of clonal cell lines for pharmacological investigations, as well as the use of hybridomas to generate molecular probes that allowed physical purchase on signaling elements that had previously been only hypothetical constructs. Always the pure scientist, Marshall’s contributions nevertheless enabled fruitful applications in the pharmaceutical industry, several of them by his trainees. Both the successes and the shortcomings of target-based drug discovery are worthy of consideration, as are its implications for the choices of therapeutic goals and modalities by the pharmaceutical industry.

13.
DNA microarray technologies have evolved rapidly to become a key high-throughput technology for the simultaneous measurement of the relative expression levels of thousands of individual genes. However, despite the widespread adoption of DNA microarray technology, there remains considerable uncertainty and scepticism regarding data obtained using these technologies. Comparing results from seemingly identical experiments from different laboratories or even from different days can prove challenging; these challenges increase further when data from different array platforms need to be compared. To comply with emerging regulations, the quality of the data generated from array experiments needs to be clearly demonstrated. This review describes several initiatives that aim to improve confidence in data generated by array experiments, including initiatives to develop standards for data reporting and storage, external spike-in controls, quality control procedures, best practice guidelines, and quality metrics.

14.
Patel RK, Jain M. PLoS ONE. 2012;7(2):e30619
Next-generation sequencing (NGS) technologies provide a high-throughput means of generating large amounts of sequence data. However, quality control (QC) of the sequence data generated by these technologies is extremely important for meaningful downstream analysis, and highly efficient, fast processing tools are required to handle such large datasets. Here, we have developed an application, NGS QC Toolkit, for quality checking and filtering of high-quality sequence data. The toolkit is a standalone, open-source application freely available at http://www.nipgr.res.in/ngsqctoolkit.html. All of its tools are implemented in the Perl programming language. The toolkit comprises user-friendly tools for QC of sequencing data generated on the Roche 454 and Illumina platforms, together with additional tools to aid QC (a sequence format converter and trimming tools) and analysis (statistics tools). A variety of options are provided so that QC can be run with user-defined parameters. The toolkit is expected to be very useful for QC of NGS data and to facilitate better downstream analysis.
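The toolkit itself is written in Perl and its exact command-line options are not reproduced here; the Python sketch below only illustrates the core filtering idea such tools implement, retaining reads whose mean Phred quality (Phred+33 encoding, as in Illumina 1.8+) meets a cutoff. The file name and cutoff are assumptions.

```python
# Minimal sketch of quality filtering of a FASTQ file by mean read quality.
def mean_phred(quality_line: str) -> float:
    """Mean Phred score of a read, assuming Phred+33 (Sanger) encoding."""
    return sum(ord(c) - 33 for c in quality_line) / len(quality_line)

def filter_fastq(path: str, min_quality: float = 20.0):
    """Yield 4-line FASTQ records whose mean quality passes the cutoff."""
    with open(path) as fh:
        while True:
            record = [fh.readline().rstrip("\n") for _ in range(4)]
            if not record[0]:          # end of file
                break
            if mean_phred(record[3]) >= min_quality:
                yield record           # header, sequence, '+', qualities

# Hypothetical input file; real pipelines would also trim adaptors and ends.
for header, seq, _, qual in filter_fastq("reads.fastq"):
    print(header, len(seq))
```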

15.
16.
Chiral alcohols are key chiral building blocks for the synthesis of many optically active pharmaceuticals, agrochemicals and other fine chemicals. Biocatalytic reduction of carbonyl compounds can, in theory, achieve 100% conversion, proceeds under mild reaction conditions and is environmentally benign, and is therefore widely regarded as a green and efficient route to chiral alcohols. This review summarizes how recent developments in bioinformatics, high-throughput screening and protein engineering have influenced the development of novel, highly efficient biocatalysts, with particular emphasis on progress in developing carbonyl reductases with these techniques.

17.
The use of parallel synthesis techniques with statistical design of experiments (DoE) methods is a powerful combination for the optimization of chemical processes. Advances in parallel synthesis equipment and easy-to-use software for statistical DoE have fueled growing acceptance of these techniques in the pharmaceutical industry. As drug candidate structures become more complex while development timelines are compressed, these enabling technologies promise to become even more important in the future.
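As a minimal illustration of how DoE pairs with parallel synthesis, the sketch below enumerates a two-level full-factorial design over three reaction factors, one row per reactor vessel; the factor names and levels are invented for illustration.

```python
# Minimal sketch: generate a 2-level full-factorial design (2^3 = 8 runs)
# over three hypothetical reaction factors for a parallel synthesis block.
from itertools import product

factors = {
    "temperature_C": (25, 60),
    "equiv_base": (1.0, 2.5),
    "solvent": ("THF", "DMF"),
}

design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run, conditions in enumerate(design, start=1):
    print(f"run {run:02d}: {conditions}")
# Fitting a linear model to the measured yields then estimates each
# factor's main effect and the two-way interactions.
```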

18.
Protein phosphorylation events are key regulators of cellular signaling processes. In the era of functional genomics, rational drug design programs demand large-scale, high-throughput analysis of signal transduction cascades. Significant improvements in mass spectrometry-based proteomics have provided exciting opportunities for rapid progress toward global protein phosphorylation analysis. This review summarizes several recent advances in phosphoproteomics, with emphasis on mass spectrometry instrumentation, enrichment methods and quantification strategies. In the near future, these technologies will provide a tool for quantitative investigation of signal transduction pathways, generating new insights into biological systems.
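To illustrate one quantification strategy of the kind such reviews cover, the following sketch computes median-normalized log2 heavy/light ratios for a few SILAC-labelled phosphopeptides; the peptide sequences and intensities are hypothetical toy data.

```python
# Minimal sketch of SILAC-style quantification: per-phosphopeptide
# heavy/light intensity ratios, median-normalized so that unchanged
# peptides centre on log2 ratio 0.
import math
import statistics

peptides = {  # phosphopeptide -> (light_intensity, heavy_intensity)
    "LIEDNEpYTAR": (2.1e6, 8.9e6),
    "GSHQISLDNPDpYQQDFFPK": (5.0e5, 5.6e5),
    "DRVpYIHPF": (3.3e6, 2.9e6),
}

log_ratios = {p: math.log2(h / l) for p, (l, h) in peptides.items()}
shift = statistics.median(log_ratios.values())
for p, r in log_ratios.items():
    print(f"{p}: log2(H/L) = {r - shift:+.2f}")
```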

19.

20.
In the fifty years since British scientists solved the first protein crystal structure, protein crystallography has passed several milestones and matured into a sophisticated discipline that is now the principal tool of structural biology. In recent years, structural biology has developed rapidly and cross-fertilized with other disciplines, driven in particular by hot fields such as structural genomics. As the fundamental method and technology of structural biology, protein crystallography has extended from solving simple three-dimensional protein structures to determining the structures of all kinds of biological macromolecules and their complexes, with increasing emphasis on the relationship between structure and function, and has spawned highly applied branches such as structure-based drug design. The rapid development of biotechnology and computer technology, and especially the application of high-throughput techniques in biology, has brought entirely new concepts and broader prospects to protein crystallography. This article reviews the historical development of protein crystallography techniques and offers an outlook on the field's future.
