Similar documents
20 similar documents retrieved
1.
Proteomics has rapidly become an important tool for life science research, allowing the integrated analysis of global protein expression from a single experiment. To accommodate the complexity and dynamic nature of any proteome, researchers must use a combination of disparate protein biochemistry techniques, often a highly involved and time-consuming process. Whilst highly sophisticated individual technologies for each step in studying a proteome are available, true high-throughput proteomics that provides a high degree of reproducibility and sensitivity has been difficult to achieve. The development of high-throughput proteomic platforms, encompassing all aspects of proteome analysis and integrated with genomics and bioinformatics technology, therefore represents a crucial step for the advancement of proteomics research. ProteomIQ (Proteome Systems) is the first fully integrated, start-to-finish proteomics platform to enter the market. Sample preparation and tracking, centralized data acquisition and instrument control, and direct interfacing with genomics and bioinformatics databases are combined into a single suite of integrated hardware and software tools, facilitating high reproducibility and rapid turnaround times. This review will highlight some features of ProteomIQ, with particular emphasis on the analysis of proteins separated by 2D polyacrylamide gel electrophoresis.

2.
Proteomics has rapidly become an important tool for life science research, allowing the integrated analysis of global protein expression from a single experiment. To accommodate the complexity and dynamic nature of any proteome, researchers must use a combination of disparate protein biochemistry techniques, often a highly involved and time-consuming process. Whilst highly sophisticated individual technologies for each step in studying a proteome are available, true high-throughput proteomics that provides a high degree of reproducibility and sensitivity has been difficult to achieve. The development of high-throughput proteomic platforms, encompassing all aspects of proteome analysis and integrated with genomics and bioinformatics technology, therefore represents a crucial step for the advancement of proteomics research. ProteomIQ™ (Proteome Systems) is the first fully integrated, start-to-finish proteomics platform to enter the market. Sample preparation and tracking, centralized data acquisition and instrument control, and direct interfacing with genomics and bioinformatics databases are combined into a single suite of integrated hardware and software tools, facilitating high reproducibility and rapid turnaround times. This review will highlight some features of ProteomIQ, with particular emphasis on the analysis of proteins separated by 2D polyacrylamide gel electrophoresis.

3.
High-throughput sequencing studies (HTS) have been highly successful in identifying the genetic causes of human disease, particularly those following Mendelian inheritance. Many HTS studies to date have been performed without utilizing available family relationships between samples. Here, we discuss the many merits and occasional pitfalls of using identity by descent information in conjunction with HTS studies. These methods are not only applicable to family studies but are also useful in cohorts of apparently unrelated, ‘sporadic’ cases and small families underpowered for linkage, and they allow inference of relationships between individuals. Incorporating familial/pedigree information not only provides powerful filtering options for the extensive variant lists that are usually produced by HTS but also allows valuable quality control checks, insights into the genetic model, and the genotypic status of individuals of interest. In particular, these methods are valuable for challenging discovery scenarios in HTS analysis, such as in the study of populations poorly represented in variant databases typically used for filtering, and in the case of poor-quality HTS data.
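The pedigree-aware filtering described above can be illustrated with a minimal sketch. This is not the authors' software: the genotype encoding, sample names and the dominant-model assumption are hypothetical, and the filter simply keeps candidate variants carried by every affected relative.

```python
# Illustrative sketch only (not the authors' pipeline): keep candidate variants
# carried by every affected relative, a simple pedigree-aware filter of the kind
# the review describes. Genotypes are coded 0 = hom ref, 1 = het, 2 = hom alt;
# the sample IDs and the dominant-model assumption are hypothetical.

def shared_by_affected(variants, affected_ids):
    """Return variants for which every affected individual carries the alt allele."""
    kept = []
    for v in variants:
        genotypes = v["genotypes"]
        if all(genotypes.get(sample, 0) >= 1 for sample in affected_ids):
            kept.append(v)
    return kept

# Two affected cousins (II-1, II-3) under a dominant model
candidates = [
    {"id": "chr1:12345 A>T", "genotypes": {"II-1": 1, "II-3": 1, "I-2": 0}},
    {"id": "chr2:67890 C>G", "genotypes": {"II-1": 1, "II-3": 0, "I-2": 0}},
]
print([v["id"] for v in shared_by_affected(candidates, ["II-1", "II-3"])])
# -> ['chr1:12345 A>T']
```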

4.
SUMMARY: Characterizing genetic diversity through genotyping short amplicons is central to evolutionary biology. Next-generation sequencing (NGS) technologies changed the scale at which these types of data are acquired. SESAME is a web application package that assists genotyping of multiplexed individuals for several markers based on NGS amplicon sequencing. It automatically assigns reads to loci and individuals, corrects reads if standard samples are available and provides an intuitive graphical user interface (GUI) for allele validation based on the sequences and associated decision-making tools. The aim of SESAME is to help allele identification among a large number of sequences. AVAILABILITY: SESAME and its documentation are freely available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported Licence for Windows and Linux from http://www1.montpellier.inra.fr/CBGP/NGS/ or http://tinyurl.com/ngs-sesame.

5.
The ability to generate whole genome data is rapidly becoming commoditized. For example, a mammalian-sized genome (~3 Gb) can now be sequenced using approximately ten lanes on an Illumina HiSeq 2000. Since lanes from different runs are often combined, verifying that each lane in a genome's build is from the same sample is an important quality control. We sought to address this issue in a post hoc bioinformatic manner, instead of using upstream sample or "barcode" modifications. We rely on the inherent small differences between any two individuals to show that genotype concordance rates can be effectively used to test if any two lanes of HiSeq 2000 data are from the same sample. As proof of principle, we use recent data from three different human samples generated on this platform. We show that the distributions of concordance rates are non-overlapping when comparing lanes from the same sample versus lanes from different samples. Our method proves to be robust even when different numbers of reads are analyzed. Finally, we provide a straightforward method for determining the gender of any given sample. Our results suggest that examining the concordance of detected genotypes from lanes purported to be from the same sample is a relatively simple approach for confirming that combined lanes of data are of the same identity and quality.
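As a rough illustration of the concordance check described above (not the authors' code), the sketch below compares genotype calls from two lanes at shared sites; the site coordinates, genotype strings and any decision threshold are hypothetical.

```python
# Minimal sketch of a lane-identity check based on genotype concordance.
# calls_a / calls_b map (chromosome, position) -> genotype call for one lane.

def concordance_rate(calls_a, calls_b):
    shared = set(calls_a) & set(calls_b)        # sites genotyped in both lanes
    if not shared:
        return float("nan")
    matches = sum(calls_a[site] == calls_b[site] for site in shared)
    return matches / len(shared)

lane1 = {("chr1", 100): "A/G", ("chr1", 250): "T/T", ("chr2", 40): "C/C"}
lane2 = {("chr1", 100): "A/G", ("chr1", 250): "T/C", ("chr2", 40): "C/C"}

# Same-sample lane pairs are expected to cluster at high concordance and
# different-sample pairs at clearly lower values; the threshold is data-dependent.
print(f"concordance = {concordance_rate(lane1, lane2):.2f}")   # 0.67 in this toy example
```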

6.
Calcineurin is a eukaryotic protein phosphatase important for many signalling and developmental processes in cells. Inhibitors of this enzyme are used clinically and there is interest in identifying novel inhibitors for therapeutic applications. This report describes a high-throughput assay that can be used to screen natural or chemical libraries of compounds to identify new calcineurin inhibitors. The microtitre plate assay is based on a yeast reporter strain and was validated with known inhibitors and tested in a pilot screen of bacterial extracts.

7.
8.
The traditional paradigm for studying the magical number is questioned and a new approach is sought in order to obtain a better conceptual understanding of this phenomenon. Building on earlier work, a theory is proposed whereby the results of an absolute identification experiment can be characterized by a single parameter to a reasonable approximation. This parameter is the variance in the subject's response to a sensory input. By reducing the magical number to a single parameter, we see that the value of the upper limit in information transmission depends not so much on the absolute magnitude of the response error, but actually on how fast this error grows with range. The theory also predicts a little-known characteristic of the magical number. If this prediction can be demonstrated experimentally, we shall need to reinterpret the magical number.

9.
In this study, a genomic library subdivided into fractions was rapidly screened by a Southern detection technique. Deletion libraries were obtained from recovered genomic clones by single random cuts with nuclease S1. These deletion libraries proved useful for localizing genes in the inserts and yielded, after size fractionation, nested deletions suitable for nucleotide sequencing. A heterologous vector (pDB21) carried the insert used as probe for all hybridizations involved in the isolation and characterization of genomic clones.

10.

Background

Analysis of targeted amplicon sequencing data presents some unique challenges in comparison to the analysis of random fragment sequencing data. Whereas reads from randomly fragmented DNA have arbitrary start positions, the reads from amplicon sequencing have fixed start positions that coincide with the amplicon boundaries. As a result, any variants near the amplicon boundaries can cause misalignments of multiple reads that can ultimately lead to false-positive or false-negative variant calls.

Results

We show that amplicon boundaries are variant calling blind spots where the variant calls are highly inaccurate. We propose that an effective strategy to avoid these blind spots is to incorporate the primer bases both when obtaining read alignments and when post-processing the alignments, thereby effectively moving these blind spots into the primer binding regions (which are not used for variant calling). Targeted sequencing data analysis pipelines can provide better variant calling accuracy when primer bases are retained and sequenced.

Conclusions

Read bases beyond the variant site are necessary for analysis of amplicon sequencing data. Enzymatic primer digestion, if used in the target enrichment process, should leave at least a few primer bases to ensure that these bases are available during data analysis. The primer bases should only be removed immediately before the variant calling step to ensure that the variants can be called irrespective of where they occur within the amplicon insert region.

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-1073) contains supplementary material, which is available to authorized users.
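As a minimal illustration of the recommendation in the Conclusions (a sketch under assumed coordinates, not the paper's pipeline), the snippet below keeps primer bases through alignment and masks them only at the variant calling stage, so bases in primer binding regions contribute to alignment but not to variant evidence.

```python
# Hypothetical amplicon coordinates; positions are 0-based, intervals half-open.

def mask_primer_bases(read_start, read_seq, primer_intervals):
    """Replace read bases that fall inside primer regions with 'N'.

    The alignment itself is left untouched; masking is applied only just
    before variant calling, as recommended above.
    """
    masked = list(read_seq)
    for p_start, p_end in primer_intervals:
        overlap_start = max(read_start, p_start)
        overlap_end = min(read_start + len(read_seq), p_end)
        for pos in range(overlap_start, overlap_end):
            masked[pos - read_start] = "N"
    return "".join(masked)

# Amplicon spanning 1000-1120 with 20 bp primers at both ends
primers = [(1000, 1020), (1100, 1120)]
read = "ACGT" * 10                               # a 40 bp read starting at position 1005
print(mask_primer_bases(1005, read, primers))    # the first 15 bases become 'N'
```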

11.
Computational analysis of nucleic acid and amino acid sequences is placing increasing demands on computer resources. The use of prime numbers is explored as a convenient means of improving program speed and reducing storage requirements. It is concluded that the application of the prime number approach leads to significant increases in speed and some reduction in storage requirements.
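One way primes can be applied to sequence data, given purely as an illustration and not necessarily the scheme used in the paper, is to map each residue to a prime so that a window's base composition collapses into a single integer: equal composition becomes integer equality, and "contains at least this composition" becomes divisibility.

```python
# Illustrative only: prime-number encoding of nucleotide composition.
PRIMES = {"A": 2, "C": 3, "G": 5, "T": 7}

def composition_code(seq):
    """Encode the base composition of seq as a product of primes (order-independent)."""
    code = 1
    for base in seq:
        code *= PRIMES[base]
    return code

print(composition_code("ACGT") == composition_code("TGCA"))       # True: same composition
print(composition_code("ACGTA") % composition_code("AAC") == 0)   # True: two A's and a C present
print(composition_code("ACGTA") % composition_code("ACC") == 0)   # False: only one C in ACGTA
```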

12.
A high-throughput screening approach was used to identify new inhibitors of the metallo-protease lethal factor (LF) from Bacillus anthracis. A library of approximately 14,000 compounds was screened using a fluorescence-based in vitro assay and hits were further characterized enzymatically via measurements of IC50 and Ki values against a small panel of metallo-proteases. This study led to the identification of new scaffolds that inhibit LF and the Botulinum Neurotoxin Type A in the low micromolar range, while sparing the human metallo-proteases MMP-2 and MMP-9. Therefore, these scaffolds could be further exploited for the development of potent and selective anti-toxin agents.
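The abstract reports both IC50 and Ki values but does not state how one relates to the other; for a competitive inhibitor assayed at substrate concentration [S], the Cheng–Prusoff relation is the conversion most commonly used (stated here as background, not as the authors' method):

$$K_i = \frac{\mathrm{IC}_{50}}{1 + [S]/K_m}$$

where $K_m$ is the Michaelis constant of the substrate in the assay.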

13.
14.
Genome-wide association studies (GWAS) have been successful in identifying common genetic variation reproducibly associated with disease. However, most associated variants confer very small risk, and after meta-analysis of large cohorts a large fraction of expected heritability still remains unexplained. A possible explanation is that rare variants currently undetected by GWAS with SNP arrays could contribute a large fraction of risk when present in cases. This concept has spurred great interest in exploring the role of rare variants in disease. As the cost of sequencing continues to plummet, it is becoming feasible to directly sequence case-control samples for testing disease association including rare variants. We have developed a test statistic that allows for association testing among cases and controls using data directly from sequencing reads. In addition, our method allows for random errors in reads. We determine the probability of a true genotype call based on the observed read bases using the expectation-maximization algorithm. We apply the SumStat procedure to obtain a single statistic for a group of multiple rare variant loci. We document the validity of our method through simulations. Our results suggest that our statistic maintains the correct type I error rate, even in the presence of differential misclassification for sequence reads, and that it has good power under a number of scenarios. Finally, our SumStat results show power at least as good as the maximum single locus results.
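A minimal sketch of the kind of per-site calculation described above (not the authors' code): posterior genotype probabilities are computed from observed reference/alternate read counts while allowing for a per-base error rate. The error rate, population allele frequency and Hardy–Weinberg priors used here are illustrative inputs; the paper estimates such quantities with an expectation-maximization step before forming its SumStat.

```python
from math import comb

def genotype_posteriors(n_ref, n_alt, q, e=0.01):
    """P(genotype | read counts) for genotypes RR, RA, AA.

    q is the population alt-allele frequency (used as an HWE prior);
    e is the per-base sequencing error rate.
    """
    n = n_ref + n_alt
    p_alt_given_g = {"RR": e, "RA": 0.5, "AA": 1.0 - e}   # chance a read shows the alt base
    priors = {"RR": (1 - q) ** 2, "RA": 2 * q * (1 - q), "AA": q ** 2}
    joint = {}
    for g, p in p_alt_given_g.items():
        likelihood = comb(n, n_alt) * p ** n_alt * (1 - p) ** (n - n_alt)
        joint[g] = priors[g] * likelihood
    total = sum(joint.values())
    return {g: v / total for g, v in joint.items()}

# 12 reads at a site, 5 supporting the alternate allele, alt-allele frequency 0.1
print(genotype_posteriors(n_ref=7, n_alt=5, q=0.1))
```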

15.
High-throughput genomic mutation screening for primary tumors has characteristically been expensive, labor-intensive, and inadequate to detect low levels of mutation in a background of wild-type signal. We present a new, combined PCR and colorimetric approach that is inexpensive, simple, and can detect the presence of 1% mutation in a background of wild-type. We compared manual dideoxy sequencing of p53 for eight lung cancer samples to a novel assay combining a primer extension step and an enzymatic colorimetric step in a 96-well plate with covalently attached oligonucleotide sequences. For every sample, we were able to detect the presence or absence of the specific mutation with a statistically significant difference between the sample optical density (OD) and the background OD, with a sensitivity and specificity of 100%. This assay is straightforward, accurate, inexpensive, and allows for rapid, high-throughput analysis of samples, making it ideal for genomic mutation or polymorphism screening studies in both clinical and research settings.

16.
17.
Green fluorescent protein (GFP) is a reporter that has had a significant impact due to its many advantages over other reporter genes: it is autofluorescent, it enables in situ detection, it is relatively small, and it is also nontoxic. By cloning a gene promoter upstream of the gfp gene and exposing the living cells transformed with the fusion to the specific inducer or repressor, gene expression can be monitored in real time by continuous quantitative measurement of the green fluorescence emitted by GFP. In this work, a promoter study using promoter-gfp fusions was conducted in 96-well plates. Because they were placed in an automated incubating and shaking microplate reader, the wells functioned as microscale bioreactors, allowing for parallel experiments and data analysis. In the study described here, an overexpression promoter (the pBAD promoter) and two comparatively weak promoters (sodA and acnA in the Escherichia coli SoxRS regulon) were studied in both endpoint and kinetics formats. Our results with the pBAD promoter revealed insight into its regulation, which is tightly controlled by levels of arabinose and glucose. Results on weak oxidative stress promoters (for the sodA and acnA genes) were striking in that significant induction was observed when they were under superoxide stress in plates. They both displayed dose-dependent induction to paraquat-generated superoxide anion, with sodA leading acnA in strength and time. These results, spanning highly inducible promoters for protein overexpression and weakly inducible promoters of metabolic interest, demonstrate that the approach is relatively easily executed and can be used for quantitative and temporal promoter studies in a high-throughput format.

18.
19.
Abstract

Membrane proteins are intrinsically involved in both human and pathogen physiology, and are the target of 60% of all marketed drugs. During the past decade, advances in the study of membrane proteins using X-ray crystallography, electron microscopy and NMR-based techniques led to the elucidation of over 250 unique membrane protein crystal structures. The aim of the European Drug Initiative for Channels and Transporters (EDICT) project is to use the structures of clinically significant membrane proteins for the development of lead molecules. One of the approaches used to achieve this is a virtual high-throughput screening (vHTS) technique initially developed for soluble proteins. This paper describes application of this technique to the discovery of inhibitors of the leucine transporter (LeuT), a member of the neurotransmitter:sodium symporter (NSS) family.

20.
A complete mitochondrial (mt) genome sequence was reconstructed from a 38,000-year-old Neandertal individual with 8341 mtDNA sequences identified among 4.8 Gb of DNA generated from approximately 0.3 g of bone. Analysis of the assembled sequence unequivocally establishes that the Neandertal mtDNA falls outside the variation of extant human mtDNAs, and allows an estimate of the divergence date between the two mtDNA lineages of 660,000 ± 140,000 years. Of the 13 proteins encoded in the mtDNA, subunit 2 of cytochrome c oxidase of the mitochondrial electron transport chain has experienced the largest number of amino acid substitutions in human ancestors since the separation from Neandertals. There is evidence that purifying selection in the Neandertal mtDNA was reduced compared with other primate lineages, suggesting that the effective population size of Neandertals was small.
