Similar literature
20 similar articles found.
1.
2.
Next-generation sequencing (NGS) technology is revolutionizing the fields of population genetics, molecular ecology and conservation biology, but it can be challenging for researchers to learn the new and rapidly evolving techniques required to use NGS data. A recent workshop entitled ‘Population Genomic Data Analysis’ was held to provide training in conceptual and practical aspects of data production and analysis for population genomics, with an emphasis on NGS data analysis. The workshop brought together 16 instructors who were experts in the field of population genomics and 31 student participants. Instructors provided helpful and often entertaining advice on how to choose and use an NGS method for a given research question, and on critical aspects of NGS data production and analysis such as library preparation, filtering to remove sequencing errors and outlier loci, and genotype calling. In addition, instructors offered general advice on how to approach population genomics data analysis and how to build a career in science. The overarching messages of the workshop were that NGS data analysis should be approached with a keen understanding of the theoretical models underlying the analyses, and with analyses tailored to each research question and project. When analysed carefully, NGS data provide extremely powerful tools for answering crucial questions in disciplines ranging from evolution and ecology to conservation and agriculture, including questions that could not be answered before the development of NGS technology.

3.
Single-stranded genomic DNA of recombinant M13 phages was tested as an antisense molecule and examined for its usefulness in high-throughput functional genomics. cDNA fragments of various genes (TNF-alpha, c-myc, c-myb, cdk2 and cdk4) were independently cloned into phagemid vectors. Using the life cycle of M13 bacteriophages, large circular (LC) molecules, antisense to their respective genes, were prepared from the culture supernatant of bacterial transformants. LC-antisense molecules exhibited enhanced stability and target specificity, and eliminated the need for target-site searches. High-throughput functional genomics was then attempted with an LC-antisense library generated using a phagemid vector that incorporated a unidirectional subtracted cDNA library derived from liver cancer tissue. We identified 56 genes involved in the growth of these cells. These results indicate that an antisense sequence carried as part of the single-stranded LC genomic DNA of recombinant M13 phages exhibits effective antisense activity and may have potential for high-throughput functional genomics.

4.
5.
6.
The sequence infrastructure that has arisen through large-scale genomic projects dedicated to protein analysis has provided a wealth of information and brought together scientists and institutions from all over the world. As a consequence, the development of novel technologies and methodologies in proteomics research is helping to unravel the biochemical and physiological mechanisms of complex multivariate diseases at both the functional and molecular levels. In the late sixties, when X-ray crystallography had just been established, the idea of determining protein structure on an almost universal basis seemed an impossible dream. Yet only forty years later, automated protein structure determination platforms have been established. The widespread use of robotics in protein crystallography has had a huge impact at every stage of the pipeline, from protein cloning, over-expression, purification and crystallization to data collection, structure solution, refinement, validation and data management, all of which have become more or less automated, with minimal human intervention necessary. Here, recent advances in protein crystal structure analysis in the context of structural genomics are discussed. In addition, this review aims to give an overview of recent developments in high-throughput instrumentation, and in technologies and strategies to accelerate protein structure/function analysis.

7.
Cryopreservation of fish sperm has been studied for decades at a laboratory (research) scale. However, high-throughput cryopreservation of fish sperm has recently been developed to enable industrial-scale production. This study treated high-throughput cryopreservation of blue catfish (Ictalurus furcatus) sperm as a manufacturing production line and initiated development of a quality assurance plan. The main objectives were to identify: (1) the main production quality characteristics; (2) the process features for quality assurance; (3) the internal quality characteristics and their specification designs; (4) the quality control and process capability evaluation methods, and (5) directions for further improvements and applications. The essential product quality characteristics were identified as fertility-related characteristics. Specification design, which establishes tolerance levels according to demand and process constraints, was performed based on these quality characteristics. Meanwhile, to ensure integrity throughout the process, internal quality characteristics (characteristics at each quality control point within the process) that could affect fertility-related quality characteristics were defined with specifications. Because the process features 100% inspection (quality inspection of every fish), cumulative sum (CUSUM) control charts were applied to monitor each quality characteristic. An index of overall process evaluation, process capability, was analyzed based on the in-control process and the designed specifications, further integrating the quality assurance plan. With the established quality assurance plan, the process could operate stably and product quality would be reliable.
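The CUSUM monitoring described above can be illustrated with a minimal sketch. This is a generic one-sided tabular CUSUM, not the study's actual implementation; the target value, slack constant `k`, decision interval `h`, and the motility readings are illustrative assumptions.

```python
# Hedged sketch: one-sided tabular CUSUM monitoring of a quality
# characteristic (e.g. post-thaw motility, in percent). Target, slack (k)
# and decision interval (h) are illustrative, not values from the study.

def cusum(samples, target, k, h):
    """Return (upper_sums, lower_sums, indices where the chart signals)."""
    hi, lo = 0.0, 0.0
    his, los, alarms = [], [], []
    for i, x in enumerate(samples):
        hi = max(0.0, hi + (x - target) - k)   # accumulates drift above target
        lo = max(0.0, lo + (target - x) - k)   # accumulates drift below target
        his.append(hi)
        los.append(lo)
        if hi > h or lo > h:
            alarms.append(i)
    return his, los, alarms

# Example: readings drifting downward from sample index 5 onward.
readings = [42, 41, 43, 40, 42, 36, 35, 34, 33, 32]
hi_sums, lo_sums, alarms = cusum(readings, target=41.0, k=1.0, h=5.0)
print(alarms)  # indices where the process signals out of control
```

The chart stays quiet through the in-control readings and signals shortly after the downward shift begins, which is the behaviour that makes CUSUM charts suited to detecting small sustained drifts.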

8.
9.

Background

The basis for correctly assessing the burden of parasitic infections and the effects of interventions rests on a somewhat shaky foundation as long as we do not know how reliable the reported laboratory findings are. Virtual microscopy, successfully introduced as a histopathology tool, has therefore been adapted for medical parasitology.

Methodology/Principal Findings

Specimens containing parasites in tissues, stools, and blood have been digitized and made accessible as a “webmicroscope for parasitology” (WMP) on the Internet (http://www.webmicroscope.net/parasitology). These digitized specimens can be viewed (“navigated” in both the x and y axes) at the desired magnification by an unrestricted number of individuals simultaneously. For virtual microscopy of specimens containing stool parasites, it was necessary to develop the technique further to enable navigation in the z plane (i.e., “focusing”). Specimens were therefore scanned and photographed in two or more focal planes. The resulting digitized specimens consist of stacks of laterally “stitched” individual images covering the entire area of the sample, photographed at high magnification. The digitized image information (∼10 GB of uncompressed data per specimen) is accessible at data transfer speeds of 2 to 10 Mb/s via a network of five image servers located in different parts of Europe. Image streaming and rapid data transfer to an ordinary personal computer make web-based virtual microscopy similar to conventional microscopy.

Conclusion/Significance

The potential of this novel technique to share identical parasitological specimens means that we can provide a “gold standard”, which can overcome several problems encountered in quality control of diagnostic parasitology. Thus, the WMP may improve the reliability of the data that constitute the basis for our understanding of the vast problem of neglected tropical diseases. The WMP can also be used in the absence of a fast Internet connection: an ordinary PC, or even a laptop, may function as a local image server, e.g., in health centers in tropical endemic areas.

10.
11.
P R Ashton, Acta Cytologica, 1989, 33(4):451-454
The data produced by membership surveys undertaken by the American Society for Cytotechnology that pertain to the quality of Papanicolaou smear diagnosis are reviewed. The results documented parameters in areas such as workload, quality control, health and safety, continuing education, productivity measurement, reporting systems and salaries, and helped to pinpoint problems in these areas. Some of these problems are being addressed through cooperative efforts with other groups concerned with improving the diagnostic accuracy of cytopathology.

12.
Protocols for the assurance of microarray data quality and process control
Microarrays represent a powerful technology that provides the ability to simultaneously measure the expression of thousands of genes. However, it is a multi-step process with numerous potential sources of variation that can compromise data analysis and interpretation if left uncontrolled, necessitating the development of quality control protocols to ensure assay consistency and high-quality data. In response to emerging standards, such as the Minimum Information About a Microarray Experiment (MIAME) standard, tools are required to ascertain the quality and reproducibility of results within and across studies. To this end, an intralaboratory quality control protocol for two-color spotted microarrays was developed using cDNA microarrays from in vivo and in vitro dose-response and time-course studies. The protocol combines (i) diagnostic plots monitoring the degree of feature saturation, global feature and background intensities, and feature misalignments with (ii) plots monitoring the intensity distributions within arrays and (iii) a support vector machine (SVM) model. The protocol is applicable to any laboratory with sufficient datasets to establish historical high- and low-quality data.
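The per-array diagnostics that feed such plots can be sketched in a few lines. This is a generic illustration, not the published protocol: the 16-bit saturation ceiling and the toy intensity values are assumptions.

```python
# Hedged sketch: per-array diagnostic summaries of the kind a microarray QC
# protocol might plot -- fraction of saturated features plus global
# foreground and background intensity medians. The 16-bit saturation level
# and the toy intensities are illustrative assumptions.

def _median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def array_diagnostics(foreground, background, saturation_level=65535):
    """Summarize one array's feature (foreground) and background intensities."""
    frac_saturated = sum(1 for v in foreground if v >= saturation_level) / len(foreground)
    return {
        "fraction_saturated": frac_saturated,
        "median_foreground": _median(foreground),
        "median_background": _median(background),
    }

fg = [1200, 3400, 65535, 810, 22000, 65535, 540, 9100]   # feature intensities
bg = [90, 110, 85, 120, 95, 100, 105, 98]                # local background
d = array_diagnostics(fg, bg)
print(d["fraction_saturated"])  # 0.25
```

Tracking these summaries across arrays is what makes historical high- and low-quality data usable as a reference for flagging outlier hybridizations.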

13.
KEGGanim: pathway animations for high-throughput data
MOTIVATION: Gene expression analysis with microarrays has become one of the most widely used high-throughput methods for gathering genome-wide functional data. Emerging omics fields such as proteomics and interactomics introduce new information sources. With the rise of systems biology, researchers increasingly need to consider entire complex pathways rather than individual genes and related processes in isolation. Bioinformatics methods are needed to link the existing knowledge about pathways with the growing amounts of experimental data. RESULTS: We present KEGGanim, a novel web-based tool for visualizing experimental data in biological pathways. KEGGanim produces animations and images of KEGG pathways using public or user-uploaded high-throughput data. Pathway members are coloured according to experimental measurements and animated over experimental conditions. KEGGanim visualization highlights dynamic changes across conditions and allows the user to observe important modules and key genes that influence the pathway. The simple user interface of KEGGanim provides options for filtering genes and experimental conditions. KEGGanim may be used with public or private data for 14 organisms, with a large collection of public microarray data readily available. Most common gene and protein identifiers and microarray probe sets are accepted as visualization input. AVAILABILITY: http://biit.cs.ut.ee/KEGGanim/.
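The core of colouring pathway members by measurement can be sketched generically. This is not KEGGanim's actual colour scheme: the blue-to-red endpoints and the log-ratio clamping range are illustrative assumptions.

```python
# Hedged sketch: mapping expression measurements to node colours the way a
# pathway-animation tool might, interpolating blue (low) -> red (high).
# The colour endpoints and the [-2, 2] log-ratio range are illustrative.

def expression_to_hex(value, vmin=-2.0, vmax=2.0):
    """Clamp a log-ratio into [vmin, vmax] and interpolate blue -> red."""
    t = (min(max(value, vmin), vmax) - vmin) / (vmax - vmin)
    red = round(255 * t)
    blue = round(255 * (1 - t))
    return f"#{red:02x}00{blue:02x}"

# One gene "animated" over three experimental conditions:
frames = [expression_to_hex(v) for v in (-2.0, 0.0, 2.0)]
print(frames)
```

Rendering one such colour per condition, frame by frame, is what turns a static pathway diagram into an animation over experimental conditions.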

14.
Combinatorial chemistry and high-throughput screening have become standard tools for discovering new drug candidates with suitable pharmacological properties. Now, those same technologies are starting to be applied to the problem of discovering novel in vivo imaging agents. Important differences in the biological and pharmacological properties needed for an imaging agent, compared with those for a therapeutic agent, require new screening methods that emphasize the characteristics, such as optimized residence time and tissue specificity, that make a good imaging agent candidate.

15.
MOTIVATION: The presentation of genomics data in a perspicuous visual format is critical for its rapid interpretation and validation. Relatively few public database developers have the resources to implement sophisticated front-end user interfaces themselves. Accordingly, these developers would benefit from a reusable toolkit of user-interface and data-visualization components. RESULTS: We have designed the bioWidget toolkit as a set of JavaBean components. It includes a wide array of user-interface components and defines an architecture for assembling applications. The toolkit is founded on established software-engineering design patterns and principles, including componentry, Model-View-Controller, factored models and schema neutrality. As a proof of concept, we have used the bioWidget toolkit to create three extensible applications: AnnotView, BlastView and AlignView.

16.
High-throughput screening (HTS) is used in modern drug discovery to screen hundreds of thousands to millions of compounds against selected protein targets. It is an industrial-scale process relying on sophisticated automation and state-of-the-art detection technologies. Quality control (QC) is an integral part of the process and is used to ensure good-quality data and minimize assay variability while maintaining assay sensitivity. The authors describe new QC methods and show numerous real examples from their biologist-friendly Stat Server HTS application, a custom-developed software tool built from the commercially available S-PLUS and Stat Server statistical analysis and server software. This system remotely processes HTS data using powerful and sophisticated statistical methodology but insulates users from the technical details by outputting results in a variety of readily interpretable graphs and tables. It allows users to visualize HTS data and examine assay performance during the HTS campaign to quickly react to, or avoid, quality problems.
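One widely used plate-quality statistic in HTS is the Z'-factor, which quantifies the separation between positive- and negative-control wells; it is offered here as a generic example of the kind of QC metric such a system reports, not as the specific method implemented in the Stat Server application. The control-well values are illustrative.

```python
# Hedged sketch: the Z'-factor, a common HTS plate-quality statistic (not
# necessarily the one in the Stat Server application). Values above ~0.5
# are conventionally taken to indicate an excellent assay window.

from statistics import mean, stdev

def z_prime(pos_controls, neg_controls):
    """Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    mp, sp = mean(pos_controls), stdev(pos_controls)
    mn, sn = mean(neg_controls), stdev(neg_controls)
    return 1 - 3 * (sp + sn) / abs(mp - mn)

pos = [98, 101, 99, 102, 100]   # high-signal control wells (illustrative)
neg = [10, 12, 9, 11, 8]        # low-signal control wells (illustrative)
print(round(z_prime(pos, neg), 3))
```

Computing this per plate across a campaign gives a simple, readily plotted indicator for spotting plates whose assay window has collapsed.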

17.
Several subsampling-based normalization strategies were applied to different high-throughput sequencing data sets originating from human and murine gut environments. Their effects on the data sets' characteristics and on normalization efficiency, as measured by several β-diversity metrics, were compared. For both data sets, subsampling to the median rather than the minimum number of sequences appeared to improve the analysis.
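The subsampling (rarefying) step itself can be sketched for a single sample. This is a generic illustration: the taxon labels, counts, and target depth are assumptions, and implementations differ in how they handle samples whose depth is below the target.

```python
# Hedged sketch: rarefying one per-taxon count vector to a target sequencing
# depth by drawing reads without replacement. Taxa, counts and depth are
# illustrative; handling of samples below the target depth varies between
# implementations.

import random

def subsample_counts(counts, depth, seed=0):
    """Draw `depth` reads without replacement from a taxon -> count mapping."""
    pool = [taxon for taxon, n in counts.items() for _ in range(n)]
    rng = random.Random(seed)
    out = {taxon: 0 for taxon in counts}
    for taxon in rng.sample(pool, depth):
        out[taxon] += 1
    return out

sample = {"Bacteroides": 500, "Firmicutes": 300, "Proteobacteria": 200}
rarefied = subsample_counts(sample, depth=100)
print(sum(rarefied.values()))  # 100
```

After rarefying every sample to the same depth, β-diversity metrics compare community composition without library-size differences dominating the distances.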

18.
A novel and simple methodology for detecting phosphinothricin produced by a biosynthetic approach in 96-well microtiter plates was developed, based on fluorometric determination of an isoindole derivative. The assay to determine the generation of L-phosphinothricin was conducted with the help of a derivatization reaction. The linear detection range of the method was demonstrated to be from 0.74 to 100 μg ml−1 for catalysis by resting cells and from 1.6 to 100 μg ml−1 for catalysis by secretory enzymes. Meanwhile, the relative standard deviation was less than 2.2% and the recovery was 99–101% of the true value. Using the method constructed in this study, one bacterial strain, Kluyvera cryocrescens ZJB-17005, with high stereoselectivity (>99%) and excellent yield (45%) in the biosynthetic production of L-phosphinothricin, was isolated from 13,284 strains.
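The calibration arithmetic behind such a linear detection range can be sketched as follows. The calibration points and detector response are illustrative toy values, not data from the assay above; the sketch only shows how a least-squares fit and a recovery figure are computed.

```python
# Hedged sketch: fitting a linear calibration curve (fluorescence signal vs
# concentration) by ordinary least squares, then back-calculating a spiked
# sample's recovery. All numbers are illustrative, not assay data.

def linear_fit(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [0.74, 5, 25, 50, 100]             # standards, ug/ml (illustrative)
signal = [c * 12.0 + 3.0 for c in conc]   # perfectly linear toy detector
slope, intercept = linear_fit(conc, signal)

measured = (603.0 - intercept) / slope    # back-calculate a 50 ug/ml spike
recovery = 100 * measured / 50.0          # percent of true value
print(round(recovery, 1))
```

A recovery near 100% of the true value, as computed here, is the acceptance criterion the abstract's 99–101% figure refers to.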

19.
20.
As an emerging field, MS-based proteomics still requires software tools for efficiently storing and accessing experimental data. In this work, we focus on the management of LC–MS data, which are typically made available in standard XML-based portable formats. The structures currently employed to manage these data can be highly inefficient, especially when dealing with high-throughput profile data. LC–MS datasets are usually accessed through 2D range queries, and optimizing this type of operation could dramatically reduce the complexity of data analysis. We propose a novel data structure for LC–MS datasets, called mzRTree, which embodies a scalable index based on the R-tree data structure. mzRTree can be efficiently created from the XML-based data formats and is suitable for handling very large datasets. We experimentally show that, on all range queries, mzRTree outperforms other known structures used for LC–MS data, even on the queries those structures are optimized for. Moreover, mzRTree is more space efficient. As a result, mzRTree reduces the computational cost of data analysis for very large profile datasets.
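The 2D range query that such an index accelerates has simple semantics, sketched below as a naive linear scan. The peak tuples and query rectangle are illustrative; the point of an R-tree-based index like mzRTree is to answer the same query without touching every peak.

```python
# Hedged sketch: the 2D range query an LC-MS index accelerates -- select all
# peaks whose retention time and m/z fall inside a rectangle. This linear
# scan shows the query semantics only; an R-tree answers it without
# scanning every peak.

def range_query(peaks, rt_min, rt_max, mz_min, mz_max):
    """peaks: list of (retention_time, mz, intensity) tuples."""
    return [p for p in peaks
            if rt_min <= p[0] <= rt_max and mz_min <= p[1] <= mz_max]

peaks = [
    (12.1, 401.2, 1500.0),
    (12.4, 402.7, 900.0),
    (30.0, 401.5, 2200.0),   # outside the retention-time window
    (12.2, 650.1, 400.0),    # outside the m/z window
]
hits = range_query(peaks, 12.0, 13.0, 400.0, 405.0)
print(len(hits))  # 2
```

On profile datasets with hundreds of millions of peaks, replacing this O(n) scan with an R-tree traversal is what makes interactive analysis feasible.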
