Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
3.

Background  

New "next generation" DNA sequencing technologies offer individual researchers the ability to rapidly generate large amounts of genome sequence data at dramatically reduced costs. As a result, a need has arisen for new software tools for storage, management and analysis of genome sequence data. Although bioinformatic tools are available for the analysis and management of genome sequences, limitations still remain. For example, restrictions on the submission of data and use of these tools may be imposed, thereby making them unsuitable for sequencing projects that need to remain in-house or proprietary during their initial stages. Furthermore, the availability and use of next generation sequencing in industrial, governmental and academic environments requires biologist to have access to computational support for the curation and analysis of the data generated; however, this type of support is not always immediately available.  相似文献   

4.

Background  

Whole exome capture sequencing allows researchers to cost-effectively sequence the coding regions of the genome. Although exome capture sequencing methods have become routine and well established, there is currently a lack of tools specialized for variant calling in this type of data.

5.

Background  

One of the consequences of the rapid and widespread adoption of high-throughput experimental technologies is an exponential increase of the amount of data produced by genome-wide experiments. Researchers increasingly need to handle very large volumes of heterogeneous data, including both the data generated by their own experiments and the data retrieved from publicly available repositories of genomic knowledge. Integration, exploration, manipulation and interpretation of data and information therefore need to become as automated as possible, since their scale and breadth are, in general, beyond the limits of what individual researchers and the basic data management tools in normal use can handle. This paper describes Genephony, a tool we are developing to address these challenges.

6.

Background  

The amount of data on protein-protein interactions (PPIs) available in public databases and in the literature has rapidly expanded in recent years. PPI data can provide useful information for researchers in pharmacology and medicine as well as those in interactome studies. There is an urgent need for a novel methodology or software allowing the efficient utilization of PPI data in pharmacology and medicine.

7.

Background

The Tissue Microarray (TMA) facilitates high-throughput analysis of hundreds of tissue specimens simultaneously. However, bottlenecks in the storage and manipulation of the data generated from TMA reviews have become apparent. A number of software applications have been developed to assist in image and data management; however, no solution currently facilitates the easy online review, scoring and subsequent storage of images and data associated with TMA experimentation.

Results

This paper describes the design, development and validation of the Virtual Tissue Matrix (VTM). Through an intuitive HTML-driven user interface, the VTM provides digital/virtual slide based images of each TMA core and a means to record observations on each TMA spot. Data generated from a TMA review is stored in an associated relational database, which facilitates the use of flexible scoring forms. The system allows multiple users to record their interpretation of each TMA spot for any parameters assessed. Images generated for the VTM were captured using a standard background lighting intensity, and corrective algorithms were applied to each image to eliminate any background lighting hue inconsistencies or vignetting. Validation of the VTM involved examination of inter- and intra-observer variability between microscope and digital TMA reviews. Six bladder TMAs were immunohistochemically stained for E-Cadherin, β-Catenin and PhosphoMet and were assessed by two reviewers for the amount of core and tumour present, and the amount and intensity of membrane, cytoplasmic and nuclear staining.

Conclusion

Results show that digital VTM images are representative of the original tissue viewed with a microscope. There were equivalent levels of inter- and intra-observer agreement for five out of the eight parameters assessed. Results also suggest that digital reviews may correct potential problems experienced when reviewing TMAs using a microscope, for example, removal of background lighting variance and tint, and potential disorientation of the reviewer, which may have resulted in the discrepancies evident in the remaining three parameters.
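The background-lighting correction described for the VTM can be illustrated with a simple flat-field division against a blank reference image. This is a minimal sketch of the general technique, not the VTM's actual algorithm; the function name and the assumption of a per-channel gain derived from a blank background image are illustrative only.

```python
import numpy as np

def flatfield_correct(image, background):
    """Remove lighting hue inconsistencies and vignetting by dividing a
    spot image by a blank background reference captured under the same
    illumination. Both arrays have shape (H, W, 3) with values in [0, 1].
    Hypothetical helper, not part of the VTM."""
    eps = 1e-6  # avoid division by zero in dark background pixels
    # Per-channel gain: bright (unvignetted) regions get gain ~1,
    # darker corners get a proportionally larger gain.
    gain = background.mean(axis=(0, 1)) / (background + eps)
    corrected = image * gain
    return np.clip(corrected, 0.0, 1.0)
```

Dividing by the background and rescaling by its mean preserves overall brightness while flattening any spatial or per-channel illumination bias, so a uniformly coloured specimen appears uniform after correction.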

8.

Background  

A large amount of gene expression data exists in the public domain, generated under a variety of experimental conditions. Unfortunately, these experimental variations have generally prevented researchers from accurately comparing and combining this wealth of data, which still hides many novel insights.

9.

Background  

Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from different heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate the combining of data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies.

10.
Annotation and query of tissue microarray data using the NCI Thesaurus

Background  

The Stanford Tissue Microarray Database (TMAD) is a repository of data serving a consortium of pathologists and biomedical researchers. The tissue samples in TMAD are annotated with multiple free-text fields, specifying the pathological diagnoses for each sample. These text annotations are not structured according to any ontology, making future integration of this resource with other biological and clinical data difficult.

11.

Background  

In research laboratories using DNA-microarrays, usually a number of researchers perform experiments, each generating possible sources of error. There is a need for a quick and robust method to assess data quality and sources of errors in DNA-microarray experiments. To this end, a novel and cost-effective validation scheme was devised, implemented, and employed.

12.

Background  

The increasing complexity of genomic data presents several challenges for biologists. The limited view of such complex data on a computer monitor, together with the dynamic nature of data in the midst of discovery, increases the challenge of integrating experimental results with information resources. The use of the Gene Ontology enables researchers to summarize the results of quantitative analyses in this framework, but the limitations of typical browser presentation restrict data access.

13.

Background  

The volume of data available on genetic variations has increased considerably with the recent development of high-density single-nucleotide polymorphism (SNP) arrays. Several software programs have been developed to assist researchers in the analysis of this huge amount of data, but few can rely upon a whole-genome variability visualisation system that could help data interpretation.

14.

Background  

With the amount of influenza genome sequence data growing rapidly, researchers need machine assistance in selecting datasets and exploring the data. Enhanced visualization tools are required to represent results of the exploratory analysis on the web in an easy-to-comprehend form and to facilitate convenient information retrieval.

15.

Background  

Censored data are increasingly common in many microarray studies that attempt to relate gene expression to patient survival. Several new methods have been proposed in the last two years. Most of these methods, however, are not available to biomedical researchers, leading to many from-scratch re-implementations of ad hoc, suboptimal approaches to survival data.

16.

Background  

For effective exposition of biological information, especially with regard to the analysis of large-scale data types, researchers need immediate access to multiple categorical knowledge bases, with summary information presented on collections of genes rather than the typical one gene at a time.

17.
Data sharing by scientists: practices and perceptions

Background

Scientific research in the 21st century is more data intensive and collaborative than in the past. It is important to study the data practices of researchers – data accessibility, discovery, re-use, preservation and, particularly, data sharing. Data sharing is a valuable part of the scientific method allowing for verification of results and extending research from prior results.

Methodology/Principal Findings

A total of 1329 scientists participated in this survey exploring current data sharing practices and perceptions of the barriers and enablers of data sharing. Scientists do not make their data electronically available to others for various reasons, including insufficient time and lack of funding. Most respondents are satisfied with their current processes for the initial and short-term parts of the data or research lifecycle (collecting their research data; searching for, describing or cataloging, analyzing, and short-term storage of their data) but are not satisfied with long-term data preservation. Many organizations do not provide support to their researchers for data management in either the short or the long term. If certain conditions are met (such as formal citation and sharing reprints), respondents agree they are willing to share their data. There are also significant differences in data management practices and approaches based on primary funding agency, subject discipline, age, work focus, and world region.

Conclusions/Significance

Barriers to effective data sharing and preservation are deeply rooted in the practices and culture of the research process as well as the researchers themselves. New mandates for data management plans from NSF and other federal agencies, and worldwide attention to the need to share and preserve data, could lead to changes. Large-scale programs, such as the NSF-sponsored DataNET (including projects like DataONE), will both bring attention and resources to the issue and make it easier for scientists to apply sound data management principles.

18.

Background  

The Hawaiian red algal flora is diverse, isolated, and well studied from a morphological and anatomical perspective, making it an excellent candidate for assessment using a combination of traditional taxonomic and molecular approaches. Acquiring and making these biodiversity data freely available in a timely manner ensures that other researchers can incorporate these baseline findings into phylogeographic studies of Hawaiian red algae or red algae found in other locations.

19.
20.
High-throughput sequence alignment using Graphics Processing Units

Background  

The recent availability of new, less expensive high-throughput DNA sequencing technologies has yielded a dramatic increase in the volume of sequence data that must be analyzed. These data are being generated for several purposes, including genotyping, genome resequencing, metagenomics, and de novo genome assembly projects. Sequence alignment programs such as MUMmer have proven essential for the analysis of these data, but researchers will need ever-faster, high-throughput alignment tools running on inexpensive hardware to keep up with new sequencing technologies.
