Similar Articles
20 similar articles found (search time: 31 ms).
1.

Background

With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and help with quality control, there is a strong need for a software system that can help with the tracking of samples and capture and management of data at different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols, recording and annotating data from every step of the workflow.

Results

A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high throughput molecular genotyping facility. The application has a modular design and is simple to learn and use. It leads the user through each step of the process, from starting an experiment to storing the output of the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and that all the necessary data are captured. The application keeps track of DNA samples and generated data. Data are entered into the system through forms and file uploads. The LIMS provides functions to trace any genotypic data back to the electrophoresis gel files or the sample source, and to repeat experiments. The LIMS is presently used to capture high throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics.

Conclusion

A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high throughput genotyping laboratory. The application with source code is freely available for academic users and can be downloaded from http://www.icrisat.org/gt-bt/lims/lims.asp.

2.

Background

The coupling of pathways and processes through shared components is increasingly recognised as a common theme in many cell signalling contexts, where it plays highly non-trivial roles.

Results

In this paper we develop a basic modelling and systems framework in a general setting for understanding the coupling of processes and pathways through shared components. Our modelling framework starts with the interaction of two components with a common third component and includes production and degradation of all these components. We analyze the signal processing in our model to elucidate different aspects of the coupling. We show how different kinds of responses, including "ultrasensitive" and adaptive responses, may occur in this setting. We then build on the basic model structure and examine the effects of additional control regulation, switch-like signal processing, and spatial signalling. In the process, we identify a way in which allosteric regulation may contribute to signalling specificity, and how competitive effects may allow an enzyme to robustly coordinate and time the activation of parallel pathways.
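To make the basic structure concrete, the following is a minimal sketch, in Python with SciPy, of the kind of model the abstract describes: two components X and Y coupled through reversible binding to a shared component C, with production and degradation of every species. It is not the authors' model; the rate constants, initial conditions and mass-action formulation are illustrative assumptions only.

# A minimal sketch (not the authors' exact model): X and Y couple by competing
# for a shared component C; all species are produced and degraded, and binding
# is reversible mass action. All parameter values are illustrative assumptions.
from scipy.integrate import solve_ivp

kpX, kpY, kpC = 1.0, 1.0, 0.5   # production rates (assumed)
kd = 0.1                        # first-order degradation for all species (assumed)
kon, koff = 5.0, 1.0            # binding / unbinding to the shared component (assumed)

def rhs(t, s):
    X, Y, C, XC, YC = s
    bindX = kon * X * C - koff * XC   # net flux into the X:C complex
    bindY = kon * Y * C - koff * YC   # net flux into the Y:C complex
    return [kpX - kd * X - bindX,
            kpY - kd * Y - bindY,
            kpC - kd * C - bindX - bindY,
            bindX - kd * XC,
            bindY - kd * YC]

sol = solve_ivp(rhs, (0, 200), [0.0, 0.0, 0.0, 0.0, 0.0])
X, Y, C, XC, YC = sol.y[:, -1]
print(f"steady state: free C = {C:.3f}, X-bound = {XC:.3f}, Y-bound = {YC:.3f}")

Varying the production rate of one upstream component in such a sketch shows how competition for the shared component transmits a signal to the other pathway, which is the coupling effect the paper analyzes.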

Conclusions

We have developed and analyzed a common systems platform for examining the effects of coupling processes through shared components. This can be the basis for subsequent expansion and for understanding the many biologically observed variations on this common theme.

3.

Background

An adequate and expressive ontological representation of biological organisms and their parts requires formal reasoning mechanisms for their relations of physical aggregation and containment.

Results

We demonstrate that the proposed formalism allows one to deal consistently with "role propagation along non-taxonomic hierarchies", a problem that has repeatedly been identified as an intricate reasoning problem in biomedical ontologies.

Conclusion

The proposed approach seems suitable for the redesign of compositional hierarchies in (bio)medical terminology systems that are embedded in the framework of the OBO (Open Biological Ontologies) Relation Ontology and use knowledge representation languages developed by the Semantic Web community.

4.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages for each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry-metabolomics data based on the KNIME Analytics platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
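For readers unfamiliar with these steps, the following is a minimal Python/pandas sketch of three of them (feature filtering, missing-value imputation and normalization) applied to a samples-by-features intensity table. It is not the KniMet/KNIME implementation; batch correction and annotation are omitted, and the thresholds, imputation rule and column layout are assumptions for illustration.

# Illustrative sketch (not KniMet itself) of typical feature-table processing
# steps on a samples x features intensity matrix; thresholds are assumptions.
import numpy as np
import pandas as pd

def process_features(df: pd.DataFrame, max_missing_frac: float = 0.5) -> pd.DataFrame:
    # 1. Feature filtering: drop features missing in too many samples.
    keep = df.isna().mean(axis=0) <= max_missing_frac
    df = df.loc[:, keep]
    # 2. Missing-value imputation: replace remaining NaNs with half the
    #    feature's minimum observed intensity (a common simple choice).
    df = df.apply(lambda col: col.fillna(col.min() / 2.0), axis=0)
    # 3. Normalization: divide each sample by its total intensity, then log-transform.
    df = df.div(df.sum(axis=1), axis=0)
    return np.log2(df + 1e-9)

# toy example: 3 samples x 4 features
toy = pd.DataFrame(
    [[100.0, np.nan, 50.0, np.nan],
     [120.0, 30.0, np.nan, np.nan],
     [ 90.0, 25.0, 60.0, 5.0]],
    index=["s1", "s2", "s3"], columns=["f1", "f2", "f3", "f4"])
print(process_features(toy))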

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

5.

Background

Availability of allograft tympano-ossicular systems (ATOS) provides unique reconstructive capabilities, allowing more radical removal of middle ear pathology. To provide ATOS, the University of Antwerp Temporal Bone Bank (UATB) was established in 1988. ATOS use was stopped in many countries because of safety issues concerning human tissue transplantation. Our objective was to maintain an ATOS tissue bank complying with European Union (EU) directives on human tissues and cells.

Methods

The guidelines of the Belgian Superior Health Council, including EU directive requirements, were rigorously applied to UATB infrastructure, workflow protocols and activity. Workflow protocols were updated and an internal audit was performed to check and improve consistency with established quality systems and changing legislation. The Belgian Federal Agency of Medicines and Health Products performed an inspection to examine compliance with national legislation and EU directives on human tissues and cells. A sample of important procedures was examined in detail in its workflow setting, alongside an assessment of the infrastructure and personnel.

Results

Results are reported on infrastructure, personnel, administrative workflow, procurement, preparation, processing, distribution, internal audit and inspection by the competent authority. Donors procured: 2006, 93 (45.1%); 2007, 64 (20.6%); 2008, 56 (13.1%); 2009, 79 (6.9%). The UATB was approved by the Minister of Health without critical or important shortcomings. The Ministry grants registration for two years at a time.

Conclusions

An ATOS tissue bank complying with EU regulations on human allografts is feasible and critical to assure that the patient receives tissue that is safe, individually checked and prepared in a suitable environment.

6.


We examine the Tree of Life (TOL) as an evolutionary hypothesis and a heuristic. The original TOL hypothesis has failed but a new "statistical TOL hypothesis" is promising. The TOL heuristic usefully organizes data without positing fundamental evolutionary truth.

Reviewers

This article was reviewed by W. Ford Doolittle, Nicholas Galtier and Christophe Malaterre.

7.

Background

Based upon defining a common reference point, current real-time quantitative PCR technologies compare relative differences in amplification profile position. As such, absolute quantification requires construction of target-specific standard curves that are highly resource intensive and prone to introducing quantitative errors. Sigmoidal modeling using nonlinear regression has previously demonstrated that absolute quantification can be accomplished without standard curves; however, quantitative errors caused by distortions within the plateau phase have impeded effective implementation of this alternative approach.

Results

Recognition that amplification rate is linearly correlated to amplicon quantity led to the derivation of two sigmoid functions that allow target quantification via linear regression analysis. In addition to circumventing quantitative errors produced by plateau distortions, this approach allows the amplification efficiency within individual amplification reactions to be determined. Absolute quantification is accomplished by first converting individual fluorescence readings into target quantity expressed in fluorescence units, followed by conversion into the number of target molecules via optical calibration. Founded upon expressing reaction fluorescence in relation to amplicon DNA mass, a seminal element of this study was to implement optical calibration using lambda gDNA as a universal quantitative standard. Not only does this eliminate the need to prepare target-specific quantitative standards, it relegates establishment of quantitative scale to a single, highly defined entity. The quantitative competency of this approach was assessed by exploiting "limiting dilution assay" for absolute quantification, which provided an independent gold standard from which to verify quantitative accuracy. This yielded substantive corroborating evidence that absolute accuracies of ± 25% can be routinely achieved. Comparison with the LinReg and Miner automated qPCR data processing packages further demonstrated the superior performance of this kinetic-based methodology.
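The kinetic idea stated above, that amplification rate falls off linearly with amplicon quantity, can be sketched in a few lines of Python. The sketch below estimates per-cycle efficiency from raw fluorescence readings and fits a straight line to recover the maximal efficiency and the plateau fluorescence. It is an illustration of the principle only, not the published LRE equations, the optical calibration step or the authors' Java program, and the fitting window and synthetic profile are assumptions.

# Illustrative sketch of the kinetic idea behind LRE (not the published
# algorithm): per-cycle efficiency E_c = F_c / F_{c-1} - 1 is assumed to fall
# linearly with fluorescence F_c, so a line fit yields the maximal efficiency
# (intercept) and the plateau fluorescence (x-intercept).
import numpy as np

def fit_efficiency(fluorescence, window=(0.05, 0.95)):
    f = np.asarray(fluorescence, dtype=float)
    e = f[1:] / f[:-1] - 1.0          # per-cycle efficiency
    fc = f[1:]
    # restrict to cycles clearly above baseline and below the plateau (assumed window)
    lo, hi = window[0] * f.max(), window[1] * f.max()
    mask = (fc > lo) & (fc < hi)
    slope, intercept = np.polyfit(fc[mask], e[mask], 1)
    e_max = intercept                 # efficiency extrapolated to zero amplicon
    f_max = -intercept / slope        # fluorescence at which efficiency reaches zero
    return e_max, f_max

# synthetic logistic-like amplification profile for demonstration only
cycles = np.arange(40)
profile = 10.0 / (1.0 + np.exp(-(cycles - 22) * 0.9)) + 0.01
print(fit_efficiency(profile))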

Conclusion

Called "linear regression of efficiency" or LRE, this novel kinetic approach confers the ability to conduct high-capacity absolute quantification with unprecedented quality control capabilities. The computational simplicity and recursive nature of LRE quantification also makes it amenable to software implementation, as demonstrated by a prototypic Java program that automates data analysis. This in turn introduces the prospect of conducting absolute quantification with little additional effort beyond that required for the preparation of the amplification reactions.  相似文献   

8.

Background

Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts.

Results

In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: it allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible: users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure.

Conclusions

Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system can be accessed either through a cloud-enabled web-interface or downloaded and installed to run within the user's local environment. All resources related to Tavaxy are available at http://www.tavaxy.org.

9.
10.
11.

Background

The Global Programme to Eliminate Lymphatic Filariasis (GPELF) depends upon Mass Drug Administration (MDA) to interrupt transmission. Delimitation of transmission risk areas is therefore an important step, and we attempted to define a geo-environmental risk model (GERM) for determining areas of potential transmission of lymphatic filariasis.

Methods

A range of geo-environmental variables was selected and customized on a GIS platform to develop the GERM for identifying areas of filariasis transmission in terms of "risk" and "non-risk". The model was validated through a 'ground truth study' following standard procedures, using GIS tools for sampling and the immunochromatographic test (ICT) for screening individuals.

Results

A map of filariasis transmission was created and stratified into different spatial entities, "risk" and "non-risk", depending on the Filariasis Transmission Risk Index (FTRI). The model estimates corresponded well with the ground (observed) data.

Conclusion

The geo-environmental risk model developed on a GIS platform is useful for spatial delimitation purposes on a macro scale.

12.

Background

More than 60% of new strokes each year are "mild" in severity and this proportion is expected to rise in the years to come. Within our current health care system, those with "mild" stroke are typically discharged home within days, without further referral to health or rehabilitation services other than advice to see their family physician. Those with mild stroke often have limited access to support from health professionals with stroke-specific knowledge, who would typically provide critical information on topics such as secondary stroke prevention, community reintegration, medication counselling and problem solving with regard to specific concerns that arise. Isolation and lack of knowledge may lead to a worsening of health problems, including stroke recurrence, and to unnecessary and costly health care utilization. The purpose of this study is to assess, for individuals who experience a first "mild" stroke, the effectiveness of a sustainable, low-cost, multimodal support intervention comprising information, education and telephone support ("WE CALL"), compared to a passive intervention providing the name and phone number of a resource person to contact if needed ("YOU CALL"), on two primary outcomes: unplanned use of health services for negative events, and quality of life.

Method/Design

We will recruit 384 adults who meet inclusion criteria for a first mild stroke across six Canadian sites. Baseline measures will be taken within the first month after stroke onset. Participants will be stratified according to comorbidity level and randomised to one of two groups: YOU CALL or WE CALL. Both interventions will be offered over a six-month period. Primary outcomes include unplanned use of health services for negative events (frequency calendar) and quality of life (EQ-5D and Quality of Life Index). Secondary outcomes include participation level (LIFE-H), depression (Beck Depression Inventory II) and use of health services for health promotion or prevention (frequency calendar). Blind assessors will gather data at mid-intervention, end of intervention and one-year follow-up.

Discussion

If effective, this multimodal intervention could be delivered in both urban and rural environments. Existing infrastructure, such as regional stroke centers and secondary stroke prevention clinics, would make the intervention deliverable and sustainable.

Trial Registration

ISRCTN95662526

13.

Background

Genes and gene products are frequently annotated with Gene Ontology concepts based on the evidence provided in genomics articles. Manually locating and curating information about a genomic entity from the biomedical literature requires vast amounts of human effort. Hence, there is clearly a need for automated computational tools to annotate genes and gene products with Gene Ontology concepts by computationally capturing the related knowledge embedded in textual data.

Results

In this article, we present an automated genomic entity annotation system, GEANN, which extracts information about the characteristics of genes and gene products from PubMed article abstracts, and translates the discovered knowledge into Gene Ontology (GO) concepts, a widely used standardized vocabulary of genomic traits. GEANN utilizes textual "extraction patterns" and a semantic matching framework to locate phrases matching a pattern and to produce Gene Ontology annotations for genes and gene products. In our experiments, GEANN reached a precision of 78% at a recall of 61%. On a select set of Gene Ontology concepts, GEANN either outperforms or is comparable to two other automated annotation studies. Use of WordNet for semantic pattern matching improves precision and recall by 24% and 15%, respectively, and the improvement due to semantic pattern matching becomes more apparent as the Gene Ontology terms become more general.

Conclusion

GEANN is useful for two distinct purposes: (i) automating the annotation of genomic entities with Gene Ontology concepts, and (ii) providing existing annotations with additional "evidence articles" from the literature. The use of textual extraction patterns constructed from existing annotations achieves high precision. The semantic pattern matching framework provides a more flexible matching scheme than "exact matching", with the advantage of locating approximate pattern occurrences with similar semantics. The relatively low recall of our pattern-based approach may be enhanced either by employing a probabilistic annotation framework based on annotation neighbourhoods in textual data, or, alternatively, by lowering the statistical enrichment threshold for applications that place more value on achieving higher recall.

14.

Background

Given the complex mechanisms underlying biochemical processes, systems biology researchers tend to build ever larger computational models. However, dealing with complex systems entails a variety of problems, e.g. difficult intuitive understanding, a variety of time scales or non-identifiable parameters. Therefore, methods are needed that, at least semi-automatically, help to elucidate how the complexity of a model can be reduced such that important behavior is maintained and the predictive capacity of the model is increased. The results should be easily accessible and interpretable. In the best case, such methods may also provide insight into fundamental biochemical mechanisms.

Results

We have developed a strategy based on the Computational Singular Perturbation (CSP) method which can be used to perform a "biochemically-driven" model reduction of even large and complex kinetic ODE systems. We provide an implementation of the original CSP algorithm in COPASI (a COmplex PAthway SImulator) and applied the strategy to two example models of different degrees of complexity - a simple one-enzyme system and a full-scale model of yeast glycolysis.
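To illustrate the kind of reduction such a strategy targets on the one-enzyme example, the sketch below compares the full mass-action system E + S <-> ES -> E + P with its quasi-steady-state (Michaelis-Menten) reduction in Python. This demonstrates the timescale-separation idea that CSP formalizes; it is not the CSP algorithm or the COPASI implementation, and all rate constants and concentrations are assumed.

# Full mass-action one-enzyme system versus its quasi-steady-state reduction.
# This illustrates timescale separation, not the CSP algorithm itself; all
# rate constants and concentrations are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 100.0, 10.0, 1.0    # binding, unbinding, catalysis (assumed)
E0, S0 = 1.0, 10.0                # total enzyme and initial substrate (assumed)

def full(t, y):                   # state: substrate, complex, product
    S, ES, P = y
    E = E0 - ES
    v_bind, v_cat = k1 * E * S - km1 * ES, k2 * ES
    return [-v_bind, v_bind - v_cat, v_cat]

def reduced(t, y):                # state: substrate, product
    S, P = y
    Km, Vmax = (km1 + k2) / k1, k2 * E0
    v = Vmax * S / (Km + S)       # Michaelis-Menten rate law
    return [-v, v]

t_eval = np.linspace(0, 20, 50)
full_sol = solve_ivp(full, (0, 20), [S0, 0.0, 0.0], t_eval=t_eval, method="LSODA")
red_sol = solve_ivp(reduced, (0, 20), [S0, 0.0], t_eval=t_eval)
print("max |P_full - P_reduced| =", np.abs(full_sol.y[2] - red_sol.y[1]).max())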

Conclusion

The results show the usefulness of the method for model simplification purposes as well as for analyzing fundamental biochemical mechanisms. COPASI is freely available at http://www.copasi.org.

15.

Background

Spinal cord compression and associated neurological impairment are rare in patients with scoliosis and neurofibromatosis. Common causes are vertebral subluxation, dislocation, angulation and tumorous lesions around the spinal canal. Only twelve cases of intraspinal rib dislocation have been reported in the literature. The aim of this report is to present a case of rib penetration through a neural foramen at the apex of a scoliotic curve in neurofibromatosis and to introduce a new clinical sign for its detection.

Methods

A 13-year-old girl was evaluated for a progressive left thoracic kyphoscoliotic curve due to type I neurofibromatosis. Clinical examination revealed multiple large thoracic and abdominal "café-au-lait" spots, neurological impairment of the lower limbs and a thoracic gibbus that was painful to pressure at the level of the left eighth rib (Painful Rib Hump). CT scan showed detachment and translocation of the cephalic end of the left eighth rib into the adjacent enlarged neural foramen. MRI examination of the spine showed neither cord abnormality nor neurogenic tumor.

Results

The patient underwent resection of the mobile intraspinal eighth rib head and posterior spinal instrumentation, and had fully recovered neurologically six months postoperatively.

Conclusion

Spine surgeons should be aware of intraspinal rib displacement in scoliotic curves in neurofibromatosis. Painful rib hump is a valuable diagnostic tool for this rare clinical entity.

16.

Background

Transcranial Doppler Ultrasound (TCD) is a sensitive, real time tool for monitoring cerebral blood flow velocity (CBFV). This technique is fast, accurate, reproducible and noninvasive. In the setting of congenital heart surgery, TCD finds application in the evaluation of cerebral blood flow variations during cardiopulmonary bypass (CPB).

Methodology

We performed a search of human studies published in MEDLINE using the keyword "transcranial Doppler" crossed with "pediatric cardiac surgery" AND "cardiopulmonary bypass", OR "deep hypothermic cardiac arrest", OR "neurological monitoring".

Discussion

Current scientific evidence suggests a good correlation between changes in cerebral blood flow and middle cerebral artery (MCA) blood flow velocity. The introduction of Doppler technology has allowed accurate monitoring of cerebral blood flow (CBF) during circulatory arrest and low-flow CPB. TCD has also been utilized to detect cerebral emboli and improper cannulation or cross-clamping of aortic arch vessels. Limitations to the routine use of TCD are the learning curve and experience required of the operators, as well as the need to supplement CBF information with, for example, data on brain tissue oxygen delivery and consumption.

Conclusion

In this light, TCD plays an essential role in multimodal neurological monitoring during CPB (near-infrared spectroscopy, TCD, processed electroencephalography) which, according to recent studies, can help to significantly improve neurological outcome after cardiac surgery in neonates and pediatric patients.

17.

Background

Gene-list annotations are critical for researchers to explore the complex relationships between genes and functionalities. Currently, the annotations of a gene list are usually summarized by a table or a barplot. As such, potentially biologically important complexities, such as one gene belonging to multiple annotation categories, are difficult to extract. We have devised explicit and efficient visualization methods that provide intuitive ways to interrogate the intrinsic connections between biological categories and genes.

Findings

We have constructed a data model and now present two novel methods in a Bioconductor package, "GeneAnswers", to simultaneously visualize genes, concepts (a.k.a. annotation categories), and concept-gene connections (a.k.a. annotations): the "Concept-and-Gene Network" and the "Concept-and-Gene Cross Tabulation". These methods have been tested and validated with microarray-derived gene lists.

Conclusions

These new visualization methods can effectively present annotations using Gene Ontology, Disease Ontology, or any other user-defined gene annotations that have been pre-associated with an organism's genome by human curation, automated pipelines, or a combination of the two. The gene-annotation data model and associated methods are available in the Bioconductor package called "GeneAnswers" described in this publication.

18.

Introduction

Untargeted metabolomics workflows include numerous points where variance and systematic errors can be introduced. Due to the diversity of the lipidome, manual peak picking and quantitation using molecule-specific internal standards are unrealistic, and therefore quality peak-picking algorithms and further feature processing and normalization algorithms are important. Subsequent normalization, data filtering, statistical analysis, and biological interpretation are simplified when quality data acquisition and feature processing are employed.

Objectives

Metrics for QC are important throughout the workflow. The robust workflow presented here provides techniques to ensure that QC checks are implemented throughout sample preparation, data acquisition, pre-processing, and analysis.

Methods

The untargeted lipidomics workflow includes sample standardization prior to acquisition, blocks of QC standards and blanks run at systematic intervals between randomized blocks of experimental data, blank feature filtering (BFF) to remove features not originating from the sample, and QC analysis of data acquisition and processing.
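As an illustration of the blank feature filtering (BFF) idea mentioned above, the following Python/pandas sketch keeps only features whose mean intensity in study samples exceeds the mean intensity in blank injections by a fold-change threshold. The threshold of 3 and the table layout are assumptions made for illustration and are not the published BFF parameters.

# Illustrative sketch of blank feature filtering (BFF): keep only features whose
# mean intensity in study samples exceeds the mean intensity in blank injections
# by a fold-change threshold. The threshold of 3 and the data layout are
# assumptions for illustration, not the published BFF parameters.
import pandas as pd

def blank_feature_filter(samples: pd.DataFrame, blanks: pd.DataFrame,
                         fold: float = 3.0) -> pd.DataFrame:
    """samples/blanks: rows = injections, columns = features (peak areas)."""
    sample_mean = samples.mean(axis=0)
    blank_mean = blanks.mean(axis=0).replace(0, 1e-9)  # avoid division by zero
    keep = sample_mean / blank_mean >= fold
    return samples.loc[:, keep]

samples = pd.DataFrame({"f1": [900, 1100], "f2": [40, 55], "f3": [300, 280]})
blanks  = pd.DataFrame({"f1": [30, 25],   "f2": [35, 30], "f3": [0, 5]})
print(blank_feature_filter(samples, blanks).columns.tolist())  # f1 and f3 survive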

Results

The workflow was successfully applied to mouse liver samples, which were investigated to discern lipidomic changes throughout the development of nonalcoholic fatty liver disease (NAFLD). The workflow, including a novel filtering method, BFF, allows improved confidence in results and conclusions for lipidomic applications.

Conclusion

Using a mouse model developed for the study of the transition of NAFLD from an early stage known as simple steatosis, to the later stage, nonalcoholic steatohepatitis, in combination with our novel workflow, we have identified phosphatidylcholines, phosphatidylethanolamines, and triacylglycerols that may contribute to disease onset and/or progression.

19.

Background

During the cardiac cycle, the heart normally produces repeatable physiological sounds. However, under pathologic conditions, such as with heart valve stenosis or a ventricular septal defect, blood flow turbulence leads to the production of additional sounds, called murmurs. Murmurs are random in nature, while the underlying heart sounds are not (being deterministic).

Innovation

We show that a new analytical technique, which we call Digital Subtraction Phonocardiography (DSP), can be used to separate the random murmur component of the phonocardiogram from the underlying deterministic heart sounds.

Methods

We digitally recorded the phonocardiogram from the anterior chest wall in 60 infants and adults using a high-speed USB interface and the program GoldWave (http://www.goldwave.com). The recordings included individuals with cardiac structural disease as well as normal individuals and individuals with innocent heart murmurs. Digital subtraction analysis of the signal was performed using a custom computer program called Murmurgram. In essence, this program subtracts the recorded sound of one cardiac cycle from that of an adjacent cycle to produce a difference signal, herein called a "murmurgram". Other software used included Spectrogram (Version 16), GoldWave (Version 5.55) and custom MATLAB code.
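The subtraction step itself is simple to sketch. The Python code below is a hedged illustration rather than the Murmurgram program: it subtracts one cardiac cycle from an adjacent one so that the repeatable heart sounds cancel while the random murmur component remains. Cycle segmentation is assumed to have been done elsewhere, and the synthetic signal, sampling rate and cycle length are invented for the demonstration.

# Illustrative sketch of the digital-subtraction step: given the sample indices
# where two adjacent cardiac cycles begin (segmentation assumed to be done
# elsewhere), subtract one cycle from the next so the repeatable heart sounds
# cancel and the random murmur component remains. Not the Murmurgram program.
import numpy as np

def murmur_difference(signal: np.ndarray, start_a: int, start_b: int,
                      cycle_len: int) -> np.ndarray:
    a = signal[start_a:start_a + cycle_len].astype(float)
    b = signal[start_b:start_b + cycle_len].astype(float)
    return b - a   # deterministic components cancel; random murmur energy remains

# synthetic demo: a repeating "heart sound" plus random noise standing in for a murmur
rng = np.random.default_rng(0)
fs, cycle_len = 4000, 3200                             # 0.8 s cycles at 4 kHz (assumed)
t = np.arange(cycle_len) / fs
beat = np.sin(2 * np.pi * 40 * t) * np.exp(-t * 20)    # crude S1-like transient
two_cycles = np.concatenate([beat, beat]) + 0.1 * rng.standard_normal(2 * cycle_len)
diff = murmur_difference(two_cycles, 0, cycle_len, cycle_len)
print("residual RMS (murmur estimate):", np.sqrt(np.mean(diff ** 2)))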

Results

Our preliminary data are presented as a series of eight cases. These cases show how advanced signal processing techniques can be used to separate heart sounds from murmurs. Note that these results are preliminary in that normal ranges for the obtained test results have not yet been established.

Conclusions

Cardiac murmurs can be separated from underlying deterministic heart sounds using DSP. DSP has the potential to become a reliable and economical new diagnostic approach to screening for structural heart disease. However, DSP must be further evaluated in a large series of patients with well-characterized pathology to determine its clinical potential.

20.

Background

Plasma levels of tumor necrosis factor (TNF)-α and of C-reactive protein (CRP) are elevated in smokers. Previous studies failed to show an association between the G-308A polymorphism in the promoter region of the TNF-α gene and coronary artery disease (CAD). We investigated whether smoking would interact with the TNF-α G-308A polymorphism in determining plasma levels of TNF-α and CRP.

Methods

Study participants with a complete data set in terms of smoking and the TNF-α G-308A polymorphism were 300 middle-aged male and female industrial employees. After excluding 24 irregular smokers, analyses were performed on 198 "non-smokers" (life-long non-smokers or subjects who had quit smoking >6 months previously) as compared to 78 "regular smokers" (subjects currently smoking >3 cigarettes/day). All subjects had a fasting morning blood draw to measure plasma levels of TNF-α and CRP by high-sensitivity enzyme-linked immunosorbent assays.

Results

The cardiovascular risk factor-adjusted analysis regressing log-transformed CRP levels against smoking status, genotype, and the smoking-status-by-genotype interaction revealed a significant main effect of smoking status (F(1,250) = 5.67, p = .018) but not of genotype (F(1,250) = 0.33, p = .57). The interaction term between genotype and smoking status was not significant (F(1,250) = 0.09, p = .76). The fully adjusted model with plasma TNF-α as the outcome failed to show significant main effects of smoking and genotype, as well as of the smoking-status-by-genotype interaction.
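For readers who want to see the shape of such an analysis, the following is a hedged Python/statsmodels sketch of an ordinary least-squares model regressing log-transformed CRP on smoking status, genotype and their interaction. The synthetic data, variable names and the omission of the cardiovascular risk-factor covariates are all assumptions; the sketch reproduces the form of the model, not the study's data or results.

# Illustrative sketch of the reported analysis: regress log-transformed CRP on
# smoking status, TNF-alpha G-308A genotype, and their interaction. Column
# names and the synthetic data are assumptions; the real analysis also adjusted
# for cardiovascular risk factors not shown here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 276
df = pd.DataFrame({
    "smoker": rng.integers(0, 2, n),    # 0 = non-smoker, 1 = regular smoker
    "carrier": rng.integers(0, 2, n),   # 0 = GG genotype, 1 = A-allele carrier
    "crp": rng.lognormal(mean=0.0, sigma=0.8, size=n),
})
df["log_crp"] = np.log(df["crp"])

model = smf.ols("log_crp ~ smoker + carrier + smoker:carrier", data=df).fit()
print(model.f_test("smoker = 0"))              # main effect of smoking
print(model.f_test("smoker:carrier = 0"))      # smoking-by-genotype interaction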

Conclusions

The findings suggest that the TNF-α G-308A polymorphism does not mediate the effect of smoking on plasma CRP levels. It remains to be seen whether other genetic polymorphisms along the inflammatory pathway may modulate vascular risk in smokers.

