Similar Articles
 20 similar articles found (search time: 31 ms)
1.
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focused on the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large-scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice have demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets.
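The reasoning loop of such a knowledge-based agent can be illustrated with a minimal forward-chaining rule engine. This is a sketch only; the facts, rules, and alert labels below are invented for illustration and are not drawn from the chapter.

```python
# Minimal forward-chaining reasoner: repeatedly apply if-then rules
# (premises -> conclusion) to a fact set until no new fact is derived.

def forward_chain(facts, rules):
    """Derive the closure of `facts` under `rules`.
    Each rule is a pair (set_of_premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy knowledge base (hypothetical rules, not clinical guidance).
rules = [
    ({"variant:CYP2C19*2"}, "phenotype:poor_metabolizer"),
    ({"phenotype:poor_metabolizer", "rx:clopidogrel"}, "alert:reduced_efficacy"),
]
derived = forward_chain({"variant:CYP2C19*2", "rx:clopidogrel"}, rules)
```

Chaining the two rules reproduces, in miniature, the kind of expert reasoning (genotype to phenotype to clinical alert) that such agents aim to make reproducible, scalable, and accessible.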
In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems.

What to Learn in This Chapter

  • Understand basic knowledge types and structures that can be applied to biomedical and translational science;
  • Gain familiarity with the knowledge engineering cycle, tools and methods that may be used throughout that cycle, and the resulting classes of knowledge products generated via such processes;
  • Understand the basic methods and techniques that can be used to employ knowledge products in order to integrate and reason upon heterogeneous and multi-dimensional data sets; and
  • Become conversant in the open research questions/areas related to the ability to develop and apply knowledge collections in the translational bioinformatics domain.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

2.

Background  

Randomized, prospective trials involving multi-institutional collaboration have become a central part of clinical and translational research. However, data management and coordination of multi-center studies is a complex process that involves developing systems for data collection and quality control, tracking data queries and resolutions, as well as developing communication procedures. We describe DADOS-Prospective, an open-source Web-based application for collecting and managing prospective data on human subjects for clinical and translational trials. DADOS-Prospective not only permits users to create new clinical research forms (CRFs) and supports electronic signatures, but also offers the advantage of containing, in a single environment, raw research data in downloadable spreadsheet format, source documentation and regulatory files stored in PDF format, and audit trails.

3.

Background  

Electrochemotherapy is an effective approach to local tumour treatment that applies high-voltage electric pulses locally in combination with chemotherapeutic drugs. Planning and performing electrochemotherapy requires multidisciplinary expertise and depends on collaboration and the exchange of knowledge and experience among experts from scientific fields such as medicine, biology and biomedical engineering. The objective of this study was to develop an e-learning application that provides educational content on electrochemotherapy and its underlying principles, and that supports collaboration and the exchange of knowledge and experience among the experts involved in research and the clinic.

4.

Background  

One of the greatest challenges facing biomedical research is the integration and sharing of vast amounts of information, not only for individual researchers, but also for the community at large. Agent Based Modeling (ABM) can provide a means of addressing this challenge via a unifying translational architecture for dynamic knowledge representation. This paper presents a series of linked ABMs representing multiple levels of biological organization. They are intended to translate the knowledge derived from in vitro models of acute inflammation to clinically relevant phenomena such as multiple organ failure.

5.
MOTIVATION: Primary immunodeficiency diseases (PIDs) are Mendelian conditions of high phenotypic complexity and low incidence. They usually manifest in toddlers and infants, although they can also occur much later in life. Information about PIDs is often widely scattered throughout the clinical as well as the research literature and hard to find for generalists and experienced clinicians alike. Semantic Web technologies coupled to clinical information systems can go some way toward addressing this problem. Ontologies are a central component of such a system, containing and centralizing knowledge about primary immunodeficiencies in both a human- and computer-comprehensible form. The development of an ontology of PIDs is therefore a central step toward developing informatics tools that can support the clinician in the diagnosis and treatment of these diseases. RESULTS: We present PIDO, the primary immunodeficiency disease ontology. PIDO characterizes PIDs in terms of the phenotypes commonly observed by clinicians during the diagnostic process. Phenotype terms in PIDO are formally defined using complex definitions based on qualities, functions, processes and structures. We provide mappings to biomedical reference ontologies to ensure interoperability with ontologies in other domains. Based on PIDO, we developed PIDFinder, an ontology-driven software prototype that can facilitate clinical decision support. PIDO connects immunological knowledge across resources within a common framework and thereby enables translational research and the development of medical applications for the domain of immunology and primary immunodeficiency diseases.
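The core normalization step behind an ontology-driven tool of this kind can be sketched as a synonym-dictionary lookup from free-text phenotype mentions to ontology identifiers. The terms and identifiers below are invented placeholders, not actual PIDO content:

```python
# Sketch of ontology-backed phenotype normalization: map free-text
# phenotype mentions to ontology concept identifiers via a synonym
# dictionary. Multiple surface forms map to one concept.

ONTOLOGY = {
    "recurrent bacterial infections": "PID:0000001",   # placeholder ID
    "hypogammaglobulinemia": "PID:0000002",            # placeholder ID
    "low igg": "PID:0000002",  # synonym of the same concept
}

def normalize_phenotype(mention):
    """Return the ontology ID for a mention, or None if unmapped."""
    return ONTOLOGY.get(mention.strip().lower())
```

A real system would add fuzzy matching and exploit the formal definitions and cross-ontology mappings described above; the exact dictionary lookup is only the first step.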

6.
The promise of science lies in expectations of its benefits to societies and is matched by expectations of the realisation of the significant public investment in that science. In this paper, we undertake a methodological analysis of the science of biobanking and a sociological analysis of translational research in relation to biobanking. Part of global and local endeavours to translate raw biomedical evidence into practice, biobanks aim to provide a platform for generating new scientific knowledge to inform development of new policies, systems and interventions to enhance the public’s health. Effectively translating scientific knowledge into routine practice, however, involves more than good science. Although biobanks undoubtedly provide a fundamental resource for both clinical and public health practice, their potentiating ontology—that their outputs are perpetually a promise of scientific knowledge generation—renders translation rather less straightforward than drug discovery and treatment implementation. Biobanking science, therefore, provides a perfect counterpoint against which to test the bounds of translational research. We argue that translational research is a contextual and cumulative process: one that is necessarily dynamic and interactive and involves multiple actors. We propose a new multidimensional model of translational research which enables us to imagine a new paradigm: one that takes us from bench to bedside to backyard and beyond, that is, attentive to the social and political context of translational science, and is cognisant of all the players in that process be they researchers, health professionals, policy makers, industry representatives, members of the public or research participants, amongst others.

7.
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall into both the category of T1 translational research, translating basic science results into new interventions, and that of T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
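The rule-based (dictionary) end of the spectrum described above can be sketched for gene mention recognition and normalization. The two-entry lexicon is a toy, and the whitespace tokenizer is deliberately naive; a real system must handle the ambiguity and variability of gene names:

```python
# Sketch of two text mining sub-tasks: gene mention recognition (find
# gene names in text) and gene normalization (map each mention to a
# database identifier), using an exact, case-insensitive lexicon.

GENE_LEXICON = {
    "brca1": "HGNC:1100",   # HGNC-style IDs; the lexicon itself is a toy
    "tp53": "HGNC:11998",
}

def recognize_and_normalize(text):
    """Return (mention, identifier) pairs for known gene names in text."""
    hits = []
    for token in text.replace(",", " ").split():
        gene_id = GENE_LEXICON.get(token.lower())
        if gene_id:
            hits.append((token, gene_id))
    return hits
```

This exact-match approach fails on the ambiguity the chapter emphasizes (e.g. gene symbols that are also common English words), which is why statistical and hybrid systems dominate in practice.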

What to Learn in This Chapter

Text mining is an established field, but its application to translational bioinformatics is quite new and it presents myriad research opportunities. It is made difficult by the fact that natural (human) language, unlike computer language, is characterized at all levels by rampant ambiguity and variability. Important sub-tasks include gene name recognition, or finding mentions of gene names in text; gene normalization, or mapping mentions of genes in text to standard database identifiers; phenotype recognition, or finding mentions of phenotypes in text; and phenotype normalization, or mapping mentions of phenotypes to concepts in ontologies. Text mining for translational bioinformatics can necessitate dealing with two widely varying genres of text—published journal articles, and prose fields in electronic medical records. Research into the latter has been impeded for years by lack of public availability of data sets, but this has very recently changed and the field is poised for rapid advances. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

8.
Biobanks are actively contributing to advances in biomedical research by offering opportunities to link laboratory research with clinical applications and by accelerating developments in personalized medicine. Microbiologists have a long tradition of storing microorganisms as part of projects focused on microbial genetics or phenotypic investigations. However, the impressive recent advances of biomedical translational research demand the integration of biobanks with high-level technological infrastructures in genomics, proteomics, bioinformatics, patient information systems and disease registries, where data originating from microorganisms are linked with human clinical information with the ultimate aim of improving healthcare by increasing the quality of biomedical research.

9.

Background  

The increasing number of gene expression microarray studies represents an important resource in biomedical research. As a result, gene expression based diagnosis has entered clinical practice for patient stratification in breast cancer. However, the integration and combined analysis of microarray studies still remains a challenge. We assessed the potential benefit of data integration on the classification accuracy and systematically evaluated the generalization performance of selected methods on four breast cancer studies comprising almost 1000 independent samples. To this end, we introduced an evaluation framework which aims to establish good statistical practice and a graphical way to monitor differences. The classification goal was to correctly predict estrogen receptor status (negative/positive) and histological grade (low/high) of each tumor sample in an independent study which was not used for the training. For the classification we chose support vector machines (SVM), predictive analysis of microarrays (PAM), random forest (RF) and k-top scoring pairs (kTSP). Guided by considerations relevant for classification across studies we developed a generalization of kTSP which we evaluated in addition. Our derived version (DV) aims to improve the robustness of the intrinsic invariance of kTSP with respect to technologies and preprocessing.
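The kTSP decision rule evaluated in the study can be sketched as follows. Pair selection on training data is omitted, and the gene pairs shown are placeholders, not pairs reported by the study:

```python
# k-top-scoring-pairs (kTSP) decision rule: each selected gene pair votes
# for a class according to which of its two genes is more highly expressed
# in the sample; the majority of votes decides. Because only the relative
# order of two genes *within* a sample matters, the rule is invariant to
# any monotone transformation of the expression values.

def ktsp_predict(sample, pairs):
    """sample: dict gene -> expression value; pairs: list of (gene_a, gene_b).
    A pair votes for class 1 when expr[gene_a] < expr[gene_b]."""
    votes = sum(1 for a, b in pairs if sample[a] < sample[b])
    return 1 if votes > len(pairs) / 2 else 0

# Placeholder pairs for a binary call such as ER status.
pairs = [("ESR1", "XBP1"), ("GATA3", "KRT5"), ("FOXA1", "EGFR")]
```

This rank-based invariance is precisely the property that makes kTSP attractive for classification across studies, and that the derived version (DV) aims to make more robust with respect to technologies and preprocessing.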

10.
11.

Background  

The increasing amount of work on the full text of journal articles and the growth of PubMed Central together have the potential to create a major paradigm shift in how biomedical text mining is done. However, until now there has been no comprehensive characterization of how the bodies of full-text journal articles differ from the abstracts that have so far been the subject of most biomedical text mining research.

12.

Background  

Agile is an iterative approach to software development that relies on strong collaboration and automation to keep pace with dynamic environments. We have successfully used agile development approaches to create and maintain biomedical software, including software for bioinformatics. This paper reports on a qualitative study of our experiences using these methods.

13.

Background

A better theoretical base for understanding professional behaviour change is needed to support evidence-based changes in medical practice. Traditionally, strategies to encourage changes in clinical practice have been guided empirically, without explicit consideration of the underlying theoretical rationales for such strategies. This paper considers a theoretical framework from within psychology for identifying individual differences in cognitive processing between doctors that could moderate their decisions to incorporate new evidence into clinical decision-making.

Discussion

Parallel dual processing models of reasoning posit two cognitive modes of information processing that are in constant operation as humans reason. One mode has been described as experiential, fast and heuristic; the other as rational, conscious and rule based. Within such models, the uptake of new research evidence can be represented by the latter mode; it is reflective, explicit and intentional. On the other hand, well-practised clinical judgments can be positioned in the experiential mode, being automatic, reflexive and swift. Research suggests that individual differences between people in both cognitive capacity (e.g., intelligence) and cognitive processing (e.g., thinking styles) influence how both reasoning modes interact. This being so, it is proposed that these same differences between doctors may moderate the uptake of new research evidence. Such dispositional characteristics have largely been ignored in research investigating effective strategies for implementing research evidence. Whilst medical decision-making occurs in a complex social environment with multiple influences and decision makers, an individual doctor's judgment still retains a key position in terms of diagnostic and treatment decisions for individual patients. This paper argues, therefore, that individual differences between doctors in terms of reasoning are important considerations in any discussion relating to changing clinical practice.

Summary

It is imperative that change strategies in healthcare consider relevant theoretical frameworks from other disciplines such as psychology. Generic dual processing models of reasoning are proposed as potentially useful in identifying factors within doctors that may moderate their individual uptake of evidence into clinical decision-making. Such factors can then inform strategies to change practice.

14.

Background  

Lately, there has been great interest in the application of information extraction methods to the biomedical domain, in particular to the extraction of relationships among genes, proteins, and RNA from scientific publications. The development and evaluation of such methods requires annotated domain corpora.

15.
In the post-genomic era, the rapid evolution of high-throughput genotyping technologies and the increased pace of production of genetic research data are continually prompting the development of appropriate informatics tools, systems and databases as we attempt to cope with the flood of incoming genetic information. Alongside new technologies that serve to enhance data connectivity, emerging information systems should contribute to the creation of a powerful knowledge environment for genotype-to-phenotype information in the context of translational medicine. In the area of pharmacogenomics and personalized medicine, it has become evident that database applications providing important information on the occurrence and consequences of gene variants involved in pharmacokinetics, pharmacodynamics, drug efficacy and drug toxicity will become an integral tool for researchers and medical practitioners alike. At the same time, two fundamental issues are inextricably linked to current developments, namely data sharing and data protection. Here, we discuss high-throughput and next-generation sequencing technology and its impact on pharmacogenomics research. In addition, we present advances and challenges in the field of pharmacogenomics information systems which have in turn triggered the development of an integrated electronic ‘pharmacogenomics assistant’. The system is designed to provide personalized drug recommendations based on linked genotype-to-phenotype pharmacogenomics data, as well as to support biomedical researchers in the identification of pharmacogenomics-related gene variants. These services are delivered through a single-access pharmacogenomics portal.
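The genotype-to-phenotype lookup at the heart of such a ‘pharmacogenomics assistant’ can be sketched as a table from star-allele diplotypes to predicted phenotypes and dosing notes. The entries below are illustrative placeholders, not clinical guidance:

```python
# Sketch of a genotype-to-phenotype pharmacogenomics lookup: given a
# gene and a patient's star-allele diplotype, return the predicted
# metabolizer phenotype and an associated dosing note.

PGX_TABLE = {
    # (gene, diplotype) -> (predicted phenotype, dosing note)
    ("CYP2D6", "*4/*4"): ("poor metabolizer", "consider alternative drug"),
    ("CYP2D6", "*1/*1"): ("normal metabolizer", "standard dosing"),
}

def recommend(gene, diplotype):
    """Look up a recommendation; fall back to an explicit 'unknown'."""
    return PGX_TABLE.get((gene, diplotype), ("unknown", "no recommendation"))
```

A production system would of course draw these mappings from curated pharmacogenomics knowledge bases and handle phased haplotype calls; the point here is only the shape of the linked genotype-to-phenotype data the abstract describes.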

16.

Background  

Gene named entity classification and recognition are crucial preliminary steps of text mining in biomedical literature. Machine-learning-based methods have been used in this area with great success. In most state-of-the-art systems, elaborately designed lexical features, such as words, n-grams, and morphology patterns, have played a central part. However, this type of feature tends to cause extreme sparseness in feature space. As a result, out-of-vocabulary (OOV) terms absent from the training data are not modeled well due to lack of information.
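One common remedy for this OOV sparseness, sketched below, is to back off from whole-word features to character n-grams, so that an unseen term still shares features (affixes, digit patterns) with similar terms observed in training. The gene names used are illustrative:

```python
# Character n-gram features as a back-off for OOV terms: an unseen gene
# name still overlaps in feature space with similar names seen in
# training, unlike a whole-word feature, which matches nothing.

def char_ngrams(token, n=3):
    """Return the set of padded character n-grams for a token."""
    padded = f"^{token.lower()}$"   # boundary markers capture affixes
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

# An unseen name like 'BRCA2' shares most of its n-grams with 'BRCA1'.
overlap = char_ngrams("BRCA2") & char_ngrams("BRCA1")
```

Here `BRCA2` and `BRCA1` share the n-grams `^br`, `brc` and `rca`, so a model trained on one is not entirely uninformed about the other.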

17.
18.

Background  

Bioinformatics often leverages recent advancements in computer science to support biologists in their scientific discovery process. Such efforts include the development of easy-to-use web interfaces to biomedical databases. Recent advancements in interactive web technologies require us to rethink the standard submit-and-wait paradigm and craft bioinformatics web applications that share analytical and interactive power with their desktop relatives, while retaining simplicity and availability.

19.

Background

Next generation sequencing (NGS) methods have significantly contributed to a paradigm shift in genomic research for nearly a decade now. These methods have been useful in studying the dynamic interactions between RNA viruses and human hosts.

Scope of the review

In this review, we summarise and discuss key applications of NGS in studying the host–pathogen interactions in RNA viral infections of humans, with examples.

Major conclusions

Use of NGS to study globally relevant RNA viral infections has revolutionized our understanding of the within-host and between-host evolution of these viruses. These methods have also been useful in clinical decision-making and in guiding biomedical research on vaccine design.

General significance

NGS has been instrumental in viral genomic studies, resolving within-host viral genomic variants and the distribution of nucleotide polymorphisms along the full length of viral genomes in a high-throughput, cost-effective manner. In the future, novel advances such as long-read, single-molecule sequencing of viral genomes and simultaneous sequencing of host and pathogen may become the standard of practice in research and clinical settings. This will also bring new challenges in big data analysis.

20.

Background  

While biomedical text mining is emerging as an important research area, practical results have proven difficult to achieve. We believe that an important first step towards more accurate text mining lies in the ability to identify and characterize text that satisfies various types of information needs. We report here the results of our inquiry into properties of scientific text that have sufficient generality to transcend the confines of a narrow subject area, while supporting practical mining of text for factual information. Our ultimate goal is to annotate a significant corpus of biomedical text and train machine learning methods to automatically categorize such text along certain dimensions that we have defined.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号