Similar literature
20 similar documents retrieved.
1.
Biomedical research relies increasingly on large collections of data sets and knowledge whose generation, representation and analysis often require large collaborative and interdisciplinary efforts. This dimension of 'big data' research calls for the development of computational tools to manage such a vast amount of data, as well as tools that can improve communication and access to information from collaborating researchers and from the wider community. Whenever research projects have a defined temporal scope, an additional issue of data management arises, namely how the knowledge generated within the project can be made available beyond its boundaries and lifetime. DC-THERA is a European 'Network of Excellence' (NoE) that spawned a very large collaborative and interdisciplinary research community, focusing on the development of novel immunotherapies derived from fundamental research in dendritic cell immunobiology. In this article we introduce the DC-THERA Directory, an information system designed to support knowledge management for this research community and beyond. We present how the use of metadata and Semantic Web technologies can effectively help to organize the knowledge generated by modern collaborative research, how these technologies can enable effective data management solutions during and beyond the project lifecycle, and how resources such as the DC-THERA Directory fit into the larger context of e-science.
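The Semantic Web approach described here can be sketched in a few lines: resources are described as RDF triples and retrieved with a SPARQL query. The vocabulary, resource names and properties below are purely illustrative assumptions, not the DC-THERA Directory's actual schema; only the rdflib calls themselves are standard.

```python
# Minimal sketch: describe research metadata as RDF triples and query it with SPARQL.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/directory/")   # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Describe a data set and the group that produced it as RDF triples.
g.add((EX.dataset42, RDF.type, EX.Dataset))
g.add((EX.dataset42, EX.topic, Literal("dendritic cell immunobiology")))
g.add((EX.dataset42, EX.producedBy, EX.labA))
g.add((EX.labA, RDF.type, EX.ResearchGroup))
g.add((EX.labA, EX.name, Literal("Example Immunology Lab")))

# SPARQL query: find all data sets about dendritic cells and who produced them.
query = """
PREFIX ex: <http://example.org/directory/>
SELECT ?dataset ?lab
WHERE {
    ?dataset a ex:Dataset ;
             ex:topic ?topic ;
             ex:producedBy ?lab .
    FILTER(CONTAINS(LCASE(?topic), "dendritic"))
}
"""
for row in g.query(query):
    print(f"{row.dataset} produced by {row.lab}")
```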

2.
An increasing number of software tools support designers and other decision makers in making design, production, and purchasing decisions. Some of these tools provide quantitative information on environmental impacts such as climate change, human toxicity, or resource use during the life cycle of these products. Very little is known, however, about how these tools are actually used, what kind of modeling and presentation approaches users really want, or whether the information provided is likely to be used the way the developers intended. A survey of users of one such software tool revealed that although users want more transparency, about half also want an easy-to-use tool and would accept built-in assumptions; that most users prefer modeling of environmental impacts beyond the stressor level, and the largest group of respondents wants results simultaneously on the stressor, impact potential, and damage level; and that although many users look for aggregated information on impacts and costs, a majority do not trust that such an aggregation is valid or believe that there are tradeoffs among impacts. Further, our results show that the temporal and spatial scales of single impact categories explain only about 6% of the variation in the weights between impact categories set by respondents if the weights are set first. If the weights are set after respondents specify temporal and spatial scales, however, these scales explain about 24% of the variation. These results not only help method and tool developers to reconsider some previous assumptions, but also suggest a number of research questions that may need to be addressed in a more focused investigation.
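The 6% versus 24% figures above are shares of variance explained (R²) in the respondents' weights. A minimal sketch of how such a figure can be computed with ordinary least squares, on synthetic data rather than the actual survey responses:

```python
# Illustrative only: R^2 of category weights explained by temporal/spatial scale ratings.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # simulated respondents
scales = rng.uniform(0, 1, size=(n, 2))   # temporal and spatial scale ratings
noise = rng.normal(0, 1, size=n)
weights = 0.3 * scales[:, 0] + 0.2 * scales[:, 1] + noise  # simulated category weights

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones(n), scales])
beta, *_ = np.linalg.lstsq(X, weights, rcond=None)
pred = X @ beta

ss_res = np.sum((weights - pred) ** 2)
ss_tot = np.sum((weights - weights.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"share of weight variation explained by the scales: {r2:.2f}")
```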

3.
Integrative modeling computes a model based on varied types of input information, be it from experiments or prior models. Often, a type of input information will be best handled by a specific modeling software package. In such a case, we desire to integrate our integrative modeling software package, Integrative Modeling Platform (IMP), with software specialized to the computational demands of the modeling problem at hand. After several attempts, however, we have concluded that even in collaboration with the software’s developers, integration is either impractical or impossible. The reasons for the intractability of integration include software incompatibilities, differing modeling logic, the costs of collaboration, and academic incentives. In the integrative modeling software ecosystem, several large modeling packages exist with often redundant tools. We reason, therefore, that the other development groups have similarly concluded that the benefit of integration does not justify the cost. As a result, modelers are often restricted to the set of tools within a single software package. The inability to integrate tools from distinct software negatively impacts the quality of the models and the efficiency of the modeling. As the complexity of modeling problems grows, we seek to galvanize developers and modelers to consider the long-term benefit that software interoperability yields. In this article, we formulate a demonstrative set of software standards for implementing a model search using tools from independent software packages and discuss our efforts to integrate IMP and the crystallography suite Phenix within the Bayesian modeling framework.
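As an illustration of what such interoperability standards might look like, here is a hypothetical sketch (not the standards actually proposed in the article): a shared scoring interface lets restraints contributed by independent packages participate in a single model search. All class and function names are invented.

```python
# Sketch of a cross-package scoring protocol: any package's restraint can join one search.
from typing import Protocol, Sequence
import random

class Restraint(Protocol):
    def score(self, coordinates: Sequence[float]) -> float:
        """Return a non-negative penalty; lower means better agreement with the data."""
        ...

class DistanceRestraint:
    """Stand-in for a term contributed by one modeling package."""
    def __init__(self, i: int, j: int, target: float):
        self.i, self.j, self.target = i, j, target
    def score(self, coordinates: Sequence[float]) -> float:
        return (abs(coordinates[self.i] - coordinates[self.j]) - self.target) ** 2

class DensityRestraint:
    """Stand-in for a term contributed by a second, independent package."""
    def __init__(self, center: float):
        self.center = center
    def score(self, coordinates: Sequence[float]) -> float:
        return sum((x - self.center) ** 2 for x in coordinates)

def total_score(restraints: Sequence[Restraint], coordinates: Sequence[float]) -> float:
    # Terms from different packages are combined because they honour the same interface.
    return sum(r.score(coordinates) for r in restraints)

# A deliberately naive random search standing in for the model search itself.
restraints = [DistanceRestraint(0, 1, target=2.0), DensityRestraint(center=0.0)]
best = min(
    ([random.uniform(-3, 3) for _ in range(2)] for _ in range(5000)),
    key=lambda coords: total_score(restraints, coords),
)
print("best 1-D coordinates found:", [round(x, 2) for x in best])
```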

4.
Fortney K, Jurisica I. Human Genetics. 2011;130(4):465-481.
Over the past two decades, high-throughput (HTP) technologies such as microarrays and mass spectrometry have fundamentally changed clinical cancer research. They have revealed novel molecular markers of cancer subtypes, metastasis, and drug sensitivity and resistance. Some have been translated into the clinic as tools for early disease diagnosis, prognosis, and individualized treatment and response monitoring. Despite these successes, many challenges remain: HTP platforms are often noisy and suffer from false positives and false negatives; optimal analysis and successful validation require complex workflows; and great volumes of data are accumulating at a rapid pace. Here we discuss these challenges, and show how integrative computational biology can help diminish them by creating new software tools, analytical methods, and data standards.
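One standard remedy for the false-positive burden of noisy HTP platforms is false discovery rate control. Below is a self-contained sketch of the Benjamini-Hochberg procedure run on simulated p-values, not data from any real platform.

```python
# Benjamini-Hochberg FDR control on a simulated high-throughput screen.
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask marking discoveries at the given false discovery rate."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()        # largest rank still under its threshold
        discoveries[order[: k + 1]] = True
    return discoveries

rng = np.random.default_rng(1)
# 950 null features plus 50 true signals with very small p-values.
pvals = np.concatenate([rng.uniform(size=950), rng.uniform(0, 0.001, size=50)])
hits = benjamini_hochberg(pvals, alpha=0.05)
print(f"features called significant at 5% FDR: {hits.sum()}")
```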

5.
Lymphatic filariasis (LF)-related disability affects 40 million people globally, making LF the leading cause of physical disability in the world. Despite this, there is limited research into how the impacts of LF-related disability are best measured. This article identifies the tools currently being used to measure LF-related disability and reviews their applicability against the known impacts of LF. The findings from the review show that the generic disability tools currently used by LF programs fail to measure the majority of known impacts of LF-related disability. The findings from the review support the development of an LF-specific disability measurement tool and raise doubt about the suitability of generic disability tools to assess disability related to neglected tropical diseases (NTDs) globally.

6.
Neural correlations, population coding and computation
How the brain encodes information in population activity, and how it combines and manipulates that activity as it carries out computations, are questions that lie at the heart of systems neuroscience. During the past decade, with the advent of multi-electrode recording and improved theoretical models, these questions have begun to yield answers. However, a complete understanding of neuronal variability, and, in particular, how it affects population codes, is missing. This is because variability in the brain is typically correlated, and although the exact effects of these correlations are not known, it is known that they can be large. Here, we review studies that address the interaction between neuronal noise and population codes, and discuss their implications for population coding in general.
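A toy calculation, not taken from the review, illustrates why the structure of correlations matters for population codes: the linear discriminability between two stimuli for a pair of similarly tuned neurons changes with the sign of the noise correlation.

```python
# Linear discriminability d'^2 = delta_mu^T Sigma^-1 delta_mu for two neurons,
# computed with and without correlated noise.
import numpy as np

def discriminability(delta_mu, cov):
    """Squared linear discriminability between the two stimulus-conditioned responses."""
    return float(delta_mu @ np.linalg.solve(cov, delta_mu))

delta_mu = np.array([1.0, 1.0])      # both neurons increase their rate for stimulus B
var = 1.0

for rho in (0.0, 0.4, -0.4):
    cov = np.array([[var, rho * var],
                    [rho * var, var]])
    print(f"noise correlation {rho:+.1f}: d'^2 = {discriminability(delta_mu, cov):.2f}")
# For similarly tuned neurons, positive correlations reduce d'^2 and negative
# correlations increase it, which is why the structure of correlations matters.
```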

7.
Functional, usable, and maintainable open-source software is increasingly essential to scientific research, but there is a large variation in formal training for software development and maintainability. Here, we propose 10 “rules” centered on 2 best practice components: clean code and testing. These 2 areas are relatively straightforward and provide substantial utility relative to the learning investment. Adopting clean code practices helps to standardize and organize software code in order to enhance readability and reduce cognitive load for both the initial developer and subsequent contributors; this allows developers to concentrate on core functionality and reduce errors. Clean coding styles make software code more amenable to testing, including unit tests that work best with modular and consistent software code. Unit tests interrogate specific and isolated coding behavior to reduce coding errors and ensure intended functionality, especially as code increases in complexity; unit tests also implicitly provide example usages of code. Other forms of testing are geared to discover erroneous behavior arising from unexpected inputs or emerging from the interaction of complex codebases. Although conforming to coding styles and designing tests can add time to the software development project in the short term, these foundational tools can help to improve the correctness, quality, usability, and maintainability of open-source scientific software code. They also advance the principal point of scientific research: producing accurate results in a reproducible way. In addition to suggesting several tips for getting started with clean code and testing practices, we recommend numerous tools for the popular open-source scientific software languages Python, R, and Julia.
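A minimal sketch of the two practices the rules centre on, with invented function names: a small, single-purpose function and pytest-style unit tests that pin down its intended behaviour, including its failure modes.

```python
# A small, readable function with one responsibility...
def normalize_counts(counts):
    """Scale a list of non-negative counts so they sum to 1.0."""
    if any(c < 0 for c in counts):
        raise ValueError("counts must be non-negative")
    total = sum(counts)
    if total == 0:
        raise ValueError("counts must not all be zero")
    return [c / total for c in counts]


# ...and unit tests (runnable with pytest) that document and protect its behaviour.
def test_normalize_counts_sums_to_one():
    assert normalize_counts([2, 2, 4]) == [0.25, 0.25, 0.5]


def test_normalize_counts_rejects_negative_values():
    import pytest
    with pytest.raises(ValueError):
        normalize_counts([1, -1])
```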

8.
The availability of sequenced genomes for human and many experimental animals has necessitated the development of new technologies and powerful computational tools capable of exploiting these genomic data and asking intriguing questions about the complex nature of biological processes. This gave impetus to whole-genome approaches that produce functional information about genes in the form of expression profiles and unscramble the relationships between variation in gene expression and the resulting physiological outcome. These profiles represent genetic fingerprints, or catalogues of genes, that characterize the cell or tissue being studied and provide a basis from which to begin an investigation of the underlying biology. Among the most powerful and versatile tools are high-density DNA microarrays, which analyze the expression patterns of large numbers of genes across different tissues, within the same tissue under a variety of experimental conditions, or even between species. The widespread use of microarray technologies is generating large data sets that are stimulating the development of better analytical tools so that functions can be predicted for novel genes. In this review, the authors discuss how these profiles are being used at various stages of the drug discovery process to help identify new drug targets, predict the function of novel genes, and understand individual variability in response to drugs.
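A hedged sketch of the most basic analysis behind such expression profiles: a per-gene two-sample t-test between treated and control arrays, run here on simulated log2 intensities rather than real microarray data.

```python
# Per-gene differential expression between treated and control arrays (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_genes, n_control, n_treated = 500, 6, 6

control = rng.normal(8.0, 1.0, size=(n_genes, n_control))      # log2 intensities
treated = rng.normal(8.0, 1.0, size=(n_genes, n_treated))
treated[:20] += 2.0                                             # 20 genes truly induced

# One t-test per gene (row); small p-values flag candidate differentially expressed genes.
t_stat, p_val = stats.ttest_ind(treated, control, axis=1)
candidates = np.argsort(p_val)[:20]
print("top candidate genes (row indices):", candidates)
```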

9.
10.
The use of rigorous ethological observation via machine learning techniques to understand brain function (computational neuroethology) is a rapidly growing approach that is poised to significantly change how behavioral neuroscience is commonly performed. With the development of open-source platforms for automated tracking and behavioral recognition, these approaches are now accessible to a wide array of neuroscientists despite variations in budget and computational experience. Importantly, this adoption has moved the field toward a common understanding of behavior and brain function through the removal of manual bias and the identification of previously unknown behavioral repertoires. Although less apparent, another consequence of this movement is the introduction of analytical tools that increase the explainability, transparency, and universality of the machine-based behavioral classifications both within and between research groups. Here, we focus on three main applications of such machine model explainability tools and metrics in the drive toward behavioral (i) standardization, (ii) specialization, and (iii) explainability. We provide a perspective on the use of explainability tools in computational neuroethology, and detail why this is a necessary next step in the expansion of the field. Specifically, as a possible solution in behavioral neuroscience, we propose the use of Shapley values via Shapley Additive Explanations (SHAP) as a diagnostic resource toward explainability of human annotation, as well as supervised and unsupervised behavioral machine learning analysis.
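A hedged sketch of the SHAP-based diagnostics proposed above, using synthetic data and invented pose features rather than any real tracking output; the exact container returned by the shap package varies between versions, which the code handles explicitly.

```python
# Which pose/movement features drive a supervised behaviour classifier? (toy data)
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
feature_names = ["velocity", "body_angle", "nose_tail_dist", "time_near_wall"]
X = rng.normal(size=(300, len(feature_names)))
# Synthetic labels: the behaviour depends mostly on body_angle and velocity.
y = (0.8 * X[:, 1] + 0.4 * X[:, 0] + rng.normal(0, 0.5, 300) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(clf)
shap_values = np.asarray(explainer.shap_values(X))
# Depending on the shap version this is (classes, samples, features) or
# (samples, features, classes); average |SHAP| over every axis except the feature axis.
feature_axis = list(shap_values.shape).index(len(feature_names))
other_axes = tuple(i for i in range(shap_values.ndim) if i != feature_axis)
importance = np.abs(shap_values).mean(axis=other_axes)

for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name:>15}: mean |SHAP| = {score:.3f}")
```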

11.
Proteomics research infrastructures and core facilities within the Core for Life alliance advocate for community policies for quality control to ensure high standards in proteomics services.

Core facilities and research infrastructures have become an essential part of the scientific ecosystem. In the field of proteomics, national and international networks and research platforms have been established during the past decade that are supposed to set standards for high-quality services, promote an exchange of professional information, and enable access to cutting-edge, specialized proteomics technologies. Either centralized or distributed, these national and international proteomics infrastructures and technology platforms are generating massive amounts of data for the research community, and support a broad range of translational, computational and multi-omics initiatives and basic research projects.

By delegating part of their work to these services, researchers expect the core facility to adjust its analytical protocols appropriately for their project and to acquire data that conform to the best research practice of the scientific community. The implementation of quality assessment measures and commonly accepted quality controls in data generation is therefore crucially important for proteomics research infrastructures and the scientists who rely on them.

However, current quality control and quality assessment procedures in proteomics core facilities and research infrastructures are a motley collection of protocols, standards, reference compounds and software tools. Proteomics relies on a customized multi-step workflow typically consisting of sample preparation, data acquisition and data processing, and the implementation of each step differs among facilities. For example, sample preparation involves enzymatic digestion of the proteins, which can be performed in-solution, in-gel, or on-beads, often with different proteolytic enzymes, chemicals, and conditions among laboratories. Data acquisition protocols are often customized to the particular instrument setup, and the acquired spectra and chromatograms are processed by different software tools provided by equipment vendors or third parties, or developed in-house.
Moreover, core facilities implement their own guidelines to monitor the performance and quality of the entire workflow, typically utilizing different commercially available standards such as pre-digested cell lysates, recombinant proteins, protein mixtures, or isotopically labeled peptides. Currently, there is no clear consensus on whether, when and how to perform quality control checks. There is even less quality control in walk-in facilities, where staff are responsible only for the correct use of the instruments and users select and execute the analytical workflow themselves. It is therefore not surprising that instrument stability and the robustness of the applied analytical approach are often unclear, which compromises analytical rigor.
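One common quality-control check alluded to above can be sketched as a simple control chart: track a reference standard's measured value across runs and flag runs that drift beyond control limits. The values and limits below are simulated illustrations, not recommended thresholds.

```python
# Levey-Jennings-style control chart check for a reference standard across runs.
import numpy as np

rng = np.random.default_rng(6)
baseline = rng.normal(100.0, 3.0, size=20)         # reference peptide area, earlier runs
mean, sd = baseline.mean(), baseline.std(ddof=1)   # control limits from the baseline runs

new_runs = np.array([101.2, 97.8, 112.5, 99.4])    # latest instrument runs
for i, value in enumerate(new_runs, start=1):
    status = "OK" if abs(value - mean) <= 3 * sd else "OUT OF CONTROL"
    print(f"run {i}: {value:6.1f}  ({status})")
```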

12.
Functional analysis of large gene lists, derived in most cases from emerging high-throughput genomic, proteomic and bioinformatics scanning approaches, is still a challenging and daunting task. Gene-annotation enrichment analysis is a promising high-throughput strategy that increases the likelihood that investigators will identify the biological processes most pertinent to their study. This survey collects approximately 68 bioinformatics enrichment tools currently available to the community. The tools are uniquely categorized into three major classes according to their underlying enrichment algorithms. The comprehensive collection, unique tool classification and associated questions/issues provide a more complete and up-to-date view of the advantages, pitfalls and recent trends at the simpler tool-class level rather than tool by tool. Thus, the survey will help tool designers/developers and experienced end users understand the underlying algorithms and pertinent details of particular tool categories/tools, enabling them to make the best choices for their particular research interests.
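Most over-representation-type enrichment tools in this survey's classification are built around the same core computation; a minimal sketch using the hypergeometric distribution, with invented gene counts:

```python
# Over-representation test: is an annotation term enriched in a user's gene list?
from scipy.stats import hypergeom

population = 20000      # annotated genes in the genome
annotated = 150         # genes carrying the annotation term of interest
gene_list = 400         # genes in the user's list
overlap = 12            # list genes carrying the term

# P(X >= overlap) under random sampling without replacement.
p_value = hypergeom.sf(overlap - 1, population, annotated, gene_list)
print(f"enrichment p-value: {p_value:.3g}")
```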

13.
14.
Mass spectrometry-based proteomics has evolved into a high-throughput research field over the past decade. Significant advances in instrumentation, and the ability to produce huge volumes of data, have emphasized the need for adequate data analysis tools, which are nowadays often considered the main bottleneck for proteomics development. This review highlights important issues that directly impact the effectiveness of proteomic quantitation and educates software developers and end users on the available computational solutions for correcting these factors. Potential sources of error specific to stable isotope-based methods or label-free approaches are explicitly outlined. The discussion is organized around a generic proteomic workflow.
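As one example of the simple label-free corrections such tools implement, here is a sketch of median normalization of log2 intensities on simulated data, so that per-run loading differences are not mistaken for abundance changes:

```python
# Median normalization of a simulated label-free quantitation matrix (proteins x runs).
import numpy as np

rng = np.random.default_rng(4)
n_proteins, n_runs = 1000, 6
log2_intensity = rng.normal(20, 2, size=(n_proteins, n_runs))
log2_intensity += np.array([0.0, 0.5, -0.3, 0.8, 0.1, -0.6])   # simulated loading bias

# Shift every run so its median matches the overall median.
run_medians = np.median(log2_intensity, axis=0)
normalized = log2_intensity - run_medians + np.median(log2_intensity)
print("per-run medians before:", np.round(run_medians, 2))
print("per-run medians after: ", np.round(np.median(normalized, axis=0), 2))
```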

15.
Metabolic network analysis has attracted much attention in systems biology. It plays a profound role in understanding the key features of organismal metabolic networks and has been successfully applied in several areas, including in silico gene knockouts, production yield improvement using engineered microbial strains, drug target identification, and phenotype prediction. A variety of metabolic network databases and tools have been developed to assist research in these fields, and databases comprising biochemical data are normally used together with metabolic network analysis tools to give more comprehensive results. This paper reviews and compares eight databases and twenty-one recent tools. The aim is to study the different types of tools in terms of their features and usability, and the databases in terms of their scope and the data they provide. The tools fall into three main types: standalone tools, toolbox-based tools, and web-based tools. Comparisons of the databases and the tools are also provided to help software developers and users gain a clearer insight into and a better understanding of metabolic network analysis, and to serve as guidance in choosing tools and databases for a particular research interest.
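Flux balance analysis is a core computation behind many of the reviewed tools; a toy example, with an invented three-reaction network, solved directly as a linear programme with SciPy:

```python
# Toy flux balance analysis: maximize biomass flux subject to steady state S v = 0.
import numpy as np
from scipy.optimize import linprog

# Columns: nutrient uptake, conversion A -> B, biomass drain.
# Rows: internal metabolites A and B.
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake is capped at 10 units
c = np.array([0, 0, -1])                   # maximize biomass flux (linprog minimizes)

result = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes [uptake, A->B, biomass]:", np.round(result.x, 2))
```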

16.
Despite similar computational approaches, there is surprisingly little interaction between the computational neuroscience and the systems biology research communities. In this review I reconstruct the history of the two disciplines and show that this may explain why they grew up apart. The separation is a pity, as both fields can learn quite a bit from each other. Several examples are given, covering sociological, software technical, and methodological aspects. Systems biology is a better organized community which is very effective at sharing resources, while computational neuroscience has more experience in multiscale modeling and the analysis of information processing by biological systems. Finally, I speculate about how the relationship between the two fields may evolve in the near future.

17.
Despite the desire to delve deeper into hallucinations of all types, methodological obstacles have frustrated development of more rigorous quantitative experimental techniques, thereby hampering research progress. Here, we discuss these obstacles and, with reference to visual phenomena, argue that experimentally induced phenomena (e.g. hallucinations induced by flickering light and classical conditioning) can bring hallucinations within reach of more objective behavioural and neural measurement. Expanding the scope of hallucination research raises questions about which phenomena qualify as hallucinations, and how to identify phenomena suitable for use as laboratory models of hallucination. Due to the ambiguity inherent in current hallucination definitions, we suggest that the utility of phenomena for use as laboratory hallucination models should be represented on a continuous spectrum, where suitability varies with the degree to which external sensory information constrains conscious experience. We suggest that existing strategies that group pathological hallucinations into meaningful subtypes based on hallucination characteristics (including phenomenology, disorder and neural activity) can guide extrapolation from hallucination models to other hallucinatory phenomena. Using a spectrum of phenomena to guide scientific hallucination research should help unite the historically separate fields of psychophysics, cognitive neuroscience and clinical research to better understand and treat hallucinations, and inform models of consciousness. This article is part of the theme issue ‘Offline perception: voluntary and spontaneous perceptual experiences without matching external stimulation’.

18.

Background

Modifying the format and content of guidelines may facilitate their use and lead to improved quality of care. We reviewed the medical literature to identify features desired by different users and associated with guideline use, developed a framework of implementability from these features, and found that most guidelines do not contain these elements. Further research is needed to develop and evaluate implementability tools.

Methods

We are launching the Guideline Implementability Research and Application Network (GIRAnet) to enable the development and testing of implementability tools in three domains: Resource Implications, Implementation, and Evaluation. Partners include the Guidelines International Network (G-I-N) and its member guideline developers, implementers, and researchers. In phase one, international guidelines will be examined to identify and describe exemplar tools. Indication-specific and generic tools will populate a searchable repository. In phase two, qualitative analysis of cognitive interviews will be used to understand how developers can best integrate implementability tools in guidelines and how health professionals use them for interpreting and applying guidelines. In phase three, a small-scale pilot test will assess the impact of implementability tools based on quantitative analysis of chart-based behavioural outcomes and qualitative analysis of interviews with participants. The findings will be used to plan a more comprehensive future evaluation of implementability tools.

Discussion

Infrastructure funding to establish GIRAnet will be leveraged with the in-kind contributions of collaborating national and international guideline developers to advance our knowledge of implementation practice and science. Needs assessment and evaluation of GIRAnet will provide a greater understanding of how to develop and sustain such knowledge-exchange networks. Ultimately, by facilitating use of guidelines, this research may lead to improved delivery and outcomes of patient care.

19.
Proteomic databases and software on the web
In the wake of sequencing projects, protein function analysis is evolving fast, from the careful design of assays that address specific questions to 'large-scale' proteomics technologies that yield proteome-wide maps of protein expression or interaction. As these new technologies depend heavily on information storage, representation and analysis, existing databases and software tools are being adapted, while new resources are emerging. This paper describes the proteomics databases and software available through the World-Wide Web, focusing on their present use and applicability. As the resource situation is highly transitory, trends and probable evolutions are discussed whenever applicable.

20.
The United States is rapidly expanding production of renewable energy to meet increased energy demands and reduce greenhouse gas emissions. Wind energy is at the forefront of this transition. A central challenge is understanding the nexus between wind energy development and its capacity to negatively affect wildlife through population declines and habitat loss. Collaboration among conservationists and developers, early in the planning process, is crucial for minimizing wind-wildlife conflicts. Such collaborations require data showing where wind development and wildlife impacts overlap. To meet this challenge and inform decision-making, we provide natural resource agencies and stakeholders with information on where future wind turbines may occur and the potential effects on natural resource management, including the conservation of priority species and their habitats. We developed a machine learning model predicting the suitability of wind turbine occurrence (hereafter, wind turbine suitability) across an eight-state region in the United States, representing some of the richest areas of wind potential. Our model incorporates predictor variables related to infrastructure, land ownership, meteorology, and topography. We additionally created a constraint layer indicating areas where wind would likely not be developed because of zoning, protected lands, and restricted federal agency proximity guidelines. We demonstrate how the predictive wind turbine suitability model informs conservation planning by incorporating animal movement models, relative abundance models coupled with spatial conservation planning software, and population density models for three exemplar, high-priority species often affected by wind energy: whooping cranes (Grus americana), golden eagles (Aquila chrysaetos), and lesser prairie-chickens (Tympanuchus pallidicinctus). By merging the wind turbine and biological models, we identified conservation priority areas (i.e., places sharing high suitability for wind turbines and species use) and places where wind expansion would minimally affect these species. We use these species-wind turbine occurrence relationships to demonstrate applications, illustrating how forecasting areas of wind turbine suitability promotes wildlife conservation. These relationships inform wind energy siting to reduce negative ecological impacts while promoting environmental and economic viability.
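A simplified, hedged sketch of this kind of suitability model (not the authors' actual model or data): a classifier trained on infrastructure, meteorological, topographic and land-ownership predictors, whose predicted probability serves as a continuous suitability score. All values below are simulated.

```python
# Toy wind turbine suitability model on simulated predictor variables.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 5000
X = np.column_stack([
    rng.uniform(0, 50, n),    # distance to transmission line (km)
    rng.uniform(3, 11, n),    # mean wind speed at hub height (m/s)
    rng.uniform(0, 30, n),    # terrain slope (degrees)
    rng.integers(0, 2, n),    # privately owned land (1) vs public (0)
])
# Simulated "turbine present" label: wind speed helps, distance and slope hurt.
logit = 0.8 * X[:, 1] - 0.08 * X[:, 0] - 0.15 * X[:, 2] + 0.5 * X[:, 3] - 4.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Predicted probabilities serve as a suitability score; a separate constraint mask
# (protected areas, zoning) would then zero out excluded locations.
suitability = model.predict_proba(X_test)[:, 1]
print("example suitability scores:", np.round(suitability[:5], 2))
```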
