Similar Articles (20 matching records found)
1.
Remembering the past to imagine the future: the prospective brain
A rapidly growing number of recent studies show that imagining the future depends on much of the same neural machinery that is needed for remembering the past. These findings have led to the concept of the prospective brain: the idea that a crucial function of the brain is to use stored information to imagine, simulate and predict possible future events. We suggest that processes such as memory can be productively re-conceptualized in light of this idea.

2.
OPP: This paper provides the rationale and support for the decisions the OPP will make in requiring and reviewing mutagenicity information. The regulatory requirement for mutagenicity testing to support a pesticide registration is found in 40 CFR Part 158. The guidance as to the specific mutagenicity testing to be performed is found in the OPP's Pesticide Assessment Guidelines, Subdivision F, Hazard Evaluation: Human and Domestic Animals (referred to as the Subdivision F guideline). A revised Subdivision F guideline has been presented that becomes the current guidance for submitters of mutagenicity data to the OPP. The decision to revise the guideline was the result of close examination of the version published in 1982 and the desire to update the guidance in light of developments since then and the current state of the science. After undergoing Agency and public scrutiny, the revised guideline is to be published in 1991. The revised guideline consists of an initial battery of tests (the Salmonella assay, an in vitro mammalian gene mutation assay and an in vivo cytogenetics assay, which may be either a bone marrow assay for chromosomal aberrations or one for micronucleus formation) that should provide an adequate initial assessment of the potential mutagenicity of a chemical. Follow-up testing to clarify results from the initial testing may be necessary. After this information, as well as all other relevant information, is obtained, a weight-of-evidence decision will be made about the possible mutagenicity concern a chemical may present. Testing to pursue qualitative and/or quantitative evidence for assessing heritable risk in relation to human beings will then be considered if a mutagenicity concern exists. This testing may range from tests for evidence of gonadal exposure, to dominant lethal testing, to quantitative tests such as the specific locus and heritable translocation assays. The mutagenicity assessment will be performed in accordance with the Agency's Mutagenicity Risk Assessment Guidelines. The mutagenicity data would also be used in the weight-of-evidence consideration of the potential carcinogenicity of a chemical in accordance with the Agency's Carcinogen Risk Assessment Guidelines. Where there are triggers for carcinogenicity testing, mutagenicity data may be used as one of the triggers after a consideration of available information. It is felt that the revised Subdivision F guideline will provide appropriate, and more specific, guidance concerning the OPP approach to mutagenicity testing for the registration of a pesticide. It also provides a clearer understanding of how the OPP will proceed with its evaluation and decision making concerning the potential heritable effects of a test chemical. (ABSTRACT TRUNCATED AT 400 WORDS)

3.
4.
The Feeding Experiments End-user Database (FEED) is a research tool developed by the Mammalian Feeding Working Group at the National Evolutionary Synthesis Center that permits synthetic, evolutionary analyses of the physiology of mammalian feeding. The tasks of the Working Group are to compile physiologic data sets into a uniform digital format stored at a central source, develop a standardized terminology for describing and organizing the data, and carry out a set of novel analyses using FEED. FEED contains raw physiologic data linked to extensive metadata. It serves as an archive for a large number of existing data sets and a repository for future data sets. The metadata are stored as text and images that describe experimental protocols, research subjects, and anatomical information. The metadata incorporate controlled vocabularies to allow consistent use of the terms used to describe and organize the physiologic data. The planned analyses address long-standing questions concerning the phylogenetic distribution of phenotypes involving muscle anatomy and feeding physiology among mammals, the presence and nature of motor pattern conservation in the mammalian feeding muscles, and the extent to which suckling constrains the evolution of feeding behavior in adult mammals. We expect FEED to be a growing digital archive that will facilitate new research into understanding the evolution of feeding anatomy.

5.
6.
Advances in DNA sequencing technologies have made it possible to rapidly, accurately and affordably sequence entire individual human genomes. As impressive as this ability seems, however, it will not likely amount to much if one cannot extract meaningful information from individual sequence data. Annotating variations within individual genomes and providing information about their biological or phenotypic impact will thus be crucially important in moving individual sequencing projects forward, especially in the context of the clinical use of sequence information. In this paper we consider the various ways in which one might annotate individual sequence variations and point out limitations in the available methods for doing so. It is arguable that, in the foreseeable future, DNA sequencing of individual genomes will become routine for clinical, research, forensic, and personal purposes. We therefore also consider directions and areas for further research in annotating genomic variants.

7.
The importance of glycosylation in biological events and the role it plays in glycoprotein function and structure is an area of growing interest. In order to understand how glycosylation affects the shape or function of a protein, however, it is important to have suitable techniques available for obtaining structural information on the oligosaccharides attached to the protein. For many years the complexity of the structures demanded sophisticated analytical techniques available to only a few specialist laboratories. In many cases these techniques were unavailable or required a large amount of material, and therefore relatively few glycoproteins were fully characterised. In recent years there have been substantial developments in the analysis of glycosylation which have significantly changed the capability to fully characterise molecules of biological interest. A number of different techniques are available which differ in their complexity, the amount of information they provide, the skill needed to perform them and their cost. It is now possible for many laboratories that do not specialise in glycosylation analysis to obtain some information, although this may be incomplete. These developments also make complete characterisation of a glycoprotein a much less daunting task; in many cases it can be performed more easily and with less starting material than was previously required. In this review a summary is given of current techniques, and their suitability for different types of analysis is considered.

8.
The application of mass spectrometry imaging (MS imaging) is growing rapidly, with a constantly increasing number of different instrumental systems and software tools. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. imzML data are divided into two files which are linked by a universally unique identifier (UUID). Experimental details are stored in an XML file which is based on the HUPO-PSI format mzML. Information is provided in the form of a 'controlled vocabulary' (CV) in order to describe the parameters unequivocally and to avoid redundancy in nomenclature. Mass spectral data are stored in a binary file in order to allow efficient storage. imzML is supported by a growing number of software tools. Users are no longer limited to proprietary software, but can use the processing software best suited to a specific question or application. MS imaging data from different instruments can be converted to imzML and displayed with identical parameters in one software package for easier comparison. All technical details necessary to implement imzML, along with additional background information, are available at www.imzml.org.
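Because the two imzML files are tied together only by this shared UUID, checking that the identifiers agree is a natural first step when loading a dataset. Below is a minimal Python sketch; it assumes the UUID cvParam carries the accession IMS:1000080, and that, per the imzML specification, the .ibd file begins with the same UUID as 16 raw bytes:

```python
import uuid
import xml.etree.ElementTree as ET

def imzml_uuids_match(xml_path, ibd_path):
    """Check that the XML part and the binary (.ibd) part of an
    imzML dataset carry the same UUID. The accession IMS:1000080
    for the UUID cvParam is an assumption here."""
    xml_uuid = None
    for _, elem in ET.iterparse(xml_path):
        if elem.tag.endswith('cvParam') and elem.get('accession') == 'IMS:1000080':
            xml_uuid = uuid.UUID(elem.get('value').strip('{}'))
            break
    # The .ibd binary file starts with the UUID as 16 raw bytes.
    with open(ibd_path, 'rb') as f:
        ibd_uuid = uuid.UUID(bytes=f.read(16))
    return xml_uuid == ibd_uuid
```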

9.
The number of well-dated pollen diagrams in Europe has increased considerably over the last 30 years, and many of them have been submitted to the European Pollen Database (EPD). This allows for the construction of increasingly precise maps of Holocene vegetation change across the continent. Chronological information in the EPD has been expressed in uncalibrated radiocarbon years, and most chronologies to date are based on this time scale. Here we present new chronologies, based on calibrated radiocarbon years, for most of the datasets stored in the EPD. Age information associated with pollen diagrams is often derived from the pollen stratigraphy itself or from other sedimentological information. We reviewed these chronological tie points and assigned uncertainties to them. The steps taken to generate the new chronologies are described, and the rationale for a new classification system for age uncertainties is introduced. The resulting chronologies are fit for most continental-scale questions. They may not provide the best age model for particular sites, but they may be viewed as general-purpose chronologies. Taxonomic particularities of the data stored in the EPD are explained. An example is given of how the database can be queried to select samples with appropriate age control, as well as the suitable taxonomic level, to answer a specific research question.
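As an illustration of the kind of query the abstract alludes to, the sketch below selects samples by age-control class and time window. The schema (table and column names) is entirely hypothetical and simplified; the real EPD layout differs:

```python
import sqlite3

# Hypothetical, simplified schema for illustration only; the actual
# EPD table and column names differ. Select mid-Holocene pollen
# counts from samples whose age-uncertainty class is acceptable.
QUERY = """
SELECT s.site_name, c.taxon, c.count, sm.cal_age_bp
FROM counts  AS c
JOIN samples AS sm ON sm.sample_id = c.sample_id
JOIN sites   AS s  ON s.site_id   = sm.site_id
WHERE sm.age_uncertainty_class <= 2            -- good age control only
  AND sm.cal_age_bp BETWEEN 5500 AND 6500      -- mid-Holocene window
"""

with sqlite3.connect("epd_extract.sqlite") as conn:
    rows = conn.execute(QUERY).fetchall()
```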

10.
11.
The approach documented in this article reviews data from earlier process validation lifecycle stages with a described statistical model to provide a "best estimate" of the number of process performance qualification (PPQ) batches that should generate sufficient information to make a scientific, risk-based decision on product robustness. This approach is based upon estimating statistical confidence from current product knowledge (Stage 1), historical variability for similar products/processes (batch-to-batch), and label-claim specifications such as strength. The analysis determines the confidence level associated with the measurements of the product quality attributes and compares them with the specifications. The projected minimum number of PPQ batches will vary depending on the product, process understanding, and attributes, which are critical input parameters for the current statistical model. This new approach considers the critical finished-product CQAs (assay, dissolution, and content uniformity), primarily because assay/content uniformity and dissolution, as well as strength, are the components of the label claim. The key CQAs determine the number of PPQ batches. This approach will ensure that sufficient scientific data are generated to demonstrate process robustness, as called for by the 2011 FDA guidance.
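The article's statistical model is not reproduced in the abstract, but a simple success-run calculation shows, in generic form, how a confidence target translates into a minimum batch count. This is a deliberately crude stand-in, not the authors' risk-based model:

```python
import math

def min_ppq_batches(confidence=0.90, reliability=0.95):
    """Success-run theorem: smallest n such that n consecutive
    passing batches give `confidence` that the true batch pass
    rate is at least `reliability`. Generic illustration only,
    not the model described in the article."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

print(min_ppq_batches(0.90, 0.95))  # 45
print(min_ppq_batches(0.80, 0.90))  # 16
```

The numbers make the article's point concrete: how many PPQ batches are "enough" depends strongly on the confidence and coverage one demands, which in turn should be driven by prior product and process knowledge.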

12.
13.
Motivation: DNA microarrays are a well-known and established technology in biological and pharmaceutical research, providing a wealth of information essential for understanding biological processes and aiding drug development. Protein microarrays are quickly emerging as a follow-up technology, which will also begin to experience rapid growth as the challenges in protein-spotting methodologies are overcome. Like DNA microarrays, their protein counterparts produce large amounts of data that must be suitably analyzed in order to yield meaningful information that should eventually lead to novel drug targets and biomarkers. Although the statistical management of DNA microarray data has been well described, there is no available report that offers a successful consolidated approach to the analysis of high-throughput protein microarray data. We describe the novel application of a statistical methodology to analyze the data from an immune response profiling assay using a human protein microarray with over 5,000 proteins on each chip.
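The paper's consolidated methodology is not detailed in the abstract; as a generic stand-in, the sketch below screens a 5,000-protein array with per-protein t-tests and a Benjamini-Hochberg false-discovery-rate correction, a common baseline for profiling data of this kind:

```python
import numpy as np
from scipy import stats

def protein_array_screen(case, control, fdr=0.05):
    """Per-protein two-sample t-test with Benjamini-Hochberg FDR.
    A generic screening sketch, not the paper's methodology.
    `case`/`control`: arrays of shape (n_samples, n_proteins)
    of normalized signal intensities."""
    t, p = stats.ttest_ind(case, control, axis=0)
    order = np.argsort(p)
    m = len(p)
    thresh = fdr * np.arange(1, m + 1) / m       # BH step-up line
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    return order[:k]  # indices of proteins significant at the FDR

# Example with simulated data for 5,000 proteins:
rng = np.random.default_rng(1)
case = rng.normal(0, 1, size=(12, 5000))
case[:, :50] += 1.5                              # 50 truly reactive proteins
control = rng.normal(0, 1, size=(12, 5000))
hits = protein_array_screen(case, control)
```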

14.
An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques, including a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. By bringing these components together, it has also been possible to construct a very large and expanding repository of biological pathway information, linked to the repository of findings, which is extremely useful in the analysis of DNA microarray data. This repository, in turn, enables our research paradigm to shift towards more comprehensive, systems-based understandings of drug action.

15.
The paper demonstrates that it is possible to construct memory models in which the inserted information is stored in disseminated form, using sequential coding, with the changes in the units forming the models determined by their geometrical connections and by the incoming stream of information. The models are shown to have a large storage capacity, and their efficiency can be made insensitive to the loss of, or damage to, a large fraction of their units. Verification by computer simulation of the analysis and results described in the present paper will be the subject of a future paper.
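The sequential-coding construction itself is not given in the abstract. A Hopfield-style outer-product memory is a compact stand-in that exhibits the same headline property, namely disseminated storage that degrades gracefully when a large fraction of units is destroyed:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                        # units, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product storage: every pattern is spread across the
# whole weight matrix rather than held in any dedicated unit.
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(W, cue, steps=10):
    """Iterate the network from a noisy cue toward a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Damage: silence 30% of the units by zeroing their weights.
dead = rng.choice(N, size=int(0.3 * N), replace=False)
W_damaged = W.copy()
W_damaged[dead, :] = 0.0
W_damaged[:, dead] = 0.0

cue = patterns[0].copy()
cue[rng.choice(N, size=20, replace=False)] *= -1   # 10% of bits flipped
overlap = recall(W_damaged, cue) @ patterns[0] / N
print(f"overlap with the stored pattern after damage: {overlap:.2f}")
```

The surviving units still converge on the stored pattern, so the overlap stays high even though nearly a third of the network has been removed, which is the kind of damage insensitivity the abstract describes.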

16.
Exactly when during evolution hominids acquired their extended extra-uterine growth period is a contentious issue. In order to shed light on the tempo and mode of ontogenetic changes during hominid evolution, research has focused on the pattern and, to a lesser extent, the rate of growth observed in the developing dentition of extant and extinct hominoid taxa. From these data, the absolute timing of events has often been inferred, either implicitly or explicitly. Differences in patterns of growth, especially of the eruption of teeth, are reasonably well documented among hominoids. However, data on the absolute timing of dental developmental events are much more scarce, rendering tentative all inferences about timing from patterns alone. Such inferences are even more tentative when they involve interpreting ontogenetic trajectories in extinct species such as Plio-Pleistocene hominids, which almost certainly had unique patterns of maturation. In order to contribute to the debate about possible relations between pattern and timing in the developing dentition, we have collated information that specifically relates to the absolute timing of developmental events in extant and extinct hominoids and, hence, also to the rate at which processes occur. In doing so, we have attempted to identify both developmental constraints and possible heterochronic processes that may have led to the extended growth period characteristic of humans. There appears to be growing evidence that evolution toward an extended hominid ontogeny did not follow a path that can be described as a simple heterochronic event.

17.
The 2010 International Conference on Bioinformatics, InCoB2010, the annual conference of the Asia-Pacific Bioinformatics Network (APBioNet), has agreed to publish conference papers in compliance with the Minimum Information about a Bioinformatics investigation (MIABi) standard proposed in June 2009. Authors of the conference supplements in BMC Bioinformatics, BMC Genomics and Immunome Research have consented to cooperate in this process, which will include, where appropriate, the procedures described herein to ensure data and software persistence and perpetuity, database and resource re-instantiability, reproducibility of results, author and contributor identity disambiguation, and MIABi compliance. Wherever possible, datasets and databases will be submitted to repositories with standardized terminologies. As standards are evolving, this process is intended as a prelude to the 100 BioDatabases (BioDB100) initiative, whereby APBioNet collaborators will contribute exemplar databases to demonstrate the feasibility of standards compliance and to help refine the process for peer review of such publications and for validation of scientific claims and standards compliance. This testbed represents another step in advancing standards-based processes in the bioinformatics community, which are essential to the growing interoperability of biological data, information, knowledge and computational resources.

18.
Goal, Scope and Background: The EU 5th framework project OMNIITOX will develop models calculating characterisation factors for assessing the potential toxic impacts of chemicals within the framework of LCA. These models will become accessible through a web-based information system. The key objective of the OMNIITOX project is to increase the coverage of substances by such models. To reach this objective, simpler models that need less, but available, data will have to be developed while maintaining scientific quality.
Methods: Experience within the OMNIITOX project has taught that data availability and quality are crucial issues for calculating characterisation factors. Data availability determines whether calculating characterisation factors is possible at all, whereas data quality determines to what extent the resulting characterisation factors are reliable. Today, there is insufficient knowledge and/or resources to have high data availability as well as high data quality and high model quality at the same time.
Results: The OMNIITOX project is developing two inter-related models in order to provide LCA impact assessment characterisation factors for toxic releases for as broad a range of chemicals as possible: 1) a base model, representing a state-of-the-art multimedia model, and 2) a simple model derived from the base model using statistical tools.
Discussion: A preliminary decision tree for using the OMNIITOX information system (IS) is presented. The decision tree illustrates how the OMNIITOX IS can assist an LCA practitioner in finding or deriving characterisation factors for use in life cycle impact assessment of toxic releases.
Conclusions and Outlook: Data availability and quality are crucial issues when calculating characterisation factors for the toxicity impact categories, and the OMNIITOX project is developing a tiered model approach to address them. It is foreseen that a first version of the base model will be ready in late summer of 2004, whereas a first version of the simple model is expected a few months later.
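The decision tree itself is not reproduced in the abstract; the sketch below captures its apparent logic, routing a practitioner to the most rigorous model tier the available substance data can support. All function, parameter and data-requirement names are hypothetical:

```python
def choose_model(available_inputs, base_required, simple_required):
    """Pick which OMNIITOX model tier can be applied, based purely
    on which substance data are available. A sketch of the
    decision-tree idea only; the real tree and the models' actual
    data requirements are defined by the project."""
    if base_required <= available_inputs:
        return "base model (state-of-the-art multimedia model)"
    if simple_required <= available_inputs:
        return "simple model (statistically derived from base model)"
    return "no characterisation factor: insufficient substance data"

# Hypothetical data requirements for each tier:
base_required   = {"Kow", "half_life_air", "half_life_water", "EC50"}
simple_required = {"Kow", "EC50"}
print(choose_model({"Kow", "EC50"}, base_required, simple_required))
```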

19.
20.
Conventional imagers and almost all vision processes rely on theories based on the principle of static image frames. A frame is a 2D matrix that represents the spatial locations of the intensities of a scene projected onto the imager. The notion of a frame is so embedded in machine vision that it is usually taken for granted that this is how biological systems store light information. This paper presents a bioinspired, event-based image formation principle whose output data rely on an asynchronous acquisition process. The generated information is stored in temporal volumes whose size and information content depend only on the dynamic content of the observed scenes. Practical analysis of such information shows that the processing of visual information can only be based on a semiotic process. The paper also provides a general definition of the notion of visual features as the interpretation of signs according to different possible readings of the codified visual signal.
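A generic DVS-style event generator illustrates the acquisition principle: events are emitted only where and when the scene changes, so the output volume scales with scene dynamics rather than with frame count. This is an illustrative sketch, not the paper's exact scheme; the threshold value and log-intensity encoding are assumptions:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a frame sequence into asynchronous events, in the
    spirit of event-based acquisition (a generic sketch, not the
    paper's scheme). An event (t, x, y, polarity) is emitted when
    the log-intensity at a pixel changes by more than `threshold`
    since that pixel's last event."""
    ref = np.log1p(frames[0].astype(float))
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log1p(frame.astype(float))
        diff = log_i - ref
        ys, xs = np.where(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]   # update only where events fired
    # The event list's size depends on scene dynamics, not frame count.
    return events
```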
