Similar Articles
20 similar articles found (search time: 15 ms)
1.
This paper presents Integrated Information Theory (IIT) of consciousness 3.0, which incorporates several advances over previous formulations. IIT starts from phenomenological axioms: information says that each experience is specific – it is what it is by how it differs from alternative experiences; integration says that it is unified – irreducible to non-interdependent components; exclusion says that it has unique borders and a particular spatio-temporal grain. These axioms are formalized into postulates that prescribe how physical mechanisms, such as neurons or logic gates, must be configured to generate experience (phenomenology). The postulates are used to define intrinsic information as “differences that make a difference” within a system, and integrated information as information specified by a whole that cannot be reduced to that specified by its parts. By applying the postulates both at the level of individual mechanisms and at the level of systems of mechanisms, IIT arrives at an identity: an experience is a maximally irreducible conceptual structure (MICS, a constellation of concepts in qualia space), and the set of elements that generates it constitutes a complex. According to IIT, a MICS specifies the quality of an experience and integrated information ΦMax its quantity. From the theory follow several results, including: a system of mechanisms may condense into a major complex and non-overlapping minor complexes; the concepts that specify the quality of an experience are always about the complex itself and relate only indirectly to the external environment; anatomical connectivity influences complexes and associated MICS; a complex can generate a MICS even if its elements are inactive; simple systems can be minimally conscious; complicated systems can be unconscious; there can be true “zombies” – unconscious feed-forward systems that are functionally equivalent to conscious complexes.
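
The whole-versus-parts idea can be illustrated with a toy computation. The sketch below is not IIT 3.0's actual calculus (which searches over all partitions and compares cause–effect repertoires using the earth mover's distance); a fixed bipartition and KL divergence are stand-ins that show how severing the connections between parts loses information in a reciprocally coupled system.

```python
# Toy illustration of irreducibility, NOT the IIT 3.0 calculus: compare a
# two-node system's next-state distribution with the product distribution
# obtained after severing the connections between its halves.
import numpy as np

def next_state_dist(state, cut):
    """Distribution over next states, indexed [a_next, b_next], for two
    binary nodes that copy each other (a' = b, b' = a). Under the cut,
    each node's cross-connection input is replaced by uniform noise."""
    a, b = state
    if cut:
        return np.full((2, 2), 0.25)    # both inputs noised -> uniform
    dist = np.zeros((2, 2))
    dist[b, a] = 1.0                    # deterministic: a' = b, b' = a
    return dist

def irreducibility(state):
    whole = next_state_dist(state, cut=False).ravel()
    parts = next_state_dist(state, cut=True).ravel()
    mask = whole > 0
    return np.sum(whole[mask] * np.log2(whole[mask] / parts[mask]))

# 2 bits for every state: the coupled pair is irreducible to its parts.
print(np.mean([irreducibility((a, b)) for a in (0, 1) for b in (0, 1)]))
```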

2.
Whether some animal species possess consciousness is no longer the question; rather, it is how their environment and evolution have shaped species-specific forms of self-awareness.

Ever since humans acknowledged consciousness in themselves, they have speculated whether animals could have a similar sentience or awareness of their internal and external existence. But although philosophers had pondered consciousness for centuries, it was not until 1927 that the American psychologist Harvey Carr laid the foundations for research on animal consciousness. He argued that awareness in animals could only be understood and measured once we had developed “an accurate and complete knowledge of its essential conditions in man” (Carr, 1927). This may have provided a springboard for the field, but a definition of the essential conditions of consciousness in Homo sapiens has proved elusive to this day—hence, research on animal consciousness has struggled to achieve a sound basis for formulating and evaluating testable hypotheses. However, there has been some progress in developing correlates of human consciousness that can be applied to the study of animals, while brain scanning and imaging have recently allowed comparative studies of human and animal neurological activity during mental tasks. It has also become possible to observe animal behaviour and communication in much greater depth and to identify examples of activities—such as advance planning or recognition of individuals through their vocalizations—that can be associated with human consciousness. Overall, there is a growing consensus that this research has moved beyond merely questioning whether animals can be conscious or aware of themselves to defining different dimensions along which this can be assessed.

3.
Vertebrate nervous systems can generate a remarkable diversity of behaviors. However, our understanding of how behaviors may have evolved in the chordate lineage is limited by the lack of neuroethological studies leveraging our closest invertebrate relatives. Here, we combine high-throughput video acquisition with pharmacological perturbations of bioamine signaling to systematically reveal the global structure of the motor behavioral repertoire of Ciona intestinalis larvae. Most of Ciona's postural variance can be captured by 6 basic shapes, which we term “eigencionas.” Motif analysis of postural time series revealed numerous stereotyped behavioral maneuvers, including “startle-like” and “beat-and-glide.” Computational modeling of swimming dynamics and spatiotemporal embedding of postural features revealed that behavioral differences are generated at the level of motor modules and the transitions between them, which may in part be modulated by bioamines. Finally, we show that flexible motor module usage gives rise to diverse behaviors in response to different light stimuli.

Vertebrate nervous systems can generate a remarkable diversity of behaviors, but how did these evolve in the chordate lineage? A study of the protochordate Ciona intestinalis reveals novel insights into how a simple chordate brain uses neuromodulators to control its behavioral repertoire.
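
The “eigenciona” decomposition is essentially principal component analysis on posture data. A minimal sketch, assuming postures are encoded as vectors of midline angles (the paper's exact preprocessing may differ) and using synthetic stand-in data:

```python
# A sketch of the "eigenshape" idea: PCA on postural time series.
import numpy as np

rng = np.random.default_rng(0)
# Stand-in data: 5,000 frames of 20 midline angles driven by 3 latent bends
latent = rng.normal(size=(5000, 3))
modes = rng.normal(size=(3, 20))
postures = latent @ modes + 0.1 * rng.normal(size=(5000, 20))

centered = postures - postures.mean(axis=0)
cov = centered.T @ centered / len(centered)
eigvals, eigvecs = np.linalg.eigh(cov)          # posture covariance spectrum
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Variance captured by the top 6 modes (the paper's "eigencionas")
print("variance explained:", eigvals[:6].sum() / eigvals.sum())
# Projecting frames onto the modes gives a low-dimensional behavioral series
scores = centered @ eigvecs[:, :6]
```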

4.
As the technique of percutaneous lung biopsy continues to evolve, it offers an increasingly accurate method of establishing the malignancy or benignity of a solitary pulmonary nodule. There are relatively few contraindications to the procedure, and the complications—primarily pneumothorax and hemoptysis—generally resolve without therapy. Transthoracic needle aspiration has an important role in the workup for a “coin lesion.” Other elements of the diagnostic workup—particularly the history, a chest roentgenogram, computed tomography, sputum cytology, and transbronchial brush biopsy—may either add to or substitute for a transthoracic needle aspiration biopsy. An algorithm can be used to guide the diagnostic approach to a solitary pulmonary nodule.

5.
The science of consciousness has made great strides by focusing on the behavioural and neuronal correlates of experience. However, while such correlates are important for progress to occur, they are not enough if we are to understand even basic facts, for example, why the cerebral cortex gives rise to consciousness but the cerebellum does not, though it has even more neurons and appears to be just as complicated. Moreover, correlates are of little help in many instances where we would like to know if consciousness is present: patients with a few remaining islands of functioning cortex, preterm infants, non-mammalian species and machines that are rapidly outperforming people at driving, recognizing faces and objects, and answering difficult questions. To address these issues, we need not only more data but also a theory of consciousness—one that says what experience is and what type of physical systems can have it. Integrated information theory (IIT) does so by starting from experience itself via five phenomenological axioms: intrinsic existence, composition, information, integration and exclusion. From these it derives five postulates about the properties required of physical mechanisms to support consciousness. The theory provides a principled account of both the quantity and the quality of an individual experience (a quale), and a calculus to evaluate whether or not a particular physical system is conscious and of what. Moreover, IIT can explain a range of clinical and laboratory findings, makes a number of testable predictions and extrapolates to a number of problematic conditions. The theory holds that consciousness is a fundamental property possessed by physical systems having specific causal properties. It predicts that consciousness is graded, is common among biological organisms and can occur in some very simple systems. Conversely, it predicts that feed-forward networks, even complex ones, are not conscious, nor are aggregates such as groups of individuals or heaps of sand. Also, in sharp contrast to widespread functionalist beliefs, IIT implies that digital computers, even if their behaviour were to be functionally equivalent to ours, and even if they were to run faithful simulations of the human brain, would experience next to nothing.

6.
Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a “gold standard” to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles (“curve-based” methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were “band-sharing coefficient” methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
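
The contrast between curve-based and band-sharing comparisons can be sketched directly. Below, Pearson correlation of whole densitometric traces stands in for the curve-based methods and a Dice coefficient on called band positions for the band-sharing coefficients; the lane profiles are synthetic stand-ins for gel densitometry.

```python
# A sketch contrasting "curve-based" similarity (Pearson correlation of
# densitometric traces) with a band-sharing (Dice) coefficient.
import numpy as np

x = np.arange(500)

def lane(band_positions):
    """Synthetic densitometric trace: one Gaussian peak per band."""
    return sum(np.exp(-0.5 * ((x - p) / 4) ** 2) for p in band_positions)

def curve_similarity(a, b):
    return np.corrcoef(a, b)[0, 1]

def dice_band_similarity(bands_a, bands_b, tol=5):
    matches = sum(any(abs(a - b) <= tol for b in bands_b) for a in bands_a)
    return 2 * matches / (len(bands_a) + len(bands_b))

iso1, iso2 = [60, 150, 300, 420], [62, 152, 298, 360]
print("curve-based:", curve_similarity(lane(iso1), lane(iso2)))
print("band-sharing:", dice_band_similarity(iso1, iso2))
```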

7.
Most common methods for inferring transposable element (TE) evolutionary relationships are based on dividing TEs into subfamilies using shared diagnostic nucleotides. Although originally justified based on the “master gene” model of TE evolution, computational and experimental work indicates that many of the subfamilies generated by these methods contain multiple source elements. This implies that subfamily-based methods give an incomplete picture of TE relationships. Studies on selection, functional exaptation, and predictions of horizontal transfer may all be affected. Here, we develop a Bayesian method for inferring TE ancestry that gives the probability that each sequence was replicative, its frequency of replication, and the probability that each extant TE sequence came from each possible ancestral sequence. Applying our method to 986 members of the newly-discovered LAVA family of TEs, we show that there were far more source elements in the history of LAVA expansion than subfamilies identified using the CoSeg subfamily-classification program. We also identify multiple replicative elements in the AluSc subfamily in humans. Our results strongly indicate that a reassessment of subfamily structures is necessary to obtain accurate estimates of mutation processes, phylogenetic relationships and historical times of activity.
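
The core Bayesian step can be sketched as follows: given candidate ancestral sequences and their replication frequencies as a prior, compute the posterior probability that an extant copy descends from each ancestor. The toy version below assumes independent per-site substitutions, and the sequences, prior, and substitution rate are hypothetical; the published method additionally infers which sequences were replicative and how often they replicated.

```python
# A sketch of the posterior over ancestors: P(ancestor | extant copy)
# under independent per-site substitutions.
import numpy as np

def likelihood(extant, ancestor, p_sub=0.05):
    """P(extant | ancestor) with per-site substitution probability p_sub."""
    mismatches = sum(a != b for a, b in zip(extant, ancestor))
    return (p_sub / 3) ** mismatches * (1 - p_sub) ** (len(extant) - mismatches)

ancestors = ["ACGTACGTAC", "ACGTTCGGAC"]   # hypothetical source elements
prior = np.array([0.5, 0.5])               # stand-in replication frequencies
extant = "ACGTACGTAA"                      # one observed TE copy

like = np.array([likelihood(extant, anc) for anc in ancestors])
posterior = prior * like / (prior * like).sum()
print(dict(zip(ancestors, np.round(posterior, 3))))
```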

8.
Lightness illusions are fundamental to human perception, and yet why we see them is still the focus of much research. Here we address the question by modelling not human physiology or perception directly, as is typically the case, but our natural visual world and the need for robust behaviour. Artificial neural networks were trained to predict the reflectance of surfaces in a synthetic ecology consisting of 3-D “dead-leaves” scenes under non-uniform illumination. The networks learned to solve this task accurately and robustly given only ambiguous sense data. In addition—and as a direct consequence of their experience—the networks also made systematic “errors” in their behaviour commensurate with human illusions, including brightness contrast and assimilation—although assimilation (specifically White's illusion) only emerged when the virtual ecology included 3-D, as opposed to 2-D, scenes. Subtle variations in these illusions, also found in human perception, were observed, such as the asymmetry of brightness contrast. These data suggest that “illusions” arise in humans because (i) natural stimuli are ambiguous, and (ii) this ambiguity is resolved empirically by encoding the statistical relationship between images and scenes in past visual experience. Since resolving stimulus ambiguity is a challenge faced by all visual systems, a corollary of these findings is that human illusions must be experienced by all visual animals regardless of their particular neural machinery. The data also provide a more formal definition of illusion: the condition in which the true source of a stimulus differs from its most likely (and thus perceived) source. As such, illusions are not fundamentally different from non-illusory percepts, all being direct manifestations of the statistical relationship between images and scenes.
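
A minimal sketch of the training setup, in one dimension rather than the paper's 3-D dead-leaves scenes: a network sees only luminance (reflectance multiplied by an illumination gradient it never observes directly) and must recover reflectance, so systematic residual errors on crafted stimuli would be its analogue of illusions. The network size and scene statistics below are illustrative choices, not the paper's.

```python
# A 1-D sketch of the ambiguous reflectance-recovery task.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, width = 5000, 16
reflectance = rng.uniform(0.05, 0.95, size=(n, width))
slopes = rng.uniform(-0.5, 0.5, size=(n, 1))            # non-uniform light
illumination = 1.0 + slopes * np.linspace(-1, 1, width)
luminance = reflectance * illumination                  # the only input seen

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
net.fit(luminance, reflectance)                         # learn to discount light
# Systematic errors on crafted test patches would be the model's analogue
# of human lightness "illusions".
```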

9.
Parametric methods for identifying laterally transferred genes exploit the directional mutational biases unique to each genome. Yet the development of new, more robust methods—as well as the evaluation and proper implementation of existing methods—relies on an arbitrary assessment of performance using real genomes, where the evolutionary histories of genes are not known. We have used the framework of a generalized hidden Markov model to create artificial genomes modeled after genuine genomes. To model a genome, “core” genes—those displaying patterns of mutational biases shared among large numbers of genes—are identified by a novel gene clustering approach based on the Akaike information criterion. Gene models derived from multiple “core” gene clusters are used to generate an artificial genome that models the properties of a genuine genome. Chimeric artificial genomes—representing those having experienced lateral gene transfer—were created by combining genes from multiple artificial genomes, and the performance of the parametric methods for identifying “atypical” genes was assessed directly. We found that a hidden Markov model that included multiple gene models, each trained on sets of genes representing the range of genotypic variability within a genome, could produce artificial genomes that mimicked the properties of genuine genomes. Moreover, different methods for detecting foreign genes performed differently—i.e., they had different sets of strengths and weaknesses—when identifying atypical genes within chimeric artificial genomes.
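
The AIC-based clustering decision can be sketched with a toy compositional model: fit one multinomial to two gene clusters pooled together versus one to each, and keep the clusters separate if the two-model AIC is lower. The counts and the simple nucleotide-composition model below are stand-ins for the gene models actually used.

```python
# A sketch of AIC-based cluster merging with a toy composition model.
import numpy as np

def multinomial_loglik(counts):
    """Maximized log-likelihood of a multinomial composition model."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    nz = counts > 0
    return float(np.sum(counts[nz] * np.log(p[nz])))

def aic(loglik, k):
    return 2 * k - 2 * loglik

# Pooled (A, C, G, T) counts for two candidate "core" gene clusters
cluster1, cluster2 = [900, 350, 380, 870], [600, 640, 660, 600]
merged = np.add(cluster1, cluster2)

one_model = aic(multinomial_loglik(merged), k=3)      # 3 free frequencies
two_models = aic(multinomial_loglik(cluster1)
                 + multinomial_loglik(cluster2), k=6)
print("keep clusters separate:", two_models < one_model)
```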

10.
Morphospaces—representations of phenotypic characteristics—are often populated unevenly, leaving large parts unoccupied. Such patterns are typically ascribed to contingency, or else to natural selection disfavoring certain parts of the morphospace. The extent to which developmental bias, the tendency of certain phenotypes to preferentially appear as potential variation, also explains these patterns is hotly debated. Here we demonstrate quantitatively that developmental bias is the primary explanation for the occupation of the morphospace of RNA secondary structure (SS) shapes. Upon random mutations, some RNA SS shapes (the frequent ones) are much more likely to appear than others. By using the RNAshapes method to define coarse-grained SS classes, we can directly compare the frequencies that noncoding RNA SS shapes appear in the RNAcentral database to frequencies obtained upon a random sampling of sequences. We show that: 1) only the most frequent structures appear in nature; the vast majority of possible structures in the morphospace have not yet been explored; 2) remarkably small numbers of random sequences are needed to produce all the RNA SS shapes found in nature so far; and 3) perhaps most surprisingly, the natural frequencies are accurately predicted, over several orders of magnitude in variation, by the likelihood that structures appear upon a uniform random sampling of sequences. The ultimate cause of these patterns is not natural selection, but rather a strong phenotype bias in the RNA genotype–phenotype map, a type of developmental bias or “findability constraint,” which limits evolutionary dynamics to a hugely reduced subset of structures that are easy to “find.”
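
The random-sampling comparison can be sketched with the ViennaRNA Python bindings (assuming they are installed); the crude shape abstraction below, which collapses helix runs and drops unpaired positions, is only a rough stand-in for the RNAshapes coarse-graining used in the paper.

```python
# A sketch of phenotype-bias estimation: fold uniformly random sequences
# and count how often each coarse-grained shape appears.
import re
import random
from collections import Counter

import RNA  # ViennaRNA Python bindings (assumed installed)

def crude_shape(dot_bracket):
    """Collapse a dot-bracket string to a coarse shape signature."""
    s = dot_bracket.replace(".", "")
    s = re.sub(r"\(+", "[", s)
    s = re.sub(r"\)+", "]", s)
    return s

random.seed(0)
counts = Counter()
for _ in range(2000):
    seq = "".join(random.choice("ACGU") for _ in range(40))
    structure, _mfe = RNA.fold(seq)
    counts[crude_shape(structure)] += 1

# A few shapes dominate; most of the morphospace is never sampled.
print(counts.most_common(5))
```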

11.
Understanding the assembly processes of symbiont communities, including viromes and microbiomes, is important for improving predictions on symbionts’ biogeography and disease ecology. Here, we use phylogenetic, functional, and geographic filters to predict the similarity between symbiont communities, using as a test case the assembly process in viral communities of Mexican bats. We construct generalized linear models to predict viral community similarity, as measured by the Jaccard index, as a function of differences in host phylogeny, host functionality, and spatial co‐occurrence, evaluating the models using the Akaike information criterion. Two model classes are constructed: a “known” model, where virus–host relationships are based only on data reported in Mexico, and a “potential” model, where viral reports of all the Americas are used, but then applied only to bat species that are distributed in Mexico. Although the “known” model shows only weak dependence on any of the filters, the “potential” model highlights the importance of all three filter types—phylogeny, functional traits, and co‐occurrence—in the assemblage of viral communities. The differences between the “known” and “potential” models highlight the utility of modeling at different “scales” so as to compare and contrast known information at one scale to another one, where, for example, virus information associated with bats is much scarcer.
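
A minimal sketch of the modeling framework, with synthetic stand-ins for the host-pair predictors and the observed Jaccard similarities: fit a GLM of pairwise similarity on phylogenetic distance, trait distance, and co-occurrence, and compare candidate models by AIC. The logit link for a bounded similarity index is an illustrative choice, not necessarily the paper's.

```python
# A sketch of the GLM framework: pairwise viral-community similarity
# regressed on host differences.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pairs = 200
phylo_dist = rng.uniform(0, 1, n_pairs)    # host phylogenetic distance
trait_dist = rng.uniform(0, 1, n_pairs)    # host functional-trait distance
cooccur = rng.integers(0, 2, n_pairs)      # spatial co-occurrence (0/1)
# Stand-in for observed Jaccard similarities: decays with both distances
jaccard = np.clip(0.6 - 0.3 * phylo_dist - 0.2 * trait_dist
                  + 0.1 * cooccur + rng.normal(0, 0.05, n_pairs), 0.01, 0.99)

X = sm.add_constant(np.column_stack([phylo_dist, trait_dist, cooccur]))
fit = sm.GLM(jaccard, X, family=sm.families.Binomial()).fit()
print(fit.params)        # estimated filter effects on community similarity
print("AIC:", fit.aic)   # basis for comparing "known" vs "potential" models
```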

12.
Scholarly collaborations across disparate scientific disciplines are challenging. Collaborators are likely to have their offices in another building, attend different conferences, and publish in other venues; they might speak a different scientific language and value an alien scientific culture. This paper presents a detailed analysis of the success and failure of interdisciplinary papers—as manifested in the citations they receive. For 9.2 million interdisciplinary research papers published between 2000 and 2012, we show that the majority (69.9%) of co-cited interdisciplinary pairs are “win-win” relationships, i.e., papers that cite them have higher citation impact, and as few as 3.3% are “lose-lose” relationships. Papers citing references from subdisciplines positioned far apart (in the conceptual space of the UCSD map of science) attract the highest relative citation counts. The findings support the assumption that interdisciplinary research is more successful and leads to results greater than the sum of its disciplinary parts.

13.
Renewed efforts in tuberculosis (TB) research have led to important new insights into the biology and epidemiology of this devastating disease. Yet, in the face of the modern epidemics of HIV/AIDS, diabetes, and multidrug resistance—all of which contribute to susceptibility to TB—global control of the disease will remain a formidable challenge for years to come. New high-throughput genomics technologies are already contributing to studies of TB's epidemiology, comparative genomics, evolution, and host–pathogen interaction. We argue here, however, that new multidisciplinary approaches—especially the integration of epidemiology with systems biology in what we call “systems epidemiology”—will be required to eliminate TB.

14.
Hypnotic suggestions may change the perceived color of objects. Given that chromatic stimulus information is processed rapidly and automatically by the visual system, how can hypnotic suggestions affect perceived colors in a seemingly immediate fashion? We studied the mechanisms of such color alterations by measuring electroencephalography in two highly suggestible participants as they perceived briefly presented visual shapes under posthypnotic color alteration suggestions such as “all the squares are blue”. One participant consistently reported seeing the suggested colors. Her reports correlated with enhanced evoked upper beta-band activity (22 Hz) 70–120 ms after stimulus onset in response to the shapes mentioned in the suggestion. This effect was not observed in a control condition where the participants merely tried to simulate the effects of the suggestion on behavior. The second participant neither reported color alterations nor showed the evoked beta activity, although her subjective experience and event-related potentials were changed by the suggestions. The results indicate a preconscious mechanism that first compares early visual input with a memory representation of the suggestion and then triggers the color alteration process in response to the objects specified by the suggestion. Conscious color experience is not purely the result of bottom-up processing; it can be modulated, at least in some individuals, by top-down factors such as hypnotic suggestions.
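
The evoked upper-beta measurement can be sketched as band-pass filtering an epoch around 22 Hz, taking the analytic amplitude, and averaging it over the 70–120 ms post-stimulus window. The data below are synthetic and the filter settings are illustrative; a real analysis would average over trials and baseline-correct.

```python
# A sketch of evoked band-power estimation in a post-stimulus window.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # sampling rate (Hz)
t = np.arange(-0.2, 0.5, 1 / fs)            # one epoch, stimulus at t = 0
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, t.size)              # stand-in for a recorded epoch

# Upper-beta band-pass around the reported 22 Hz effect
b, a = butter(4, [18, 26], btype="bandpass", fs=fs)
beta = filtfilt(b, a, eeg)
amplitude = np.abs(hilbert(beta))           # analytic (envelope) amplitude

window = (t >= 0.070) & (t <= 0.120)
print("mean 70-120 ms upper-beta amplitude:", amplitude[window].mean())
```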

15.
Automatism     
R. J. McCaldon, CMAJ, 1964, 91(17): 914–920
Individuals can carry out complex activity while in a state of impaired consciousness, a condition termed “automatism”. Consciousness must be considered from both an organic and a psychological aspect, because impairment of consciousness may occur in both ways. Automatism may be classified as normal (hypnosis), organic (temporal lobe epilepsy), psychogenic (dissociative fugue) or feigned. Often painstaking clinical investigation is necessary to clarify the diagnosis. There is legal precedent for assuming that all crimes must embody both consciousness and will. Jurists are loath to apply this principle without reservation, as this would necessitate acquittal and release of potentially dangerous individuals. However, with the sole exception of the defence of insanity, there is at present no legislation to prohibit release without further investigation of anyone acquitted of a crime on the grounds of “automatism”.

16.
Whether the brain operates at a critical “tipping” point is a long-standing scientific question, with evidence from both cellular and systems-scale studies suggesting that the brain does sit in, or near, a critical regime. Neuroimaging studies of humans in altered states of consciousness have prompted the suggestion that maintenance of critical dynamics is necessary for the emergence of consciousness and complex cognition, and that reduced or disorganized consciousness may be associated with deviations from criticality. Unfortunately, many of the cellular-level studies reporting signs of criticality were performed in non-conscious systems (in vitro neuronal cultures) or unconscious animals (e.g. anaesthetized rats). Here we attempted to address this knowledge gap by exploring critical brain dynamics in invasive ECoG recordings from multiple sessions with a single macaque as the animal transitioned from consciousness to unconsciousness under different anaesthetics (ketamine and propofol). We used a previously validated test of criticality, avalanche dynamics, to assess the differences in brain dynamics between normal consciousness and both drug states. Propofol and ketamine were selected for their differential effects on consciousness (ketamine, but not propofol, is known to induce an unusual state known as “dissociative anaesthesia”). Our analyses indicate that propofol dramatically restricted the size and duration of avalanches, while ketamine allowed more awake-like dynamics to persist. In addition, propofol, but not ketamine, triggered a large reduction in the complexity of brain dynamics. All states, however, showed some signs of persistent criticality when tested for exponent relations and universal shape collapse. This suggests that maintenance of critical brain dynamics may be important for the regulation and control of conscious awareness.
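
The avalanche test can be sketched in a few lines: binarize multichannel activity, bin it in time, define an avalanche as a run of consecutive non-empty bins, and fit power-law exponents to the resulting size distribution. The event threshold, bin width, and exponent estimator below are illustrative stand-ins for the paper's actual choices.

```python
# A sketch of neuronal avalanche analysis on a synthetic event raster.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples, bin_width = 64, 100_000, 10
events = rng.random((n_channels, n_samples)) < 0.001   # stand-in raster

n_bins = n_samples // bin_width
per_bin = events[:, :n_bins * bin_width] \
    .reshape(n_channels, n_bins, bin_width).sum(axis=(0, 2))

sizes, durations, size, dur = [], [], 0, 0
for count in per_bin:
    if count:
        size, dur = size + count, dur + 1
    elif size:                                         # avalanche just ended
        sizes.append(size)
        durations.append(dur)
        size = dur = 0

sizes = np.asarray(sizes, dtype=float)
# Crude maximum-likelihood exponent for the size distribution
alpha = 1 + len(sizes) / np.sum(np.log(sizes / (sizes.min() - 0.5)))
print(f"{len(sizes)} avalanches, size exponent ~ {alpha:.2f}")
```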

17.
Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations—e.g., random noise—cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being “suboptimal”. Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Being a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the “neural code”. Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise—via stochastic resonance or otherwise—than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.
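
The counterintuitive effect is easy to reproduce with a toy threshold detector: a subthreshold sinusoid is nearly invisible at low noise and drowned out at high noise, while an intermediate noise level typically yields the strongest signal–output correlation. The signal, threshold, noise levels, and scoring metric below are illustrative.

```python
# A sketch of stochastic resonance in a simple threshold detector.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * t)      # peak 0.8: subthreshold alone
threshold = 1.0

for noise_sd in (0.15, 0.5, 3.0):
    output = (signal + rng.normal(0, noise_sd, t.size)) > threshold
    score = np.corrcoef(signal, output.astype(float))[0, 1]
    print(f"noise sd {noise_sd:4.2f}: signal-output correlation {score:.3f}")
# The intermediate noise level typically scores best: the hallmark of SR.
```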

18.
How cognitive task behavior is generated by brain network interactions is a central question in neuroscience. Answering this question calls for the development of novel analysis tools that can firstly capture neural signatures of task information with high spatial and temporal precision (the “where and when”) and then allow for empirical testing of alternative network models of brain function that link information to behavior (the “how”). We outline a novel network modeling approach suited to this purpose that is applied to noninvasive functional neuroimaging data in humans. We first dynamically decoded the spatiotemporal signatures of task information in the human brain by combining MRI-individualized source electroencephalography (EEG) with multivariate pattern analysis (MVPA). A newly developed network modeling approach—dynamic activity flow modeling—then simulated the flow of task-evoked activity over more causally interpretable (relative to standard functional connectivity [FC] approaches) resting-state functional connections (dynamic, lagged, direct, and directional). We demonstrate the utility of this modeling approach by applying it to elucidate network processes underlying sensory–motor information flow in the brain, revealing accurate predictions of empirical response information dynamics underlying behavior. Extending the model toward simulating network lesions suggested a role for the cognitive control networks (CCNs) as primary drivers of response information flow, transitioning from early dorsal attention network-dominated sensory-to-response transformation to later collaborative CCN engagement during response selection. These results demonstrate the utility of the dynamic activity flow modeling approach in identifying the generative network processes underlying neurocognitive phenomena.

How is cognitive task behavior generated by brain network interactions? This study describes a novel network modeling approach and applies it to source electroencephalography data. The model accurately predicts future information dynamics underlying behavior and (via simulated lesioning) suggests a role for cognitive control networks as key drivers of response information flow.
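
The static core of activity flow modeling can be sketched as follows: predict a held-out region's activation as the sum of all other regions' activations weighted by their resting-state connectivity. The connectivity and activations below are synthetic and constructed to be mutually consistent; the paper's dynamic version adds lagged, direct, and directional connectivity estimates.

```python
# A sketch of (static) activity flow prediction over a connectivity matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 50
m = np.abs(rng.normal(0, 0.1, (n, n)))
fc = (m + m.T) / 2                     # stand-in resting-state FC (symmetric)
np.fill_diagonal(fc, 0)

# Settle a random pattern toward a connectivity-consistent activation state
activity = rng.normal(0, 1, n)
for _ in range(20):
    activity = fc @ activity
    activity /= np.linalg.norm(activity)

def activity_flow_prediction(act, fc, target):
    """Predict `target`'s activation from all other regions' activity."""
    mask = np.arange(len(act)) != target
    return act[mask] @ fc[mask, target]

predicted = np.array([activity_flow_prediction(activity, fc, j)
                      for j in range(n)])
# Held-out predictions match the actual pattern when activity is consistent
# with the connectivity (correlation near 1 here by construction).
print(np.corrcoef(predicted, activity)[0, 1])
```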

19.
It is widely accepted that the growth and regeneration of tissues and organs is tightly controlled. Although experimental studies are beginning to reveal molecular mechanisms underlying such control, there is still very little known about the control strategies themselves. Here, we consider how secreted negative feedback factors (“chalones”) may be used to control the output of multistage cell lineages, as exemplified by the actions of GDF11 and activin in a self-renewing neural tissue, the mammalian olfactory epithelium (OE). We begin by specifying performance objectives—what, precisely, is being controlled, and to what degree—and go on to calculate how well different types of feedback configurations, feedback sensitivities, and tissue architectures achieve control. Ultimately, we show that many features of the OE—the number of feedback loops, the cellular processes targeted by feedback, even the location of progenitor cells within the tissue—fit with expectations for the best possible control. In so doing, we also show that certain distinctions that are commonly drawn among cells and molecules—such as whether a cell is a stem cell or transit-amplifying cell, or whether a molecule is a growth inhibitor or stimulator—may be the consequences of control, and not a reflection of intrinsic differences in cellular or molecular character.
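
The control logic can be sketched with a two-stage lineage model in which a chalone secreted by differentiated cells lowers the progenitors' self-renewal probability, as GDF11 and activin do in the OE. Parameter values below are illustrative, not fitted to the tissue.

```python
# A sketch of chalone feedback in a two-stage lineage: differentiated
# cells (D) secrete a factor that lowers the progenitors' (S) self-renewal
# probability p; without feedback, progenitors grow without bound.

def simulate(g=0.5, p0=0.7, gamma=2.0, d=0.05, steps=4000, dt=0.05):
    """Euler integration of progenitor/differentiated-cell dynamics."""
    S, D = 1.0, 0.0
    for _ in range(steps):
        p = p0 / (1 + gamma * D)          # feedback: chalone lowers renewal
        dS = g * (2 * p - 1) * S          # net progenitor gain per division
        dD = 2 * g * (1 - p) * S - d * D  # differentiation minus turnover
        S, D = S + dt * dS, D + dt * dD
    return S, D

print("with feedback:", simulate())                         # settles to steady state
print("without (gamma=0):", simulate(gamma=0, steps=200))   # grows unboundedly
```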

20.
Continuous directed evolution of enzymes and other proteins in microbial hosts is capable of outperforming classical directed evolution by executing hypermutation and selection concurrently in vivo, at scale, with minimal manual input. Provided that a target enzyme’s activity can be coupled to growth of the host cells, the activity can be improved simply by selecting for growth. Like all directed evolution, the continuous version requires no prior mechanistic knowledge of the target. Continuous directed evolution is thus a powerful way to modify plant or non-plant enzymes for use in plant metabolic research and engineering. Here, we first describe the basic features of the yeast (Saccharomyces cerevisiae) OrthoRep system for continuous directed evolution and compare it briefly with other systems. We then give a step-by-step account of three ways in which OrthoRep can be deployed to evolve primary metabolic enzymes, using a THI4 thiazole synthase as an example and illustrating the mutational outcomes obtained. We close by outlining applications of OrthoRep that serve growing demands (i) to change the characteristics of plant enzymes destined for return to plants, and (ii) to adapt (“plantize”) enzymes from prokaryotes—especially exotic prokaryotes—to function well in mild, plant-like conditions.

Continuous directed evolution using the yeast OrthoRep system is a powerful way to improve enzymes for use in plant engineering as illustrated by “plantizing” a bacterial thiamin synthesis enzyme.
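
The logic of growth-coupled continuous evolution can be sketched with a toy simulation: a hypermutating target gene plus reproduction weighted by enzyme activity is enough for higher-activity variants to take over with no manual screening. Mutation rates and effect sizes below are arbitrary stand-ins, not a model of OrthoRep's actual error spectrum.

```python
# A toy simulation of hypermutation plus growth-coupled selection.
import random

random.seed(0)
population = [1.0] * 200          # relative enzyme activity of each cell

for generation in range(300):
    # Hypermutation of the target gene (OrthoRep mutates its cargo far
    # faster than the host genome; rates here are arbitrary)
    population = [a * (random.uniform(0.8, 1.3) if random.random() < 0.1 else 1.0)
                  for a in population]
    # Growth-coupled selection: a cell's chance of propagating tracks activity
    population = random.choices(population, weights=population, k=200)

print("mean relative activity after 300 generations:",
      sum(population) / len(population))
```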
