Similar Documents
20 similar documents found (search time: 15 ms)
1.
We have developed two algorithms that construct a simultaneous functional order in a collection of neural elements using purely functional relations. The input of the first algorithm is a matrix describing the total of covariances of signals carried by the members of the neural collection. The second algorithm proceeds from a matrix describing a primitive inclusion relation among the members of the neural collection that can be determined from coincidences in their signal activity. From this information both algorithms compute a partial functional order in the collection of neural elements. Such an order has an objective existence for the system itself and not only for an external observer. By either merging individual neurons or recruiting previously unspecified ones, the partial order is locally transformed into a lattice order. Thus, the simultaneous functional order in a nervous net may become isomorphic with a geometrical order if the system has enough internal coherence. Simulation experiments were done, both for the neuron-merging and the neuron-recruitment routines, to study the number of individuals in the resulting lattice order as a function of the number of individuals in the underlying partially ordered set.
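The input to the second algorithm described above can be made concrete with a toy sketch (my own illustration, not the article's code): a primitive inclusion relation read off from coincidences in binary signal activity. The relation is a preorder; merging neurons with identical activity, as the abstract's neuron-merging routine suggests, yields a partial order.

```python
import numpy as np

# Toy sketch (illustrative only): derive an inclusion relation among
# "neurons" from coincidences in their binary signal activity.
# activity[i, t] == 1 when neuron i is active in time bin t.
activity = np.array([
    [1, 1, 0, 1, 0, 1],   # neuron 0
    [1, 1, 0, 1, 0, 1],   # neuron 1: identical support to neuron 0
    [1, 0, 0, 1, 0, 0],   # neuron 2: active only when neuron 0 is
    [0, 1, 0, 0, 0, 1],   # neuron 3: active only when neuron 1 is
])

def includes(i, j, a):
    """i <= j iff neuron j is active in every bin where neuron i is."""
    return bool(np.all(a[j][a[i] == 1] == 1))

n = activity.shape[0]
order = {(i, j) for i in range(n) for j in range(n) if includes(i, j, activity)}

# The relation is reflexive and transitive (a preorder); merging neurons
# with identical activity (0 and 1 here) turns it into a partial order.
assert (2, 0) in order and (0, 2) not in order
assert (0, 1) in order and (1, 0) in order   # candidates for merging
```

The data and names here are hypothetical; the sketch only shows that an order relation is recoverable from coincidence information alone, with no geometric data consulted.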

2.
The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.

3.
Organisms react to objective properties of bodies in their visual field instead of to the perpetually changing retinal images of those bodies. We show how such a faculty can be mechanized. The organism synthesizes an internal model of the external object, that is, a bundle of expectations of how the visual input will transform in response to the organism's exploratory movements. We deduce the necessary structure of the internal model and we show how the organism can extract this structure from the invariant features of the sensory input transformations. With this internal model it is possible to predict the subsequent aspects (contours) of the visual object as the spatial relations of organism and object change.

4.
The functional order of a collection of neural elements may be defined as the order induced through the total of covariances of signals carried by the members of the collection. Thus functional order differs from geometrical order (e.g. somatotopy) in that geometrical order is only available to external observers, whereas functional order is available to the system itself. It has been shown before that the covariances can be used to construct a partially ordered set that explicitly represents the functional order. It is demonstrated that certain constraints, if satisfied, make this set isomorphic with certain geometrical entities such as triangulations. For instance, there may exist a set of hyperspheres in an n-dimensional space with overlap relations that are described by the same partially ordered set as that which describes the simultaneous/successive order of signals in a nerve. Thus it is logically possible that the optic nerve carries (functionally) two-dimensional signals, quite apart from anatomical considerations (e.g. the geometrically two-dimensional structure of the retina, which exists only for external observers). The dimension of the modality defined by a collection of nervous elements can in principle be obtained from a cross-correlation analysis of multi-unit recordings without any resort to geometrical data such as somatotopic mappings.
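The hypersphere picture can be illustrated with a toy construction of my own (not the article's proof): receptive fields modeled as intervals, i.e. one-dimensional "hyperspheres", whose purely relational overlap scheme already recovers the adjacency of the underlying detector array.

```python
# Toy sketch (illustrative only): receptive fields as intervals
# [c - r, c + r]; their overlap relations alone -- the kind of
# information a covariance analysis supplies -- encode the
# one-dimensional order of the detectors, with no coordinates
# consulted by a "reader" of the overlap scheme.
centers = [0.0, 1.0, 2.0, 3.0]
radius = 0.75

overlap = [[abs(ci - cj) < 2 * radius for cj in centers] for ci in centers]

# Only neighbouring fields overlap, so the overlap scheme by itself
# reconstructs the linear adjacency of the array.
assert overlap[0][1] and not overlap[0][2]
```

The centers and radius are hypothetical values chosen so that exactly adjacent fields overlap; the point is only that geometric structure is recoverable from overlap relations.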

5.
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.

6.
Topographic maps are a fundamental and ubiquitous feature of the sensory and motor regions of the brain. There is less evidence for the existence of conventional topographic maps in associational areas of the brain such as the prefrontal cortex and parietal cortex. The existence of topographically arranged anatomical projections is far more widespread and occurs in associational regions of the brain as well as sensory and motor regions: this points to a more widespread existence of topographically organised maps within associational cortex than currently recognised. Indeed, there is increasing evidence that abstract topographic representations may also occur in these regions. For example, a topographic mnemonic map of visual space has been described in the dorsolateral prefrontal cortex, and topographically arranged visuospatial attentional signals have been described in parietal association cortex. This article explores how abstract representations might be extracted from sensory topographic representations and subsequently code abstract information. Finally, a simple model is presented that shows how abstract topographic representations could be integrated with other information within the brain to solve problems or form abstract associations. The model uses correlative firing to detect associations between different types of stimuli. It is flexible because it can produce correlations between information represented in a topographic or non-topographic coordinate system. It is proposed that a similar process could be used in high-level cognitive operations such as learning and reasoning.
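The correlative-firing idea can be sketched with an illustrative toy (my construction, not the article's model): two channels that tend to co-fire show a large signal correlation, and the measure is indifferent to whether either channel's coding is topographic.

```python
import numpy as np

# Toy sketch (illustrative only): correlative firing detects an
# association between two channels regardless of the coordinate
# system in which each is represented.
rng = np.random.default_rng(1)
T = 2000
a = rng.random(T) < 0.2          # channel A firing (binary time series)
b = a & (rng.random(T) < 0.9)    # channel B tends to co-fire with A
c = rng.random(T) < 0.2          # unrelated channel

def corr(x, y):
    """Pearson correlation between two binary firing records."""
    return float(np.corrcoef(x.astype(float), y.astype(float))[0, 1])

# The associated pair stands out; the unrelated pair does not.
assert corr(a, b) > 0.5
assert abs(corr(a, c)) < 0.2
```

The firing rates and coupling strength are arbitrary choices; any pair with genuinely correlated activity would be flagged the same way.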

7.
Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.

8.
The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual-auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation.

9.
Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

10.
The functional order of a collection of nervous elements is available to the system itself, as opposed to the anatomical geometrical order which exists only for external observers. It has been shown before (Part I) that covariances or coincidences in the signal activity of a neural net can be used in the construction of a simultaneous functional order in which a modality is represented as a concatenation of districts with a lattice structure. In this paper we will show how the resulting functional order in a nervous net can be related to the geometry of the underlying detector array. In particular, we will present an algorithm to construct an abstract geometrical complex from this functional order. The algebraic structure of this complex reflects the topological and geometrical structure of the underlying detector array. We will show how the activated subcomplexes of a complex can be related to segments of the detector array that are activated by the projection of a stimulus pattern. The homology of an abstract complex (and therefore of all of its subcomplexes) can be obtained from simple combinatorial operations on its coincidence scheme. Thus, both the geometry of a detector array and the topology of projections of stimulus patterns may have an objective existence for the neural system itself.
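The claim that homology follows from simple combinatorial operations can be illustrated with a minimal sketch (mine, not the article's algorithm): a basic homological invariant, the Euler characteristic, computed purely from the face list of an abstract complex.

```python
from itertools import combinations

# Minimal sketch (illustrative only): the Euler characteristic of an
# abstract simplicial complex from purely combinatorial operations.
def euler_characteristic(simplices):
    """Alternating sum (-1)^dim over all faces of the listed simplices."""
    faces = set()
    for s in simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    return sum((-1) ** (len(f) - 1) for f in faces)

hollow_triangle = [(0, 1), (1, 2), (2, 0)]   # a triangulated circle
filled_triangle = [(0, 1, 2)]                # a triangulated disk

assert euler_characteristic(hollow_triangle) == 0   # V - E = 3 - 3
assert euler_characteristic(filled_triangle) == 1   # V - E + F = 3 - 3 + 1
```

The invariant distinguishes the circle from the disk using only the combinatorics of which elements co-occur, which is the sense in which topology can exist "for the system itself".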

11.
Evidence from human neuroimaging and animal electrophysiological studies suggests that signals from different sensory modalities interact early in cortical processing, including in primary sensory cortices. The present study aimed to test whether functional near-infrared spectroscopy (fNIRS), an emerging, non-invasive neuroimaging technique, is capable of measuring such multisensory interactions. Specifically, we tested for a modulatory influence of sounds on activity in visual cortex, while varying the temporal synchrony between trains of transient auditory and visual events. Related fMRI studies have consistently reported enhanced activation in response to synchronous compared to asynchronous audiovisual stimulation. Unexpectedly, we found that synchronous sounds significantly reduced the fNIRS response from visual cortex, compared both to asynchronous sounds and to a visual-only baseline. It is possible that this suppressive effect of synchronous sounds reflects the use of an efficacious visual stimulus, chosen for consistency with previous fNIRS studies. Discrepant results may also be explained by differences between studies in how attention was deployed to the auditory and visual modalities. The presence and relative timing of sounds did not significantly affect performance in a simultaneously conducted behavioral task, although the data were suggestive of a positive relationship between the strength of the fNIRS response from visual cortex and the accuracy of visual target detection. Overall, the present findings indicate that fNIRS is capable of measuring multisensory cortical interactions. In multisensory research, fNIRS can offer complementary information to the more established neuroimaging modalities, and may prove advantageous for testing in naturalistic environments and with infant and clinical populations.

12.
Private communication may benefit signalers by reducing the costs imposed by potential eavesdroppers such as parasites, predators, prey, or rivals. It is likely that private communication channels are influenced by the evolution of signalers, intended receivers, and potential eavesdroppers, but most studies only examine how private communication benefits signalers. Here, we address this shortcoming by examining visual private communication from a potential eavesdropper’s perspective. Specifically, we ask if a signaler would face fitness consequences if a potential eavesdropper could detect its signal more clearly. By integrating studies on private communication with those on the evolution of vision, we suggest that published studies find few taxon-based constraints that could keep potential eavesdroppers from detecting most hypothesized forms of visual private communication. However, we find that private signals may persist over evolutionary time if the benefits of detecting a particular signal do not outweigh the functional costs a potential eavesdropper would suffer from evolving the ability to detect it. We also suggest that not all undetectable signals are necessarily private signals: potential eavesdroppers may not benefit from detecting a signal if it co-occurs with signals in other more detectable sensory modalities. In future work, we suggest that researchers consider how the evolution of potential eavesdroppers’ sensory systems influences private communication. Specifically, we suggest that examining the fitness correlates and evolution of potential eavesdroppers can help (1) determine the likelihood that private communication channels are stable over evolutionary time, and (2) demonstrate that undetectable signals are private signals by showing that signalers benefit from a reduction in detection by potential eavesdroppers.

13.
Stimuli from different sensory modalities are thought to be processed initially in distinct unisensory brain areas prior to convergence in multisensory areas. However, signals in one modality can influence the processing of signals from other modalities and recent studies suggest this cross-modal influence may occur early on, even in ‘unisensory’ areas. Some recent psychophysical studies have shown specific cross-modal effects between touch and vision during binocular rivalry, but these cannot completely rule out a response bias. To test for genuine cross-modal integration of haptic and visual signals, we investigated whether congruent haptic input could influence visual contrast sensitivity compared to incongruent haptic input in three psychophysical experiments using a two-interval, two-alternative forced-choice method to eliminate response bias. The initial experiment demonstrated that contrast thresholds for a visual grating were lower when exploring a haptic grating that shared the same orientation compared to an orthogonal orientation. Two subsequent experiments mapped the orientation and spatial frequency tunings for the congruent haptic facilitation of vision, finding a clear orientation tuning effect but not a spatial frequency tuning. In addition to an increased contrast sensitivity for iso-oriented visual-haptic gratings, we found a significant loss of sensitivity for orthogonally oriented visual-haptic gratings. We conclude that the tactile influence on vision is a result of a tactile input to orientation-tuned visual areas.

14.
Does becoming aware of a change to a purely visual stimulus necessarily cause the observer to be able to identify or localise the change, or can change detection occur in the absence of identification or localisation? Several theories of visual awareness stress that we are aware of more than just the few objects to which we attend. In particular, it is clear that to some extent we are also aware of the global properties of the scene, such as the mean luminance or the distribution of spatial frequencies. It follows that we may be able to detect a change to a visual scene by detecting a change to one or more of these global properties. However, detecting a change to a global property may not supply us with enough information to accurately identify or localise which object in the scene has been changed. Thus, it may be possible to reliably detect the occurrence of changes without being able to identify or localise what has changed. Previous attempts to show that this can occur with natural images have produced mixed results. Here we use a novel analysis technique to provide additional evidence that changes can be detected in natural images without also being identified or localised. It is likely that this occurs by the observers monitoring the global properties of the scene.
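The global-property argument can be sketched with a toy example (illustrative only, not the authors' paradigm): a change shifts a global summary statistic of the image, so it is detectable, yet the statistic carries no information about where in the scene the change occurred.

```python
import numpy as np

# Toy sketch (illustrative only): change detection from a global scene
# statistic -- here mean luminance -- without localization.
rng = np.random.default_rng(2)
scene = rng.random((64, 64))

changed = scene.copy()
changed[10:14, 20:24] += 0.5   # a small local brightening
unchanged = scene.copy()

def global_mean_shift(before, after):
    """Absolute shift in mean luminance between two views of a scene."""
    return abs(after.mean() - before.mean())

threshold = 1e-4
assert global_mean_shift(scene, changed) > threshold     # change detected
assert global_mean_shift(scene, unchanged) <= threshold  # no false alarm
```

The mean is a stand-in for any global property (the distribution of spatial frequencies would serve equally well); note that a single scalar shift says nothing about which pixels changed.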

15.
This article elaborates an Amazonian conception of the common and the challenge it poses to Western thinking about individualism and equality. It is suggested that a number of distinctive features of Amazonian Urarina sociality may have their basis in a shared refusal of factors that give rise to relations of equivalence between people. This kind of singularism, or ‘individualism without individuals’, results from an orientation to the common as a collective resource that is antithetical to property, in which subjectivity is shaped in relation to wider ecological and affective resources that are continuously and collectively produced. This embraces not only shared economic resources, such as land or game animals, but also ways of organizing and producing affective, cognitive, and linguistic relations, ‘commonalities’ of various kinds which never reduce differences to an abstract subject, such as the individual of liberalism or the collective of socialism.

16.
Whether fundamental visual attributes, such as color, motion, and shape, are analyzed separately in specialized pathways has been one of the central questions of visual neuroscience. Although recent studies have revealed various forms of cross-attribute interactions, including significant contributions of color signals to motion processing, it is still widely believed that color perception is relatively independent of motion processing. Here, we report a new color illusion, motion-induced color mixing, in which moving bars, the color of each of which alternates between two colors (e.g., red and green), are perceived as the mixed color (e.g., yellow) even though the two colors are never superimposed on the retina. The magnitude of color mixture is significantly stronger than that expected from direction-insensitive spatial integration of color signals. This illusion cannot be ascribed to optical image blurs, including those induced by chromatic aberration, or to involuntary eye movements of the observer. Our findings indicate that color signals are integrated not only at the same retinal location, but also along a motion trajectory. It is possible that this neural mechanism helps us to see veridical colors for moving objects by reducing motion blur, as in the case of luminance-based pattern perception.

17.
It is clear that humans have mental representations of their spatial environments and that these representations are useful, if not essential, in a wide variety of cognitive tasks such as identification of landmarks and objects, guiding actions and navigation and in directing spatial awareness and attention. Determining the properties of mental representation has long been a contentious issue (see Pinker, 1984). One method of probing the nature of human representation is by studying the extent to which representation can surpass or go beyond the visual (or sensory) experience from which it derives. From a strictly empiricist standpoint what is not sensed cannot be represented; except as a combination of things that have been experienced. But perceptual experience is always limited by our view of the world and the properties of our visual system. It is therefore not surprising when human representation is found to be highly dependent on the initial viewpoint of the observer and on any shortcomings thereof. However, representation is not a static entity; it evolves with experience. The debate as to whether human representation of objects is view-dependent or view-invariant that has dominated research journals recently may simply be a discussion concerning how much information is available in the retinal image during experimental tests and whether this information is sufficient for the task at hand. Here we review an approach to the study of the development of human spatial representation under realistic problem solving scenarios. This is facilitated by the use of realistic virtual environments, exploratory learning and redundancy in visual detail.

18.
Yu HB, Shou TD. Acta Physiologica Sinica (《生理学报》), 2000, 52(5): 411-415
Using optical imaging based on intrinsic signals, the spatial-frequency response properties to grating stimuli were studied over a large extent of visual cortex, in cortical regions corresponding to different retinotopic positions. The results show that regions representing the peripheral visual field responded very weakly or not at all to high-spatial-frequency stimuli, whereas regions representing the central visual field responded to stimuli over a broad range of spatial frequencies, with stronger responses at high frequencies. In both the peripheral and the central representations, the closer the represented visual-field position was to the center, the more the spatial-frequency tuning curve and the cutoff spatial frequency shifted toward high frequencies, and this transition was gradual. These results indicate that the spatial-frequency response of cat primary visual cortex…

19.
Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号