Similar articles
Found 20 similar articles (search time: 31 ms)
1.
Social stimuli are known to both attract and direct our attention, but most research on social attention has been conducted in highly controlled laboratory settings lacking in social context. This study examined the role of social context on viewing behaviour of participants whilst they watched a dynamic social scene, under three different conditions. In two social groups, participants believed they were watching a live webcam of other participants. The socially-engaged group believed they would later complete a group task with the people in the video, whilst the non-engaged group believed they would not meet the people in the scene. In a third condition, participants simply free-viewed the same video with the knowledge that it was pre-recorded, with no suggestion of a later interaction. Results demonstrated that the social context in which the stimulus was viewed significantly influenced viewing behaviour. Specifically, participants in the social conditions allocated less visual attention towards the heads of the actors in the scene and followed their gaze less than those in the free-viewing group. These findings suggest that by underestimating the impact of social context in social attention, researchers risk coming to inaccurate conclusions about how we attend to others in the real world.

2.
Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body: in other words, a stable representation of space. Recent functional MRI studies have provided strong evidence that this egocentric representation has a location in the precuneus, on the medial surface of the superior parietal cortex. This is a region previously identified with ‘self-centred mental imagery’, so it seems likely that the stable egocentric representation, required by the motor system, is also the source of our conscious percept of a stable world.

3.
Haynes JD, Rees G. Current Biology: CB, 2005, 15(14): 1301-1307
Can the rapid stream of conscious experience be predicted from brain activity alone? Recently, spatial patterns of activity in visual cortex have been successfully used to predict feature-specific stimulus representations for both visible and invisible stimuli. However, because these studies examined only the prediction of static and unchanging perceptual states during extended periods of stimulation, it remains unclear whether activity in early visual cortex can also predict the rapidly and spontaneously changing stream of consciousness. Here, we used binocular rivalry to induce frequent spontaneous and stochastic changes in conscious experience without any corresponding changes in sensory stimulation, while measuring brain activity with fMRI. Using information that was present in the multivariate pattern of responses to stimulus features, we could accurately predict, and therefore track, participants' conscious experience from the fMRI signal alone while it underwent many spontaneous changes. Prediction in primary visual cortex primarily reflected eye-based signals, whereas prediction in higher areas reflected the color of the percept. Furthermore, accurate prediction during binocular rivalry could be established with signals recorded during stable monocular viewing, showing that prediction generalized across viewing conditions and did not require or rely on motor responses. It is therefore possible to predict the dynamically changing time course of subjective experience from brain activity alone.

4.

Background

The selection of appropriate frames of reference (FOR) is a key factor in the elaboration of spatial perception and the production of robust interaction with our environment. The extent to which we perceive the head axis orientation (subjective head orientation, SHO) with both accuracy and precision likely contributes to the efficiency of these spatial interactions. A first goal of this study was to investigate the relative contribution of both the visual and egocentric FOR (centre-of-mass) to SHO processing. A second goal was to investigate humans' ability to process SHO in various sensory response modalities (visual, haptic and visuo-haptic), and the way these modalities modify reliance on either the visual or egocentric FOR. A third goal was to question whether subjects combined visual and haptic cues optimally to increase SHO certainty and to decrease the FOR disruption effect.

Methodology/Principal Findings

Thirteen subjects were asked to indicate their SHO while the visual and/or egocentric FORs were deviated. Four results emerged from our study. First, visual rod settings to SHO were altered by the tilted visual frame but not by the egocentric FOR alteration, whereas no haptic settings alteration was observed whether due to the egocentric FOR alteration or the tilted visual frame. These results are modulated by individual analysis. Second, visual and egocentric FOR dependency appear to be negatively correlated. Third, response modality enrichment appears to improve SHO. Fourth, several combination rules for the visuo-haptic cues, such as the Maximum Likelihood Estimation (MLE), Winner-Take-All (WTA) or Unweighted Mean (UWM) rule, seem to account for SHO improvements. However, the UWM rule seems to best account for the improvement of visuo-haptic estimates, especially in situations with high FOR incongruence. Finally, the data also indicated that FOR reliance resulted from the application of the UWM rule; this was observed particularly in visually dependent subjects.

Conclusions

Taken together, these findings emphasize the importance of identifying individual spatial FOR preferences to assess the efficiency of our interaction with the environment whilst performing spatial tasks.
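The three combination rules compared in this abstract have simple closed forms. A minimal sketch, assuming hypothetical head-orientation estimates and cue variances (the numbers below are illustrative, not data from the study):

```python
def mle_combine(estimates, variances):
    """Reliability-weighted (maximum-likelihood) fusion: each cue is weighted
    by its inverse variance, and the fused variance shrinks below either cue's."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total

def uwm_combine(estimates):
    """Unweighted mean: equal weight to every cue, regardless of reliability."""
    return sum(estimates) / len(estimates)

def wta_combine(estimates, variances):
    """Winner-take-all: keep only the most reliable (lowest-variance) cue."""
    best = min(range(len(estimates)), key=lambda i: variances[i])
    return estimates[best]

# Hypothetical SHO estimates (degrees): a visual cue biased by a tilted frame
# and a more reliable haptic cue.
visual, haptic = 4.0, 1.0
var_v, var_h = 9.0, 1.0

m, var_m = mle_combine([visual, haptic], [var_v, var_h])   # 1.3, variance 0.9
```

Under MLE the fused variance (0.9) drops below either single-cue variance, the usual signature of optimal integration; the UWM rule, by contrast, ignores cue reliability entirely, which is the behaviour the authors report best fits their visuo-haptic data under high FOR incongruence.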

5.
Constructing an internal representation of the world from successive visual fixations, i.e. fixations separated by saccadic eye movements, is known as trans-saccadic perception (TSP). Research on TSP has traditionally been aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP.

6.
Dynamic neural processing unrelated to changes in sensory input or motor output is likely to be a hallmark of cognitive operations. Here we show that neural representations of space in parietal cortex are dynamic while monkeys perform a spatial cognitive operation on a static visual stimulus. We recorded neural activity in area 7a during a visual maze task in which monkeys mentally followed a path without moving their eyes. We found that the direction of the followed path could be recovered from neuronal population activity. When the monkeys covertly processed a path that turned, the population representation of path direction shifted in the direction of the turn. This neural population dynamic took place during a period of unchanging visual input and showed characteristics of both serial and parallel processing. The data suggest that the dynamic evolution of parietal neuronal activity is associated with the progression of spatial cognitive operations.

7.
A number of neuroimaging techniques have been employed to understand how visual information is transformed along the visual pathway. Although each technique has spatial and temporal limitations, they can each provide important insights into the visual code. While the BOLD signal of fMRI can be quite informative, the visual code is not static and this can be obscured by fMRI’s poor temporal resolution. In this study, we leveraged the high temporal resolution of EEG to develop an encoding technique based on the distribution of responses generated by a population of real-world scenes. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. Our analyses of the mapping results revealed that scenes undergo a series of nonuniform transformations that prioritize different spatial frequencies at different regions of scenes over time. This mapping technique offers a potential avenue for future studies to explore how dynamic feedforward and recurrent processes inform and refine high-level representations of our visual world.

8.
Motor sequence learning is known to rely on more than a single process. As the skill develops with practice, two different representations of the sequence are formed: a goal representation built under spatial allocentric coordinates and a movement representation mediated through egocentric motor coordinates. This study aimed to explore the influence of daytime sleep (nap) on consolidation of these two representations. Through the manipulation of an explicit finger sequence learning task and a transfer protocol, we show that both allocentric (spatial) and egocentric (motor) representations of the sequence can be isolated after initial training. Our results also demonstrate that a nap favors the emergence of offline gains in performance for the allocentric, but not the egocentric representation, even after accounting for fatigue effects. Furthermore, sleep-dependent gains in performance observed for the allocentric representation are correlated with spindle density during non-rapid eye movement (NREM) sleep of the post-training nap. In contrast, performance on the egocentric representation is only maintained, but not improved, regardless of the sleep/wake condition. These results suggest that motor sequence memory acquisition and consolidation involve distinct mechanisms that rely on sleep (and specifically, spindles) or the simple passage of time, depending respectively on whether the sequence is performed under allocentric or egocentric coordinates.

9.
Representing the UK's cattle herd as static and dynamic networks
Network models are increasingly being used to understand the spread of diseases through sparsely connected populations, with particular interest in the impact of animal movements upon the dynamics of infectious diseases. Detailed data collected by the UK government on the movement of cattle may be represented as a network, where animal holdings are nodes, and an edge is drawn between nodes where a movement of animals has occurred. These network representations may vary from a simple static representation, to a more complex, fully dynamic one where daily movements are explicitly captured. Using stochastic disease simulations, a wide range of network representations of the UK cattle herd are compared. We find that the simpler static network representations are often deficient when compared with a fully dynamic representation, and should therefore be used only with caution in epidemiological modelling. In particular, due to temporal structures within the dynamic network, static networks consistently fail to capture the predicted epidemic behaviour associated with dynamic networks even when parameterized to match early growth rates.
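The gap between the two representations is easy to reproduce in miniature. A sketch, assuming a toy set of dated movement records rather than the actual UK data: the static network collapses timing away, while the dynamic version only lets infection travel along a movement on the day it occurs.

```python
# Toy movement records: (day, source_holding, dest_holding). Illustrative only.
movements = [(1, 'A', 'B'), (1, 'C', 'A'), (2, 'B', 'C'),
             (3, 'C', 'D'), (3, 'D', 'A')]

def static_edges(moves):
    """Static representation: collapse all movements into one edge set,
    discarding when each movement happened."""
    return {(s, d) for _, s, d in moves}

def spread_static(moves, seed, steps):
    """Simple SI spread on the static network: any edge can transmit
    at every step, regardless of movement timing."""
    edges = static_edges(moves)
    infected = {seed}
    for _ in range(steps):
        infected |= {d for s, d in edges if s in infected}
    return infected

def spread_dynamic(moves, seed):
    """Dynamic representation: an edge transmits only on the day it occurs,
    so infection must respect the temporal order of movements."""
    infected = {seed}
    for day, s, d in sorted(moves):
        if s in infected:
            infected.add(d)
    return infected
```

Seeding at holding 'D' exposes the failure mode: the static network lets infection run through edges that occurred before D was ever infected, predicting 4 infected holdings where the time-respecting dynamic ordering allows only 2.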

10.
One of the major functions of vision is to allow for an efficient and active interaction with the environment. In this study, we investigate the capacity of human observers to extract visual information from observation of their own actions, and those of others, from different viewpoints. Subjects discriminated the size of objects by observing a point-light movie of a hand reaching for an invisible object. We recorded real reach-and-grasp actions in three-dimensional space towards objects of different shape and size, to produce two-dimensional 'point-light display' movies, which were used to measure size discrimination for reach-and-grasp motion sequences, release-and-withdraw sequences and still frames, all in egocentric and allocentric perspectives. Visual size discrimination from action was significantly better in egocentric than in allocentric view, but only for reach-and-grasp motion sequences: release-and-withdraw sequences or still frames derived no advantage from egocentric viewing. The results suggest that the system may have access to an internal model of action that contributes to calibrate visual sense of size for an accurate grasp.

11.
Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
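The two factors can be made concrete with a toy exemplar model. A sketch under assumptions not in the abstract: stimuli are reduced to a single dimension (say, grating orientation or ripple velocity) and similarity decays exponentially with distance, as in many summed-similarity memory models; the weighting `w` is arbitrary.

```python
import math

def similarity(x, y, tau=1.0):
    """Exponential similarity kernel over a 1-D stimulus dimension
    (an assumed functional form, not the paper's fitted model)."""
    return math.exp(-abs(x - y) / tau)

def summed_similarity(probe, study_list, tau=1.0):
    """Factor 1: total similarity of the probe to all remembered items."""
    return sum(similarity(probe, item, tau) for item in study_list)

def inter_item_homogeneity(study_list, tau=1.0):
    """Factor 2: mean pairwise similarity among the studied items themselves."""
    pairs = [(a, b) for i, a in enumerate(study_list) for b in study_list[i + 1:]]
    if not pairs:
        return 0.0
    return sum(similarity(a, b, tau) for a, b in pairs) / len(pairs)

def recognition_score(probe, study_list, w=0.5, tau=1.0):
    """Familiarity grows with summed probe-list similarity, modulated by
    how similar the studied items are to one another."""
    return summed_similarity(probe, study_list, tau) + w * inter_item_homogeneity(study_list, tau)
```

Because the two terms depend on different arguments (the probe versus the list alone), they can in principle be manipulated independently, which is what the reported orthogonality demonstration exploits.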

12.
Different spatial representations are not stored as a single multipurpose map in the brain. Right brain-damaged patients can show a distortion, such as a compression, of peripersonal and extrapersonal space. Here we report the case of a patient with a right insulo-thalamic disconnection without spatial neglect. The patient, compared with 10 healthy control subjects, showed a constant and reliable increase of her peripersonal and extrapersonal egocentric space representations, which we named spatial hyperschematia, yet her allocentric space representations remained intact. This striking dissociation shows that our interactions with the surrounding world are represented and processed modularly in the human brain, depending on their frame of reference.

13.
Close behavioural coupling of visual orientation may provide a range of adaptive benefits to social species. In order to investigate the natural properties of gaze-following between pedestrians, we displayed an attractive stimulus in a frequently trafficked corridor within which a hidden camera was placed to detect directed gaze from passers-by. The presence of visual cues towards the stimulus by nearby pedestrians increased the probability of passers-by looking as well. In contrast to cueing paradigms used for laboratory research, however, we found that individuals were more responsive to changes in the visual orientation of those walking in the same direction in front of them (i.e. viewing head direction from behind). In fact, visual attention towards the stimulus diminished when oncoming pedestrians had previously looked. Information was therefore transferred more effectively behind, rather than in front of, gaze cues. Further analyses show that neither crowding nor group interactions were driving these effects, suggesting that, within natural settings, gaze-following is strongly mediated by social interaction and facilitates acquisition of environmentally relevant information.

14.
The cerebral cortex utilizes spatiotemporal continuity in the world to help build invariant representations. In vision, these might be representations of objects. The temporal continuity typical of objects has been used in an associative learning rule with a short-term memory trace to help build invariant object representations. In this paper, we show that spatial continuity can also provide a basis for helping a system to self-organize invariant representations. We introduce a new learning paradigm, “continuous transformation learning”, which operates by mapping spatially similar input patterns to the same postsynaptic neurons in a competitive learning system. As the inputs move through the space of possible continuous transforms (e.g. translation, rotation, etc.), the active synapses are modified onto the set of postsynaptic neurons. Because other transforms of the same stimulus overlap with previously learned exemplars, a common set of postsynaptic neurons is activated by the new transforms, and learning of the new active inputs onto the same postsynaptic neurons is facilitated. We demonstrate that a hierarchical model of cortical processing in the ventral visual system can be trained with continuous transformation learning, and highlight differences in the learning of invariant representations from those achieved by trace learning.
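The chaining mechanism can be illustrated with a deliberately small competitive layer. A sketch under toy assumptions not taken from the paper: deterministic initial weights, a 6-pixel input line, one "feature" neuron and one distractor neuron tuned to an unused input dimension, rather than the paper's hierarchical ventral-stream model.

```python
def normalise(w):
    n = sum(x * x for x in w) ** 0.5
    return [x / n for x in w]

def train_ct(transform_sequence, lr=0.5):
    """One competitive layer trained with continuous-transformation learning:
    because successive transforms overlap spatially, the neuron that won the
    previous transform also wins the next, and its weights come to cover the
    whole transform set without any temporal memory trace."""
    # Toy deterministic initialisation: neuron 0 weakly prefers the leftmost
    # pixel; neuron 1 prefers an input dimension the stimulus never activates.
    weights = [normalise([1, 0, 0, 0, 0, 0]),
               normalise([0, 0, 0, 0, 0, 1])]
    winners = []
    for x in transform_sequence:
        acts = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
        k = acts.index(max(acts))          # winner-take-all competition
        winners.append(k)
        # Hebbian step: move the winner's weights toward the current transform
        weights[k] = normalise([wi + lr * xi for wi, xi in zip(weights[k], x)])
    return winners, weights

# Overlapping "translations" of a two-pixel bar across a 6-pixel input line
transforms = [[1, 1, 0, 0, 0, 0],
              [0, 1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 0]]
winners, weights = train_ct(transforms)
```

Every transform recruits neuron 0, and after training its weight vector has non-zero mass at every position the bar visited: a translation-invariant representation built purely from spatial overlap between successive exemplars.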

15.
To act on objects in the world around us, we must first construct an accurate representation of where they are physically located. Recent investigations have begun to shed light on how the brain dynamically binds together visual and somatosensory signals to create task-dependent representations that maintain object constancy.

16.
Manousakis E. BioSystems, 2012, 109(2): 115-125
We have carried out binocular rivalry experiments with a large number of subjects to obtain high-quality statistics on the probability distribution of dominance durations (PDDD) for two cases where (a) the rival stimulus is continuously presented and (b) the rival stimulus is periodically removed, with stimulus-on and stimulus-off intervals T_on and T_off respectively. In the present study we have chosen to study the regime of relatively long stimulus-on time, i.e., T_on > 1 s, where the stimulus presentation duration is significantly longer than the human reaction and recognition time. In the case of a periodically removed stimulus, the total probability of percept reversal during each of the successive stimulus-on intervals T_on can be predicted using the PDDD for continuous viewing. More importantly, this total probability of percept reversal during any stimulus-on interval is independent of the length T_off of the preceding blank time, which can be quite long. We argue that this suggests that, in the regime of long T_on and T_off considered here, the variables representing the perceptual state do not change significantly during long blank intervals. We discuss how these findings impose challenges to theoretical models which aim at describing visual perception.

17.
The spatial distortion hypothesis is one of several theories that explain certain aspects of neglect in patients with right parietal lesions. To determine whether a distorted representation of space can account for the performance of neglect patients in different visuospatial tasks, we asked 26 neglect patients to: (1) bisect horizontal lines and (2) to compare the width of two horizontally aligned bars. A simple mathematical model compatible with the idea of a stationary distortion of represented space in egocentric coordinates explained the results of the line-bisection task. A second model that had basically the same structure and was compatible with the idea of a distorted egocentric representation based on a dynamic remapping of space approximated the size-comparison data. These results support the view that abnormalities observed in the line-bisection and size-comparison tasks are due to a distorted internal representation of the external world. Certain findings suggest that this distortion could be based on a dynamic mapping of space determined by the distribution of visuospatial attention. Received: 14 June 1999 / Accepted in revised form: 30 May 2001

18.
It is clear that humans have mental representations of their spatial environments and that these representations are useful, if not essential, in a wide variety of cognitive tasks such as identifying landmarks and objects, guiding action and navigation, and directing spatial awareness and attention. Determining the properties of mental representation has long been a contentious issue (see Pinker, 1984). One method of probing the nature of human representation is by studying the extent to which representation can surpass or go beyond the visual (or sensory) experience from which it derives. From a strictly empiricist standpoint what is not sensed cannot be represented, except as a combination of things that have been experienced. But perceptual experience is always limited by our view of the world and the properties of our visual system. It is therefore not surprising when human representation is found to be highly dependent on the initial viewpoint of the observer and on any shortcomings thereof. However, representation is not a static entity; it evolves with experience. The debate as to whether human representation of objects is view-dependent or view-invariant that has dominated research journals recently may simply be a discussion concerning how much information is available in the retinal image during experimental tests and whether this information is sufficient for the task at hand. Here we review an approach to the study of the development of human spatial representation under realistic problem solving scenarios. This is facilitated by the use of realistic virtual environments, exploratory learning and redundancy in visual detail.

19.
For animals in dynamic habitats, the contribution of passive (i.e. by wind or current) and active (movements by the animals themselves) displacement determines whether their space use reflects physical or adaptive behavioural processes. Polar bears in the Barents Sea undertake extensive annual migrations in a habitat that is highly dynamic because of continuous sea ice drift. Using combined information from satellite telemetry, satellite images and atmospheric pressure recordings, we estimated the contribution of sea ice drift and movements in the monthly net displacement of female polar bears. We found that movements, and thus behavioural processes, were dominant. Net displacement was directed northwards during summer ice retreat and southwards during winter ice advance. Conversely, movements were directed northwards counteracting a continuous southward drift. Acting as a treadmill, ice drift probably increased the energetic cost of migrations relative to that expected from observed net displacement distances; this suggests that pelagic and adjacent near-shore bears, on stable land-fast ice, have different energy costs. Little concordance between ice drift rates and net displacement and movement rates suggest that polar bears do not adjust their displacement relative to attractive areas with fixed locations, but rather adjust their movements to local habitat suitability. Furthermore, selective use of less dynamic drift ice when with cubs-of-the-year, and use of terrestrial denning areas, appear to be behavioural adaptations to the dynamics of the Barents Sea drift ice. Hence, understanding the behaviour and ecology of animals inhabiting dynamic habitats necessitates incorporation of both dynamic and static habitat variables. Copyright 2003 Published by Elsevier Ltd on behalf of The Association for the Study of Animal Behaviour.

20.
There has long been a problem concerning the presence in the visual cortex of binocularly activated cells that are selective for vertical stimulus disparities because it is generally believed that only horizontal disparities contribute to stereoscopic depth perception. The accepted view is that stereoscopic depth estimates are only relative to the fixation point and that independent information from an extraretinal source is needed to scale for absolute or egocentric distance. Recently, however, theoretical computations have shown that egocentric distance can be estimated directly from vertical disparities without recourse to extraretinal sources. There has been little impetus to follow up these computations with experimental observations, because the vertical disparities that normally occur between the images in the two eyes have always been regarded as being too small to be of significance for visual perception and because experiments have consistently shown that our conscious appreciation of egocentric distance is rather crude and unreliable. Nevertheless, the veridicality of stereoscopic depth constancy indicates that accurate distance information is available to the visual system and that the information about egocentric distance and horizontal disparity are processed together so as to continually recalibrate the horizontal disparity values for different absolute distances. Computations show that the recalibration can be based directly on vertical disparities without the need for any intervening estimates of absolute distance. This may partly explain the relative crudity of our conscious appreciation of egocentric distance. From published data it has been possible to calculate the magnitude of the vertical disparities that the human visual system must be able to discriminate in order for depth constancy to have the observed level of veridicality. 
From published data on the induced effect it has also been possible to calculate the threshold values for the detection of vertical disparities by the visual system. These threshold values are smaller than those needed to provide for the recalibration of the horizontal disparities in the interests of veridical depth constancy. An outline is given of the known properties of the binocularly activated cells in the striate cortex that are able to discriminate and assess the vertical disparities. Experiments are proposed that should validate, or otherwise, the concepts put forward in this paper.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号