Similar Articles
20 similar articles found (search time: 15 ms)
1.
Motor behaviors require animals to coordinate neural activity across different areas within their motor system. In particular, the significant processing delays within the motor system must somehow be compensated for. Internal models of the motor system, in particular the forward model, have emerged as important potential mechanisms for compensation. For motor responses directed at moving visual objects, there is, additionally, a problem of delays within the sensory pathways carrying crucial position information. The visual phenomenon known as the flash-lag effect has led to a motion-extrapolation model for compensation of sensory delays. In the flash-lag effect, observers see a flashed item colocalized with a moving item as lagging behind the moving item. Here, we explore the possibility that the internal forward model and the motion-extrapolation model are analogous mechanisms compensating for neural delays in the motor and the visual system, respectively. In total darkness, observers moved their right hand gripping a rod while a visual flash was presented at various positions in relation to the rod. When the flash was aligned with the rod, observers perceived it in a position lagging behind the instantaneous felt position of the invisible rod. These results suggest that compensation of neural delays for time-varying motor behavior parallels compensation of delays for time-varying visual stimulation.
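The size of the lag that such a compensation mechanism must cancel can be sketched numerically. This is a minimal illustration of our own (the function name and parameter values are invented, not taken from the study): with transmission delay dt and movement velocity v, an uncompensated signal is mislocalized by roughly v * dt, which is the shift an extrapolation or forward-model mechanism would need to remove.

```python
def extrapolated_shift(velocity_deg_per_s: float, delay_s: float) -> float:
    """Spatial offset (deg) accumulated over a neural transmission delay.

    This is the lag a forward-model-like extrapolation mechanism would
    have to compensate to localize a moving limb or stimulus in real time.
    """
    return velocity_deg_per_s * delay_s

# e.g. a hand moving at 20 deg/s seen through a 100 ms sensory delay
lag = extrapolated_shift(20.0, 0.1)  # about 2 deg of uncompensated lag
```

The same arithmetic applies to either modality, which is what makes the motor and visual compensation mechanisms plausibly analogous.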

2.
Shi Z  Nijhawan R 《PloS one》2012,7(3):e33651
Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was found only with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea.
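The logic of the correction-for-extrapolation hypothesis can be rendered as a toy model. This sketch is entirely our own hypothetical illustration (names and values are invented): the represented position is extrapolated over the delay, and at motion-termination a 'failed prediction' signal can cancel the overshoot; when that corrective signal is unavailable (as in a scotoma), the forward shift survives.

```python
def represented_position(last_seen_pos, velocity, delay_s,
                         terminated, correction_available):
    """Extrapolate over the neural delay; optionally cancel the overshoot
    at motion offset (the 'correction-for-extrapolation' idea)."""
    extrapolated = last_seen_pos + velocity * delay_s
    if terminated and correction_available:
        # the failed prediction is corrected back to the last veridical position
        return last_seen_pos
    return extrapolated

# Normal viewing: the correction signal removes the forward shift at termination.
normal = represented_position(10.0, 5.0, 0.1,
                              terminated=True, correction_available=True)
# Scotoma-like case: no correction signal, so the extrapolated shift persists.
scotoma = represented_position(10.0, 5.0, 0.1,
                               terminated=True, correction_available=False)
```

On this account the foveal results follow directly: with the corrective signal suppressed, the fovea should show the forward shift, which is what was observed.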

3.
The mechanism of positional localization has recently been debated due to interest in the flash-lag effect, which occurs when a briefly flashed stationary stimulus is perceived to lag behind a spatially aligned moving stimulus. Here we report positional localization observed at motion offsets as well as at onsets. In the 'flash-lead' effect, a moving object is perceived to be behind a spatially concurrent stationary flash before the two disappear. With 'reverse-repmo', subjects mis-localize the final position of a moving bar in the direction opposite to the trajectory of motion. Finally, we demonstrate that simultaneous onset and offset effects lead to a perceived compression of visual space. By characterizing illusory effects observed at motion offsets as well as at onsets, we provide evidence that the perceived position of a moving object is the result of an averaging process over a short time period, weighted towards the most recent positions. Our account explains a variety of motion illusions, including the compression of moving shapes when viewed through apertures.
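The recency-weighted averaging account can be made concrete. A minimal sketch of our own, assuming (our assumption, not the authors' stated model) an exponential recency weighting with time constant tau:

```python
import numpy as np

def perceived_position(positions, dt=0.01, tau=0.05):
    """Recency-weighted average of a position history.

    positions: samples of the object's position, oldest first, at interval dt.
    Weights decay exponentially with sample age, so the most recent samples
    dominate, yet the average still trails the final position slightly.
    """
    positions = np.asarray(positions, dtype=float)
    ages = (len(positions) - 1 - np.arange(len(positions))) * dt  # newest age 0
    weights = np.exp(-ages / tau)
    return float(np.dot(weights, positions) / weights.sum())

history = np.arange(0.0, 1.0, 0.01)   # object moving at constant speed
p = perceived_position(history)       # trails the final sample (0.99)
```

At motion offset this residual lag reproduces a mislocalization opposite to the motion direction, consistent with the 'reverse-repmo' observation, while a stationary history is localized veridically.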

4.
Born RT  Groh JM  Zhao R  Lukasewycz SJ 《Neuron》2000,26(3):725-734
To track a moving object, its motion must first be distinguished from that of the background. The center-surround properties of neurons in the middle temporal visual area (MT) may be important for signaling the relative motion between object and background. To test this, we microstimulated within MT and measured the effects on monkeys' eye movements to moving targets. We found that stimulation at "local motion" sites, where receptive fields possessed antagonistic surrounds, shifted pursuit in the preferred direction of the neurons, whereas stimulation at "wide-field motion" sites shifted pursuit in the opposite, or null, direction. We propose that activating wide-field sites simulated background motion, thus inducing a target motion signal in the opposite direction. Our results support the hypothesis that neuronal center-surround mechanisms contribute to the behavioral segregation of objects from the background.

5.
Eye movements constitute one of the most basic means of interacting with our environment, allowing us to orient to, localize, and scrutinize the variety of potentially interesting objects that surround us. In this review we discuss the role of the parietal cortex in the control of saccadic and smooth pursuit eye movements, whose purpose is to rapidly displace the line of gaze and to maintain a moving object on the central retina, respectively. From single-cell recording studies in the monkey we know that distinct sub-regions of the parietal lobe are implicated in these two kinds of movement. The middle temporal (MT) and medial superior temporal (MST) areas show neuronal activities related to moving visual stimuli and to ocular pursuit. The lateral intraparietal (LIP) area exhibits visual and saccadic neuronal responses. Electrophysiology, which in essence is a correlational method, cannot entirely solve the question of the functional implication of these areas: are they primarily involved in sensory processing, in motor processing, or in some intermediate function? Lesion approaches (reversible or permanent) in the monkey can provide important information in this respect. Lesions of MT or MST produce deficits in the perception of visual motion, which would argue for their possible role in sensory guidance of ocular pursuit rather than in directing motor commands to the eye muscles. Lesions of LIP do not produce specific visual impairments and cause only subtle saccadic deficits. However, recent results have shown the presence of severe deficits in spatial attention tasks. LIP could thus be implicated in the selection of relevant objects in the visual scene and provide a signal for directing the eyes toward these objects. Functional imaging studies in humans confirm the role of the parietal cortex in pursuit, saccadic, and attentional networks, and show a high degree of overlap with monkey data. Parietal lobe lesions in humans also result in behavioral deficits very similar to those observed in the monkey. Altogether, these different sources of data consistently point to the involvement of the parietal cortex in the representation of space, at an intermediate stage between vision and action.

6.
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
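The flow-parsing mechanism (estimate the self-motion component of the optic flow and subtract it) reduces, in its simplest form, to a vector-field subtraction. A schematic sketch of our own; the field contents and function names are invented for illustration:

```python
import numpy as np

def parse_object_motion(retinal_flow, estimated_self_flow):
    """Residual flow after removing the self-motion estimate.

    retinal_flow, estimated_self_flow: (N, 2) arrays of 2-D motion vectors,
    one per scene element. Elements with a large residual are candidate
    independently moving objects.
    """
    residual = np.asarray(retinal_flow) - np.asarray(estimated_self_flow)
    speeds = np.linalg.norm(residual, axis=1)
    return residual, int(np.argmax(speeds))

# Ten elements share the self-motion flow; one target also translates on its own.
self_flow = np.tile([1.0, 0.0], (10, 1))
retinal = self_flow.copy()
retinal[3] += [0.0, 2.0]   # the independently moving target
residual, target_index = parse_object_motion(retinal, self_flow)
```

On this reading, a congruent auditory motion cue would simply sharpen the residual attributed to the target element, which is consistent with the reported multisensory benefit.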

7.
While it was initially thought that attention was space-based, more recent work has shown that attention can also be object-based, in that observers find it easier to attend to different parts of the same object than to different parts of different objects. Such studies have shown that attention more easily spreads throughout an object than between objects. However, it is not known to what extent attention can be confined to just part of an object and to what extent attending to part of an object necessarily causes the entire object to be attended. We have investigated this question in the context of the multiple object tracking paradigm in which subjects are shown a scene containing a number of identical moving objects and asked to mentally track a subset of them, the targets, while not tracking the remainder, the distractors. Previous work has shown that joining each target to a distractor by a solid connector so that each target-distractor pair forms a single physical object, a technique known as target-distractor merging, makes it hard to track the targets, suggesting that attention cannot be restricted to just parts of objects. However, in that study the target-distractor pairs continuously changed length, which in itself would have made tracking difficult. Here we show that it remains difficult to track the targets even when the target-distractor pairs do not change length and even when the targets can be differentiated from the connectors that join them to the distractors. Our experiments suggest that it is hard to confine attention to just parts of objects, at least in the case of moving objects.

8.
Expertise in recognizing objects in cluttered scenes is a critical skill for our interactions in complex environments and is thought to develop with learning. However, the neural implementation of object learning across stages of visual analysis in the human brain remains largely unknown. Using combined psychophysics and functional magnetic resonance imaging (fMRI), we show a link between shape-specific learning in cluttered scenes and distributed neuronal plasticity in the human visual cortex. We report stronger fMRI responses for trained than untrained shapes across early and higher visual areas when observers learned to detect low-salience shapes in noisy backgrounds. However, training with high-salience pop-out targets resulted in lower fMRI responses for trained than untrained shapes in higher occipitotemporal areas. These findings suggest that learning of camouflaged shapes is mediated by increasing neural sensitivity across visual areas to bolster target segmentation and feature integration. In contrast, learning of prominent pop-out shapes is mediated by associations at higher occipitotemporal areas that support sparser coding of the critical features for target recognition. We propose that the human brain learns novel objects in complex scenes by reorganizing shape processing across visual areas, while taking advantage of natural image correlations that determine the distinctiveness of target shapes.

9.
Motion detection is an essential biological property of the vertebrate brain. In order to localize moving objects exactly, intrinsic time delays of the neuronal network must be compensated for. Invariance of position with regard to the velocity of a stimulus, achieved through a negative spatial shift, is one option for compensation. Experimental results found in the present study support the view that negative spatial shift occurs in the visual cortex of the cat and the tectum of the frog. On the order of 30% of the visual neurons may be suited to compensating for intrinsic time delays.

10.
Transmission of neural signals in the brain takes time due to the slow biological mechanisms that mediate it. During such delays, the position of moving objects can change substantially. The brain could use statistical regularities in the natural world to compensate for neural delays and represent moving stimuli closer to real time. This possibility has been explored in the context of the flash-lag illusion, where a briefly flashed stimulus in alignment with a moving one appears to lag behind the moving stimulus. Despite numerous psychophysical studies, the neural mechanisms underlying the flash-lag illusion remain poorly understood, partly because it has never been studied electrophysiologically in behaving animals. Macaques are a prime model for such studies, but it is unknown if they perceive the illusion. By training monkeys to report their percepts unbiased by reward, we show that they indeed perceive the illusion in a manner qualitatively similar to humans. Importantly, the magnitude of the illusion is smaller in monkeys than in humans, but it increases linearly with the speed of the moving stimulus in both species. These results provide further evidence for the similarity of sensory information processing in macaques and humans and pave the way for detailed neurophysiological investigations of the flash-lag illusion in behaving macaques.

11.
Active spatial perception in the vibrissa scanning sensorimotor system
Haptic perception is an active process that provides an awareness of objects that are encountered as an organism scans its environment. In contrast to the sensation of touch produced by contact with an object, the perception of object location arises from the interpretation of tactile signals in the context of the changing configuration of the body. A discrete sensory representation and a low number of degrees of freedom in the motor plant make the ethologically prominent rat vibrissa system an ideal model for the study of the neuronal computations that underlie this perception. We found that rats with only a single vibrissa can combine touch and movement to distinguish the location of objects that vary in angle along the sweep of vibrissa motion. The patterns of this motion and of the corresponding behavioral responses show that rats can scan potential locations and decide which location contains a stimulus within 150 ms. This interval is consistent with just one to two whisk cycles and provides constraints on the underlying perceptual computation. Our data argue against strategies that do not require the integration of sensory and motor modalities. The ability to judge angular position with a single vibrissa thus connects previously described, motion-sensitive neurophysiological signals to perception in the behaving animal.

12.
Visually targeted reaching to a specific object is a demanding neuronal task requiring the translation of the location of the object from a two-dimensional set of retinotopic coordinates to a motor pattern that guides a limb to that point in three-dimensional space. This sensorimotor transformation has been intensively studied in mammals, but was not previously thought to occur in animals with smaller nervous systems such as insects. We studied horse-head grasshoppers (Orthoptera: Proscopididae) crossing gaps and found that visual inputs are sufficient for them to target their forelimbs to a foothold on the opposite side of the gap. High-speed video analysis showed that these reaches were targeted accurately and directly to footholds at different locations within the visual field through changes in forelimb trajectory and body position, and did not involve stereotyped searching movements. The proscopids estimated distant locations using peering to generate motion parallax, a monocular distance cue, but appeared to use binocular visual cues to estimate the distance of nearby footholds. Following occlusion of regions of binocular overlap, the proscopids resorted to peering to target reaches even to nearby locations. Monocular cues were sufficient for accurate targeting of the ipsilateral but not the contralateral forelimb. Thus, proscopids are capable not only of the sensorimotor transformations necessary for visually targeted reaching with their forelimbs but also of flexibly using different visual cues to target reaches.

13.
Coordinated eye-head movements evoked by the presentation of visual, auditory and combined audio-visual targets were studied in 24 human subjects. For targets located at 60 deg eccentricity, latencies of eye and head movements were shorter for auditory than for visual stimuli. Latencies were shorter for bisensory than for monosensory targets. The eye and head latencies were differently influenced by the modality of the stimulus when the eccentricity of the target was changed, but not when the stimulus duration was varied. The different responses of the eye and the head depending on target modality and target eccentricity can be partially attributed to perceptual and central processing mechanisms, and bear on the question of which event initiates coordinated eye-head orientation.

14.
We use visual information to guide our grasping movements. When grasping an object with a precision grip, the two digits need to reach two different positions more or less simultaneously, but the eyes can only be directed to one position at a time. Several studies that have examined eye movements in grasping have found that people tend to direct their gaze near where their index finger will contact the object. Here we aimed at better understanding why people do so by asking participants to lift an object off a horizontal surface. They were to grasp the object with a precision grip while movements of their hand, eye and head were recorded. We confirmed that people tend to look closer to positions that a digit needs to reach more accurately. Moreover, we show that where they look as they reach for the object depends on where they were looking before, presumably because they try to minimize the time during which the eyes are moving so fast that no new visual information is acquired. Most importantly, we confirmed that people have a bias to direct gaze towards the index finger’s contact point rather than towards that of the thumb. In our study, this cannot be explained by the index finger contacting the object before the thumb. Instead, it appears to be because the index finger moves to a position that is hidden behind the object that is grasped, probably making this the place at which one is most likely to encounter unexpected problems that would benefit from visual guidance. However, this cannot explain the bias that was found in previous studies, where neither contact point was hidden, so it cannot be the only explanation for the bias.

15.
Visual saliency is a fundamental yet hard to define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient", without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population. In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.
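The first-spike latency coding proposed in this abstract can be illustrated with a toy population. The activation-to-latency mapping below is an arbitrary monotone choice of our own, not one specified by the author: stronger drive yields an earlier first spike, so the rank order of first spikes implicitly encodes saliency without any separate saliency map.

```python
def first_spike_latency(activation, t_max=0.100):
    """Map activation strength to a first-spike latency (seconds).

    Any monotonically decreasing mapping supports rank-order coding;
    this particular form is an illustrative choice, not a fitted model.
    """
    return t_max / (1.0 + activation)

# Hypothetical population activations (arbitrary units).
population = {"salient_target": 9.0, "distractor_a": 1.0, "distractor_b": 2.5}
latencies = {k: first_spike_latency(a) for k, a in population.items()}
first_to_fire = min(latencies, key=latencies.get)  # the most salient item
```

Reading out only the earliest spike in the wave is what allows a single feed-forward pass to select the salient object before slower, fuller spike trains arrive.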

16.
C Hudson  PD Howe  DR Little 《PloS one》2012,7(8):e43796
In everyday life, we often need to attentively track moving objects. A previous study has claimed that this tracking occurs independently in the left and right visual hemifields (Alvarez & Cavanagh, 2005, Psychological Science, 16, 637–647). Specifically, it was shown that observers were much more accurate at tracking objects that were spread over both visual hemifields as opposed to when all were confined to a single visual hemifield. In that study, observers were not required to remember the identities of the objects. In contrast, in real life there is seldom any benefit to tracking an object unless you can also recall its identity. It has been predicted that when observers are required to remember the identities of the tracked objects a bilateral advantage should no longer be observed (Oksama & Hyönä, 2008, Cognitive Psychology, 56, 237–283). We tested this prediction and found that a bilateral advantage still occurred, though it was not as strong as when observers were not required to remember the identities of the targets. Even in the latter case we found that tracking was not completely independent in the two visual hemifields. We present a combined model of multiple object tracking and multiple identity tracking that can explain our data.

17.
Mazza V  Caramazza A 《PloS one》2011,6(2):e17453
The ability to concurrently process multiple visual objects is fundamental for a coherent perception of the world. A core component of this ability is the simultaneous individuation of multiple objects. Many studies have addressed the mechanism of object individuation but it remains unknown whether the visual system mandatorily individuates all relevant elements in the visual field, or whether object indexing depends on task demands. We used a neural measure of visual selection, the N2pc component, to evaluate the flexibility of multiple object individuation. In three ERP experiments, participants saw a variable number of target elements among homogeneous distracters and performed either an enumeration task (Experiment 1) or a detection task, reporting whether at least one (Experiment 2) or a specified number of target elements (Experiment 3) was present. While in the enumeration task the N2pc response increased as a function of the number of targets, no such modulation was found in Experiment 2, indicating that individuation of multiple targets is not mandatory. However, a modulation of the N2pc similar to the enumeration task was visible in Experiment 3, further highlighting that object individuation is a flexible mechanism that binds indexes to object properties and locations as needed for further object processing.

18.
Can nonhuman animals attend to visual stimuli as whole, coherent objects? We investigated this question by adapting for use with pigeons a task in which human participants must report whether two visual attributes belong to the same object (one-object trial) or to different objects (two-object trial). We trained pigeons to discriminate a pair of differently colored shapes that had two targets either on a single object or on two different objects. Each target equally often appeared on the one-object and two-object stimuli; therefore, a specific target location could not serve as a discriminative cue. The pigeons learned to report whether the two target dots were located on a single object or on two different objects; follow-up tests demonstrated that this ability was not entirely based on memorization of the dot patterns and locations. Additional tests disclosed predominant stimulus control by the color, but not by the shape, of the two objects. These findings suggest that human psychophysical methods are readily applicable to the study of object discrimination by nonhuman animals.

19.
The central program of a targeted movement includes a component intended to compensate for the weight of the arm; this is why the accuracy of pointing to a memorized position of a visual target in darkness depends on the orientation of the moving limb in relation to the vertical axis. Transition from the vertical to the horizontal body position is accompanied by a shift of the final hand position along the body axis towards the head. We studied how pointing errors and visual localization of the target are modified due to adaptation to the horizontal body position; targeted movements to a real target were repeatedly performed during the adaptation period. Three types of experiments were performed: a basic experiment, and two different experiments with adaptation realized under somewhat dissimilar conditions. In the course of the first adaptation experiment, subjects received no visual information on the hand’s position in space, and targeted movements of the arm to a luminous target could be corrected using proprioceptive information only. With such a paradigm, the accuracy of pointing to memorized visual targets showed no adaptation-related changes. In the second adaptation experiment, subjects were allowed to continuously view a marker (a light-emitting diode taped to the fingertip). After such adaptation practice, the accuracy of pointing movements to memorized targets increased: both constant and variational errors, as well as both components of constant error (i.e., X and Y errors), significantly dropped. Testing the accuracy of visual localization of the targets by visual/verbal adjustment, performed after this adaptation experiment, showed that the pattern of errors did not change compared with that in the basic experiment. Therefore, we can conclude that sensorimotor adaptation to the horizontal position develops much more successfully when the subject obtains visual information about the working point position; such adaptation is not related to modifications in the system of visual localization of the target.

20.
The tiger beetle larva shows two distinct visual responses, a predatory jump and an evasive withdrawal into the burrow (escape). In the present study the visual stimuli controlling these two responses have been behaviorally analyzed in the larva of Cicindela chinensis. The threshold size needed for a target to elicit both responses is a visual angle of 5–7°. The velocities of moving targets needed to elicit the responses are 0.4–33° s−1 for the jump and 0.76–90° s−1 for the escape. Choice between the two responses appears to be controlled by the actual target size rather than by the angular size. It also appears to be controlled by the target height. As the height of the target increases, the probability for the jump decreases, whereas the probability for the escape increases. Response properties of the larva with only a single functional stemma, the other stemmata being occluded, are different from those of the intact larva, which suggests cooperation of at least two stemmata for the release of different visual responses. Visual responses of the one-stemma larva still vary, however, with target size and target height, which suggests the visual responses are partially controlled even by a single stemma. Although our data do not resolve these conflicting results, more than one stemma is necessary for a firm choice between the two responses. Accepted: 13 May 1997
