Similar Articles
 20 similar articles found
1.
The neural representation of motion aftereffects induced by various visual flows (translational, rotational, motion-in-depth, and translational transparent flows) was studied under the hypothesis that imbalances in discharge activity arise in favor of the direction opposite to the adapting stimulation in monkey MST cells (cells in the medial superior temporal area), which can discriminate the mode (i.e., translational, rotational, or motion-in-depth) of a given flow. In single-unit recording experiments conducted on anaesthetized monkeys, we found that the rate of spontaneous discharge and the sensitivity to a test stimulus moving in the preferred direction decreased after an adapting stimulation moving in the preferred direction, whereas they increased after an adapting stimulation moving in the null direction. To explain consistently, under the same hypothesis, both the bidirectional perception of a transparent visual flow and its unidirectional motion aftereffect, we need to assume the existence of two subtypes of MST D cells showing directionally selective responses to a translational flow: component cells and integration cells. Our physiological investigation revealed that the MST D cells could indeed be divided into two types: one responded to a transparent flow with two peaks, at the instants when the direction of one of the component flows matched the preferred direction of the cell, and the other responded with a single peak, at the instant when the direction of the integrated motion matched the preferred direction. In psychophysical experiments on human subjects, we found evidence for the existence of component and integration representations in the human brain. To explain the different motion percepts, i.e., two transparent flows during presentation of the stimuli and a single flow in the direction opposite to the integrated flow after the flow stimuli stop, we suggest that the pattern-discrimination system can select, from the two motion representations, the one that is consistent with the perception of the pattern. We discuss the computational aspects related to the integration of component motion fields.

2.
Inferior temporal (IT) cortex, the final stage of the ventral visual pathway, is involved in visual object recognition. In everyday life we need to recognize visual objects that are degraded by noise. Psychophysical studies have shown that the accuracy and speed of object recognition decrease as the amount of visual noise increases. However, the neural representation of ambiguous visual objects and the neural mechanisms underlying these behavioral changes are not known. Here, by recording the neuronal spiking activity of macaque IT, we explored the relationship between stimulus ambiguity and IT neural activity. We found smaller amplitude, later onset, earlier offset and shorter duration of the response as visual ambiguity increased. All of these modulations were gradual and correlated with the level of stimulus ambiguity. We found that while the category selectivity of IT neurons decreased with noise, it was preserved over a large range of visual ambiguity. This noise tolerance of category selectivity in IT was lost at the 60% noise level. Interestingly, while the response of IT neurons to visual stimuli at the 60% noise level was significantly larger than their baseline activity and their response to full (100%) noise, it was no longer category selective. The latter finding reveals a neural representation that signals the presence of a visual stimulus without signaling what it is. Taken together, and interpreted in the context of a drift-diffusion model, these findings explain the neural mechanisms behind the changes in perceptual accuracy and speed when recognizing ambiguous objects.
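The closing reference to a drift-diffusion account lends itself to a small illustration. Below is a minimal simulation sketch (not the authors' code; the threshold and the mapping from noise level to drift rate are invented for illustration) showing how lowering the evidence drift rate as stimulus noise increases produces both slower and less accurate decisions.

```python
import numpy as np

def simulate_ddm(drift, threshold=1.0, noise_sd=1.0, dt=1e-3, max_t=3.0,
                 n_trials=500, rng=None):
    """Simulate a drift-diffusion process; return accuracy and mean RT (s)."""
    rng = np.random.default_rng(0) if rng is None else rng
    correct, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold and t < max_t:
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct.append(x >= threshold)   # upper bound = correct response
        rts.append(t)
    return np.mean(correct), np.mean(rts)

# Hypothetical mapping: more visual noise -> weaker sensory evidence (lower drift).
for noise_level, drift in [(0.0, 2.0), (0.3, 1.2), (0.6, 0.4)]:
    acc, rt = simulate_ddm(drift)
    print(f"noise {noise_level:.0%}: accuracy {acc:.2f}, mean RT {rt * 1000:.0f} ms")
```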

3.
Variability is an inherent and important feature of human movement. This variability is not unstructured; it exhibits a chaotic structure. Visual feedback training using regular, predictive visual target motions does not take this essential characteristic of human movement into account, and may result in task-specific learning and loss of visuo-motor adaptability. In this study, we asked how well healthy young adults can track visual target cues of varying degrees of complexity during whole-body swaying in the anterior-posterior (AP) and medio-lateral (ML) directions. Participants were asked to track three visual target motions: a complex (Lorenz attractor), a noise (brown noise) and a periodic (sine) moving target, while receiving online visual feedback about their performance. Postural sway, gaze and target motion were recorded synchronously, and the degree of force-target and gaze-target coupling was quantified using spectral coherence and cross-approximate entropy. Analysis revealed that both force-target and gaze-target coupling were sensitive to the complexity of the stimulus motions. Postural sway showed a higher degree of coherence with the Lorenz attractor than with the brown-noise or sinusoidal stimulus motion. Similarly, gaze was more synchronous with the Lorenz attractor than with the brown-noise and sinusoidal stimulus motions. These results were similar regardless of whether tracking was performed in the AP or ML direction. Based on the theoretical model of optimal movement variability, tracking a complex signal may provide a better stimulus for improving visuo-motor adaptation and learning in postural control.
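A minimal sketch of one of the coupling measures named here, magnitude-squared spectral coherence (the synthetic target and sway signals are invented, and SciPy's scipy.signal.coherence stands in for whatever implementation the study actually used).

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(4)

target = np.sin(2 * np.pi * 0.4 * t)         # stand-in target motion (the sine condition)
# Hypothetical sway: follows the target with a phase lag plus unrelated noise.
sway = 0.8 * np.sin(2 * np.pi * 0.4 * t - 0.6) + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(target, sway, fs=fs, nperseg=1024)
band = (f > 0.2) & (f < 0.6)
print(f"mean target-sway coherence around 0.4 Hz: {Cxy[band].mean():.2f}")
```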

4.
D Cheong, JK Zubieta, J Liu. PLoS One, 2012, 7(6): e39854
Predicting the trajectories of moving objects in our surroundings is important in many everyday scenarios, such as driving, walking, reaching, hunting and combat. We determined human subjects' performance and task-related brain activity in a motion trajectory prediction task. The task required spatial and motion working memory as well as the ability to extrapolate motion information in time to predict future object locations. We showed that the neural circuits associated with motion prediction included frontal, parietal and insular cortex, as well as the thalamus and the visual cortex. Interestingly, deactivation of many of these regions seemed to be more closely related to task performance. The differential activity during motion prediction vs. direct observation was also correlated with task performance. The neural networks involved in our visual motion prediction task are significantly different from those that underlie visual motion memory and imagery. Our results set the stage for examining the effects of deficiencies in these networks, such as those caused by aging and mental disorders, on visual motion prediction and its consequences for mobility-related daily activities.

5.
Perception of a moving visual stimulus can be suppressed or enhanced by surrounding context in adjacent parts of the visual field. We studied the neural processes underlying such contextual modulation with fMRI. We selected motion-selective regions of interest (ROIs) in the occipital and parietal lobes with sufficiently well-defined topography to preclude direct activation by the surround. The BOLD signal in the ROIs was suppressed when the surround motion direction matched the central stimulus direction, and increased when it was opposite. With the exception of hMT+/V5, inserting a gap between the stimulus and the surround abolished surround modulation. This dissociation between hMT+/V5 and other motion-selective regions prompted us to ask whether motion perception is closely linked to processing in hMT+/V5, or reflects the net activity across all motion-selective cortex. The motion aftereffect (MAE) provided a measure of motion perception, and the same stimulus configurations that were used in the fMRI experiments served as adapters. Using a linear model, we found that the MAE was predicted more accurately by the BOLD signal in hMT+/V5 than by the BOLD signal in other motion-selective regions. However, a substantial improvement in prediction accuracy could be achieved by using the net activity across all motion-selective cortex as a predictor, suggesting the overall conclusion that visual motion perception depends upon the integration of activity across different areas of visual cortex.

6.
To interpret visual scenes, visual systems need to segment or integrate multiple moving features into distinct objects or surfaces. Previous studies have found that the perceived direction separation between two transparently moving random-dot stimuli is wider than the actual direction separation. This perceptual “direction repulsion” is useful for segmenting overlapping motion vectors. Here we investigate the effects of motion noise on the directional interaction between overlapping moving stimuli. Human subjects viewed two overlapping random-dot patches moving in different directions and judged the direction separation between the two motion vectors. We found that the perceived direction separation progressively changed from wide to narrow as the level of motion noise in the stimuli was increased, showing a switch from direction repulsion to attraction (i.e., smaller than the veridical direction separation). We also found that direction attraction occurred at a wider range of direction separations than direction repulsion. The normalized effects of both direction repulsion and attraction were the strongest near the direction separation of ∼25° and declined as the direction separation further increased. These results support the idea that motion noise prompts motion integration to overcome stimulus ambiguity. Our findings provide new constraints on neural models of motion transparency and segmentation.

7.

Background

Vision provides the most salient information regarding stimulus motion. However, it has recently been demonstrated that static visual stimuli can be perceived as moving laterally when paired with alternating left-right sound sources. The underlying mechanism of this phenomenon remains unclear; it has not yet been determined whether auditory motion signals, rather than auditory positional signals, can contribute directly to visual motion perception.

Methodology/Principal Findings

Static visual flashes were presented at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flash appeared to move by means of the auditory motion when the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the lateral auditory motion altered visual motion perception in a global motion display where different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception.

Conclusions/Significance

These findings suggest that there are direct interactions between auditory and visual motion signals, and that there may be common neural substrates for auditory and visual motion processing.

8.
Modern techniques such as ion beam therapy or 4D imaging require precise target position information. However, target motion, particularly in the abdomen due to respiration or patient movement, is still a challenge and demands methods that detect and compensate for this motion. Ultrasound represents a non-invasive, dose-free and model-independent alternative to fluoroscopy, respiration belts or optical tracking of the patient surface. Thus, ultrasound-based motion tracking was integrated into irradiation with actively scanned heavy ions. In a first in vitro experiment, the ultrasound tracking system was used to compensate for diverse sinusoidal target motions in two dimensions. A time delay of ∼200 ms between target motion and the reported position data was compensated by a prediction algorithm (an artificial neural network). The irradiated films proved the feasibility of the proposed method. Furthermore, a practicable and reliable calibration workflow was developed to enable the transformation of ultrasound tracking data into the coordinates of the treatment delivery or imaging system – even if the ultrasound probe moves due to respiration. A first proof-of-principle experiment was performed during time-resolved positron emission tomography (4DPET) to test the calibration workflow and to show the accuracy of ultrasound-based motion tracking in vitro. The results showed that optical ultrasound tracking can reach acceptable accuracies and encourage further research.
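A minimal sketch of latency compensation by prediction (the paper states only that an artificial neural network was used; the ~25 Hz tracking rate, input history length, scikit-learn MLPRegressor, and synthetic respiration-like trajectory below are all illustrative assumptions): the network is trained to map a short history of tracked positions onto the position ~200 ms in the future.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

fs = 25.0                  # assumed tracking rate (Hz), ~40 ms per sample
delay_samples = 5          # ~200 ms reporting latency to compensate
history = 8                # number of past samples fed to the network

# Synthetic respiration-like trajectory (mm) standing in for ultrasound tracking data.
t = np.arange(0, 120, 1 / fs)
pos = 10 * np.sin(2 * np.pi * 0.25 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)

# Training pairs: `history` past samples -> position `delay_samples` steps ahead.
n_pairs = len(pos) - history - delay_samples
X = np.stack([pos[i:i + history] for i in range(n_pairs)])
y = pos[history - 1 + delay_samples: history - 1 + delay_samples + n_pairs]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X[:2000], y[:2000])
pred = net.predict(X[2000:])
rmse = np.sqrt(np.mean((pred - y[2000:]) ** 2))
print(f"prediction RMSE at ~200 ms horizon: {rmse:.2f} mm")
```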

9.
The middle temporal area of the extrastriate visual cortex (area MT) is integral to motion perception and is thought to play a key role in the perceptual learning of motion tasks. We have previously found, however, that perceptual learning of a motion discrimination task is possible even when the training stimulus contains locally balanced, motion opponent signals that putatively suppress the response of MT. Assuming at least partial suppression of MT, possible explanations for this learning are that 1) training made MT more responsive by reducing motion opponency, 2) MT remained suppressed and alternative visual areas such as V1 enabled learning and/or 3) suppression of MT increased with training, possibly to reduce noise. Here we used fMRI to test these possibilities. We first confirmed that the motion opponent stimulus did indeed suppress the BOLD response within hMT+ compared to an almost identical stimulus without locally balanced motion signals. We then trained participants on motion opponent or non-opponent stimuli. Training with the motion opponent stimulus reduced the BOLD response within hMT+, and greater reductions in BOLD response were correlated with greater amounts of learning. The opposite relationship between BOLD and behaviour was found at V1 for the group trained on the motion-opponent stimulus and at both V1 and hMT+ for the group trained on the non-opponent motion stimulus. As the average response of many cells within MT to motion opponent stimuli is the same as their response to non-directional flickering noise, the reduced activation of hMT+ after training may reflect noise reduction.

10.
Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), which is a local measure of the variance estimate of the sensory input. We show how this sensory information can be read out and combined to infer from the available information profile which stimulus value is perceived during a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the stimulus value that leads to the same information for each of the alternative directions, and compare the model prediction to standard models considered in the literature (population vector, maximum likelihood, maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task that induces perceptual misjudgements of a target direction of motion by task irrelevant motion in the spatial surround of the target stimulus (motion repulsion). By using the neurophysiological insight that surround motion suppresses neuronal responses to the target motion in the center, all models predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (error on the perceived value) was also explained through the changes of the total FI content with varying surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions do not rely on the most active neurons, but rather on the most informative neuronal responses. We statistically compare the prediction capability of the FI decoding approach and the standard decoding models. Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but with different neuronal tuning characteristics underlying perception. Compared to the FI approach the prediction power of the standard models was based on neurons with far wider tuning width and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations encoding explicitly the available sensory information, and provides testable neurophysiological predictions on neuronal tuning characteristics underlying human perceptual decisions.
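A minimal sketch (illustrative assumptions: an independent Poisson population with Gaussian direction tuning and invented parameters, not the paper's fitted model) of the quantity such a decoding scheme reads out, the population Fisher information FI(s) = Σ_i f_i'(s)² / f_i(s), together with the Cramér-Rao bound it places on discrimination thresholds.

```python
import numpy as np

def tuning(s, pref, amp=30.0, width=40.0, base=2.0):
    """Mean firing rate (Hz) of a direction-tuned neuron with Gaussian tuning."""
    return base + amp * np.exp(-0.5 * ((s - pref) / width) ** 2)

def tuning_deriv(s, pref, amp=30.0, width=40.0):
    """Derivative of the tuning curve with respect to stimulus direction."""
    return -amp * (s - pref) / width ** 2 * np.exp(-0.5 * ((s - pref) / width) ** 2)

prefs = np.linspace(-180, 180, 73)     # preferred directions (deg)
s_grid = np.linspace(-90, 90, 361)     # candidate stimulus directions (deg)

# Fisher information of an independent Poisson population: FI(s) = sum_i f_i'(s)^2 / f_i(s)
FI = np.zeros_like(s_grid)
for pref in prefs:
    FI += tuning_deriv(s_grid, pref) ** 2 / tuning(s_grid, pref)

# Cramér-Rao lower bound on the discrimination threshold (deg).
threshold = 1.0 / np.sqrt(FI)
print(f"FI at 0 deg: {FI[s_grid == 0][0]:.3f} deg^-2, "
      f"threshold >= {threshold[s_grid == 0][0]:.2f} deg")
```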

11.
A neural network for visual motion perception was constructed by combining an array of Reichardt-type correlation elementary motion detectors with a Rumelhart network trained by back-propagating errors (BP), and the process of perceiving visual motion information was investigated with it. The aim is to clarify, from the viewpoint of computational neuroscience, the neural principles leading from the detection of one-dimensional motion components to the perception of two-dimensional pattern motion, and thereby to answer how motion vectors are represented in the brain. Computer simulations show that, under supervised learning, the network can learn to resolve the ambiguity introduced by local motion detection and report the true orientation, direction of motion, and speed of a pattern.
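A minimal sketch of the front-end stage named here, a correlation-type (Reichardt) elementary motion detector (the time constant, receptor spacing, and drifting-grating stimulus are illustrative assumptions; the paper's detector array and BP network are not reproduced): each unit multiplies one input by a low-pass-filtered, delayed copy of its neighbour and subtracts the mirror-symmetric product, giving a signed, direction-selective output.

```python
import numpy as np

def lowpass(x, tau, dt):
    """First-order low-pass filter acting as the Reichardt delay element."""
    y = np.zeros_like(x)
    alpha = dt / (tau + dt)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def reichardt(left, right, tau=0.05, dt=0.001):
    """Opponent Reichardt correlator: positive output signals rightward motion."""
    return lowpass(left, tau, dt) * right - lowpass(right, tau, dt) * left

# Drifting sinusoidal luminance sampled at two neighbouring photoreceptors.
dt = 0.001
t = np.arange(0, 2, dt)
speed_deg_s, spacing_deg, sf = 8.0, 1.0, 0.1       # stimulus speed, receptor spacing, spatial freq.
phase = 2 * np.pi * sf * spacing_deg
left = 1 + np.sin(2 * np.pi * sf * speed_deg_s * t)
right = 1 + np.sin(2 * np.pi * sf * speed_deg_s * t - phase)   # lagged copy = rightward motion

print("mean detector output (positive for rightward motion):",
      np.mean(reichardt(left, right, dt=dt)))
```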

12.
The aim of this paper is to explore the phenomenon of aperiodic stochastic resonance in neural systems with colored noise. For nonlinear dynamical systems driven by Gaussian colored noise, we prove that the stochastic sample trajectory converges in mean square to the corresponding deterministic trajectory as the noise intensity tends to zero, under global and local Lipschitz conditions, respectively. Then, following the forbidden interval theorem, we predict the phenomenon of aperiodic stochastic resonance in bistable and excitable neural systems. Two neuron models are further used to verify the theoretical prediction. Moreover, we disclose the phenomenon of aperiodic stochastic resonance induced by correlation time, and this finding suggests that adjusting noise correlation might be a biologically more plausible mechanism in neural signal processing.
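A minimal sketch (illustrative assumptions throughout: a generic double-well bistable unit, an Ornstein-Uhlenbeck process as the Gaussian colored noise, a rescaled slow OU process as the aperiodic subthreshold input, and input-output correlation as a crude stand-in for the information measures typically used) of sweeping the noise intensity to probe aperiodic stochastic resonance.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 100.0
n = int(T / dt)

# Slow, aperiodic, subthreshold input signal (an Ornstein-Uhlenbeck process, rescaled).
tau_s, s_raw = 5.0, np.zeros(n)
for i in range(1, n):
    s_raw[i] = s_raw[i - 1] - (s_raw[i - 1] / tau_s) * dt + np.sqrt(dt) * rng.standard_normal()
signal = 0.15 * s_raw / s_raw.std()

def run(D, tau_c=0.1):
    """Bistable unit dx/dt = x - x^3 + s(t) + eta(t), driven by colored (OU) noise."""
    x, eta = np.zeros(n), 0.0
    for i in range(1, n):
        eta += -(eta / tau_c) * dt + np.sqrt(2 * D / tau_c) * np.sqrt(dt) * rng.standard_normal()
        x[i] = x[i - 1] + (x[i - 1] - x[i - 1] ** 3 + signal[i - 1] + eta) * dt
    out = np.sign(x)                     # which well the unit occupies over time
    if out.std() == 0:                   # no switching at all: output carries no signal
        return 0.0
    return float(np.corrcoef(signal, out)[0, 1])

# Sweep noise intensity; resonance would appear as an intermediate optimum.
for D in (0.02, 0.1, 0.4):
    print(f"noise intensity {D:.2f}: input-output correlation {run(D):.2f}")
```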

13.
Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar’s position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina’s population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar’s position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
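A minimal sketch of position reconstruction with a linear decoder (the synthetic diffusive trajectory, Gaussian place-like tuning, Poisson spiking, and ridge regularization are all illustrative assumptions, not the study's recorded data or exact decoder).

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_bins = 120, 5000

# Synthetic diffusive bar trajectory (stand-in for the displayed stimulus).
position = np.cumsum(rng.standard_normal(n_bins)) * 0.05

# Synthetic population: each cell's firing rate depends (noisily) on bar position.
centers = rng.uniform(position.min(), position.max(), n_cells)
rates = 5 * np.exp(-0.5 * ((position[:, None] - centers[None, :]) / 1.0) ** 2)
spikes = rng.poisson(rates)                      # shape (n_bins, n_cells)

# Ridge-regularized linear decoder fit on the first half, evaluated on the second.
X = np.hstack([spikes, np.ones((n_bins, 1))])    # spike counts plus a bias column
half, lam = n_bins // 2, 1.0
w = np.linalg.solve(X[:half].T @ X[:half] + lam * np.eye(X.shape[1]),
                    X[:half].T @ position[:half])
err = position[half:] - X[half:] @ w
print(f"decoding RMSE on held-out half: {np.sqrt(np.mean(err ** 2)):.3f} (position units)")
```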

14.
Theories of autism spectrum disorders (ASD) have focused on altered perceptual integration of sensory features as a possible core deficit. Yet, there is little understanding of the neuronal processing of elementary sensory features in ASD. For typically developed individuals, we previously established a direct link between frequency-specific neural activity and the intensity of a specific sensory feature: gamma-band activity in the visual cortex increased approximately linearly with the strength of visual motion. Using magnetoencephalography (MEG), we investigated whether neural activity in individuals with ASD reflects the coherence, and thus intensity, of visual motion in a similar fashion. Thirteen adult participants with ASD and 14 control participants performed a motion direction discrimination task with increasing levels of motion coherence. A polynomial regression analysis revealed that gamma-band power increased significantly more steeply with motion coherence in ASD than in controls, suggesting excessive visual activation with increasing stimulus intensity originating from the motion-responsive visual areas V3, V6 and hMT/V5. Enhanced neural responses with increasing stimulus intensity suggest an enhanced response gain in ASD. Response gain is controlled by excitatory-inhibitory interactions, which also drive high-frequency oscillations in the gamma band. Thus, our data suggest that a disturbed excitatory-inhibitory balance underlies the enhanced neural responses to coherent motion in ASD.
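A minimal sketch of the polynomial-regression step (the gamma-power numbers are invented; only the shape of the analysis is illustrated): fit gamma-band power against motion coherence for each group and compare the fitted coefficients.

```python
import numpy as np

coherence = np.array([0.0, 0.25, 0.5, 0.75, 1.0])     # motion coherence levels
gamma_ctrl = np.array([1.0, 1.2, 1.45, 1.7, 1.9])     # hypothetical gamma power (a.u.)
gamma_asd = np.array([1.0, 1.35, 1.8, 2.3, 2.8])      # hypothetical, steeper increase

for label, y in [("controls", gamma_ctrl), ("ASD", gamma_asd)]:
    # Second-order polynomial fit; np.polyfit returns coefficients highest degree first.
    quad, slope, intercept = np.polyfit(coherence, y, deg=2)
    print(f"{label}: linear coefficient {slope:.2f}, quadratic coefficient {quad:.2f}")
```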

15.
Background

The perceptual ability of humans and monkeys to identify objects in the presence of noise varies systematically and monotonically as a function of how much noise is introduced to the visual display: it becomes more and more difficult to identify an object with increasing noise. Here we examine whether the blood oxygen level-dependent functional magnetic resonance imaging (BOLD fMRI) signal in anesthetized monkeys also shows such monotonic tuning. We employed parametric stimulus sets containing natural images and noise patterns matched for spatial frequency and intensity, as well as intermediate images generated by interpolation between natural images and noise patterns. Anesthetized monkeys provide the unique opportunity to examine visual processing largely in the absence of top-down cognitive modulation and can thus provide an important baseline against which work with awake monkeys and humans can be compared.

Results

We measured BOLD activity in occipital visual cortical areas while natural images and noise patterns, as well as intermediate interpolated patterns at three interpolation levels (25%, 50%, and 75%), were presented to anesthetized monkeys in a block paradigm. We observed reliable visual activity in occipital visual areas including V1, V2, V3, V3A, and V4, as well as in the fundus and anterior bank of the superior temporal sulcus (STS). Natural images consistently elicited higher BOLD levels than noise patterns. For the intermediate images, however, we did not observe monotonic tuning. Instead, we observed a characteristic V-shaped noise-tuning function in primary and extrastriate visual areas: BOLD signals initially decreased as noise was added to the stimulus but then increased again as the pure noise pattern was approached. We present a simple model, based on the number of activated neurons and the strength of activation per neuron, that can account for these results.

Conclusions

We show that, for our parametric stimulus set, BOLD activity varied non-monotonically as a function of how much noise was added to the visual stimuli, unlike the perceptual ability of humans and monkeys to identify such stimuli. This raises important caveats for interpreting fMRI data and demonstrates the importance of assessing not only which neural populations are activated by contrasting conditions during an fMRI study, but also the strength of this activation. This becomes particularly important when the BOLD signal is used to make inferences about the relationship between neural activity and behavior.

16.
Humans can distinguish visual stimuli that differ by features the size of only a few photoreceptors. This is possible despite the incessant image motion due to fixational eye movements, which can be many times larger than the features to be distinguished. To perform well, the brain must identify the retinal firing patterns induced by the stimulus while discounting similar patterns caused by spontaneous retinal activity. This is a challenge since the trajectory of the eye movements, and, consequently, the stimulus position, are unknown. We derive a decision rule for using retinal spike trains to discriminate between two stimuli, given that their retinal image moves with an unknown random walk trajectory. This algorithm dynamically estimates the probability of the stimulus at different retinal locations, and uses this to modulate the influence of retinal spikes acquired later. Applied to a simple orientation-discrimination task, the algorithm performance is consistent with human acuity, whereas naive strategies that neglect eye movements perform much worse. We then show how a simple, biologically plausible neural network could implement this algorithm using a local, activity-dependent gain and lateral interactions approximately matched to the statistics of eye movements. Finally, we discuss evidence that such a network could be operating in the primary visual cortex.
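A heavily simplified sketch of the kind of decision rule described here (illustrative assumptions: a one-dimensional retina of independent Poisson cells, two invented stimulus templates, and a Gaussian random-walk kernel): the belief about the unknown retinal position diffuses at every step, incoming spikes are weighted by that belief, and the marginal evidence for the two stimuli is accumulated and compared.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pos, n_steps, bin_s = 41, 200, 0.05          # positions, time steps, 50 ms bins
positions = np.arange(n_pos)
cells = np.arange(n_pos)

def template(width):
    """Expected spike count of every cell for a stimulus centred at every position."""
    return (1.0 + 8.0 * np.exp(-0.5 * ((cells[None, :] - positions[:, None]) / width) ** 2)) * bin_s

rate_A, rate_B = template(2.0), template(4.0)  # two hypothetical candidate stimuli

# Random-walk transition kernel for the unknown eye (stimulus) position.
sigma_walk = 1.0
kernel = np.exp(-0.5 * (np.arange(-5, 6) / sigma_walk) ** 2)
kernel /= kernel.sum()

def log_evidence(spikes, rate):
    """log p(spikes | stimulus), marginalising over the random walk
    (up to a constant that cancels between the two hypotheses)."""
    post, total = np.full(n_pos, 1.0 / n_pos), 0.0
    for s in spikes:                                   # spike counts of all cells at one step
        post = np.convolve(post, kernel, mode="same")  # diffuse the position belief
        like = np.exp((s * np.log(rate) - rate).sum(axis=1))  # Poisson likelihood per position
        total += np.log((post * like).sum())
        post = post * like
        post /= post.sum()
    return total

# Simulate spikes from stimulus A jittered by a true random walk, then compare evidence.
walk = np.cumsum(rng.normal(0, sigma_walk, n_steps))
true_pos = np.clip(np.round(n_pos / 2 + walk), 0, n_pos - 1).astype(int)
spikes = rng.poisson(rate_A[true_pos])
dA, dB = log_evidence(spikes, rate_A), log_evidence(spikes, rate_B)
print("decision:", "A" if dA > dB else "B", f"(log-likelihood ratio {dA - dB:.1f})")
```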

17.

Background

When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time, we employed biological motion as the moving stimulus, which is important for two reasons. First, biological motion is processed by visual as well as somatosensory brain areas, which makes it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Second, discussions about the mechanisms of the FLE tend to appeal to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities.

Methodology/Principal Findings

Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results from 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE.

Conclusions/Significance

Our study provides the first evidence for an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations in accordance with biased neural competition (Desimone & Duncan, 1995).

18.
19.
Invariant representations of stimulus features are thought to play an important role in producing stable percepts of objects. In the present study, we assess the invariance of neural representations of tactile motion direction with respect to other stimulus properties. To this end, we record the responses evoked in individual neurons in somatosensory cortex of primates, including areas 3b, 1, and 2, by three types of motion stimuli, namely scanned bars and dot patterns, and random dot displays, presented to the fingertips of macaque monkeys. We identify a population of neurons in area 1 that is highly sensitive to the direction of stimulus motion and whose motion signals are invariant across stimulus types and conditions. The motion signals conveyed by individual neurons in area 1 can account for the ability of human observers to discriminate the direction of motion of these stimuli, as measured in paired psychophysical experiments. We conclude that area 1 contains a robust representation of motion and discuss similarities in the neural mechanisms of visual and tactile motion processing.

20.
This paper proposes a new neural network model for visual motion detection. The model can explain both psychophysical findings (the changes of displacement thresholds with stimulus velocity and the perception of apparent motion) and neurophysiological findings (the selectivity for the direction and the velocity of a moving stimulus). To confirm the behavior of the model, numerical experiments were conducted. The results were consistent with both the psychophysical and the neurophysiological findings.
