Similar Documents
20 similar documents found (search time: 15 ms)
1.
Faces are an important and unique class of visual stimuli, and have been of interest to neuroscientists for many years. Faces are known to elicit certain characteristic behavioral markers, collectively labeled “holistic processing”, while non-face objects are not processed holistically. However, little is known about the underlying neural mechanisms. The main aim of this computational simulation work is to investigate the neural mechanisms that make face processing holistic. Using a model of primate visual processing, we show that a single key factor, “neural tuning size”, is able to account for three important markers of holistic face processing: the Composite Face Effect (CFE), Face Inversion Effect (FIE) and Whole-Part Effect (WPE). Our proof-of-principle specifies the precise neurophysiological property that corresponds to the poorly-understood notion of holism, and shows that this one neural property controls three classic behavioral markers of holism. Our work is consistent with neurophysiological evidence, and makes further testable predictions. Overall, we provide a parsimonious account of holistic face processing, connecting computation, behavior and neurophysiology.

2.
Visual saliency is a fundamental yet hard to define property of objects or locations in the visual world. In a context where objects and their representations compete to dominate our perception, saliency can be thought of as the "juice" that makes objects win the race. It is often assumed that saliency is extracted and represented in an explicit saliency map, which serves to determine the location of spatial attention at any given time. It is then by drawing attention to a salient object that it can be recognized or categorized. I argue against this classical view that visual "bottom-up" saliency automatically recruits the attentional system prior to object recognition. A number of visual processing tasks are clearly performed too fast for such a costly strategy to be employed. Rather, visual attention could simply act by biasing a saliency-based object recognition system. Under natural conditions of stimulation, saliency can be represented implicitly throughout the ventral visual pathway, independent of any explicit saliency map. At any given level, the most activated cells of the neural population simply represent the most salient locations. The notion of saliency itself grows increasingly complex throughout the system, mostly based on luminance contrast until information reaches visual cortex, gradually incorporating information about features such as orientation or color in primary visual cortex and early extrastriate areas, and finally the identity and behavioral relevance of objects in temporal cortex and beyond. Under these conditions the object that dominates perception, i.e. the object yielding the strongest (or the first) selective neural response, is by definition the one whose features are most "salient", without the need for any external saliency map. In addition, I suggest that such an implicit representation of saliency can be best encoded in the relative times of the first spikes fired in a given neuronal population. In accordance with our subjective experience that saliency and attention do not modify the appearance of objects, the feed-forward propagation of this first spike wave could serve to trigger saliency-based object recognition outside the realm of awareness, while conscious perceptions could be mediated by the remaining discharges of longer neuronal spike trains.
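The latency-based saliency code proposed above can be illustrated with a toy simulation (all numbers and names here are illustrative, not from the article): each neuron's first-spike latency is inversely related to its activation, so the rank order of first spikes implicitly encodes the rank order of saliency, with no separate saliency map.

```python
# Toy sketch of implicit saliency via first-spike latencies (illustrative only).
# Each neuron's activation reflects how strongly its preferred location is
# stimulated; first-spike latency is inversely related to activation, so the
# earliest spike marks the most "salient" location -- no explicit saliency map.

def first_spike_latencies(activations, t_min=10.0, gain=100.0):
    """Map activation (arbitrary units > 0) to first-spike latency in ms."""
    return {loc: t_min + gain / a for loc, a in activations.items()}

# Activations of a small population, one neuron per location.
activations = {"A": 2.0, "B": 8.0, "C": 3.5}   # location B is most activated

latencies = first_spike_latencies(activations)
winner = min(latencies, key=latencies.get)      # earliest first spike wins

print(winner)                                   # most salient location: "B"
print(sorted(latencies, key=latencies.get))     # rank order = saliency order
```

The earliest-spike winner can then be read out by any downstream stage, which is the sense in which recognition could proceed from the first feed-forward spike wave without an explicit saliency map.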

3.
Attentional selection plays a critical role in conscious perception. When attention is diverted, even salient stimuli fail to reach visual awareness. Attention can be voluntarily directed to a spatial location or a visual feature for facilitating the processing of information relevant to current goals. In everyday situations, attention and awareness are tightly coupled. This has led some to suggest that attention and awareness might be based on a common neural foundation, whereas others argue that they are mediated by distinct mechanisms. A body of evidence shows that visual stimuli can be processed at multiple stages of the visual-processing streams without evoking visual awareness. To illuminate the relationship between visual attention and conscious perception, we investigated whether top-down attention can target and modulate the neural representations of unconsciously processed visual stimuli. Our experiments show that spatial attention can target only consciously perceived stimuli, whereas feature-based attention can modulate the processing of invisible stimuli. The attentional modulation of unconscious signals implies that attention and awareness can be dissociated, challenging a simplistic view of the boundary between conscious and unconscious visual processing.

4.
Perception is often characterized computationally as an inference process in which uncertain or ambiguous sensory inputs are combined with prior expectations. Although behavioral studies have shown that observers can change their prior expectations in the context of a task, robust neural signatures of task-specific priors have been elusive. Here, we analytically derive such signatures under the general assumption that the responses of sensory neurons encode posterior beliefs that combine sensory inputs with task-specific expectations. Specifically, we derive predictions for the task-dependence of correlated neural variability and decision-related signals in sensory neurons. The qualitative aspects of our results are parameter-free and specific to the statistics of each task. The predictions for correlated variability also differ from predictions of classic feedforward models of sensory processing and are therefore a strong test of theories of hierarchical Bayesian inference in the brain. Importantly, we find that Bayesian learning predicts an increase in so-called “differential correlations” as the observer’s internal model learns the stimulus distribution, and the observer’s behavioral performance improves. This stands in contrast to classic feedforward encoding/decoding models of sensory processing, since such correlations are fundamentally information-limiting. We find support for our predictions in data from existing neurophysiological studies across a variety of tasks and brain areas. Finally, we show in simulation how measurements of sensory neural responses can reveal information about a subject’s internal beliefs about the task. Taken together, our results reinterpret task-dependent sources of neural covariability as signatures of Bayesian inference and provide new insights into their cause and their function.
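Why differential correlations are "information-limiting" can be made concrete with the standard linear Fisher-information argument (a textbook sketch, not the authors' derivation; the numbers are illustrative): adding a noise component aligned with the tuning-curve slopes caps the information any decoder can extract, no matter how many neurons are recorded.

```python
# Sketch of why "differential correlations" limit information (illustrative).
# Linear Fisher information about stimulus s from a population with tuning
# slopes fp = f'(s) and covariance Sigma is  I = fp^T Sigma^-1 fp.  Adding a
# covariance component eps * fp fp^T (the "differential" part) gives, via the
# Sherman-Morrison identity,  I = I0 / (1 + eps * I0)  <=  1 / eps,
# so information saturates no matter how many neurons are added.

def fisher_info(n_neurons, eps, slope=1.0, var=1.0):
    """I for n independent neurons (slope, variance) plus eps*fp*fp^T noise."""
    i0 = n_neurons * slope**2 / var          # info with independent noise only
    return i0 / (1.0 + eps * i0)             # Sherman-Morrison closed form

eps = 0.01
for n in (10, 100, 10_000):
    print(n, round(fisher_info(n, eps), 2))  # approaches the bound 1/eps = 100
```

This is why a learning-induced increase in differential correlations is such a distinctive signature: in a feedforward encoding/decoding view it could only hurt performance, whereas under the Bayesian-inference reading it accompanies improving behavior.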

5.
This article proposes a new model to interpret seemingly conflicting evidence concerning the correlation of consciousness and neural processes. Based on an analysis of research on blindsight and subliminal perception, the reorganization of elementary functions and consciousness framework suggests that mental representations consist of functions at several different levels of analysis, including truly localized perceptual elementary functions and perceptual algorithmic modules, which are interconnections of the elementary functions. We suggest that conscious content relates to the ‘top level’ of analysis in a ‘situational algorithmic strategy’ that reflects the general state of an individual. We argue that conscious experience is intrinsically related to representations that are available to guide behaviour. From this perspective, we find that blindsight and subliminal perception can be explained partly by overly coarse-grained methodology, and partly by top-down enhancement of representations that normally would not be relevant to action.

6.
We experience the world as a seamless stream of percepts. However, intriguing illusions and recent experiments suggest that the world is not continuously translated into conscious perception. Instead, perception seems to operate in a discrete manner, just like movies appear continuous although they consist of discrete images. To explain how the temporal resolution of human vision can be fast compared to sluggish conscious perception, we propose a novel conceptual framework in which features of objects, such as their color, are quasi-continuously and unconsciously analyzed with high temporal resolution. Like other features, temporal features, such as duration, are coded as quantitative labels. When unconscious processing is “completed,” all features are simultaneously rendered conscious at discrete moments in time, sometimes even hundreds of milliseconds after stimuli were presented.

7.
Transcranial magnetic stimulation (TMS) allows for non-invasive interference with ongoing neural processing. Applied in a chronometric design over early visual cortex (EVC), TMS has proved valuable in indicating at which particular time point EVC must remain unperturbed for (conscious) vision to be established. In the current study, we set out to examine the effect of EVC TMS across a broad range of time points, both before (pre-stimulus) and after (post-stimulus) the onset of symbolic visual stimuli. Behavioral priming studies have shown that the behavioral impact of a visual stimulus can be independent of its conscious perception, suggesting two independent neural signatures. To assess whether TMS-induced suppression of visual awareness can be dissociated from behavioral priming in the temporal domain, we thus implemented three different measures of visual processing, namely performance on a standard visual discrimination task, a subjective rating of stimulus visibility, and a visual priming task. To control for non-neural TMS effects, we performed electrooculographical recordings, placebo TMS (sham), and control site TMS (vertex). Our results suggest that, when considering the appropriate control data, the temporal patterns of EVC TMS disruption on visual discrimination, subjective awareness and behavioral priming are not dissociable. Instead, TMS to EVC disrupts visual perception holistically, both when applied before and after the onset of a visual stimulus. The current findings are discussed in light of their implications for models of visual awareness and (subliminal) priming.

8.
The primary visual cortex (V1) is probably the best characterized area of primate cortex, but whether this region contributes directly to conscious visual experience is controversial. Early neurophysiological and neuroimaging studies found that visual awareness was best correlated with neural activity in extrastriate visual areas, but recent studies have found similarly powerful effects in V1. Lesion and inactivation studies have provided further evidence that V1 might be necessary for conscious perception. Whereas hierarchical models propose that damage to V1 simply disrupts the flow of information to extrastriate areas that are crucial for awareness, interactive models propose that recurrent connections between V1 and higher areas form functional circuits that support awareness. Further investigation into V1 and its interactions with higher areas might uncover fundamental aspects of the neural basis of visual awareness.

9.
Although primary visual cortex (V1, or striate cortex) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of V1, visual signals can still reach several extrastriate areas but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations also depend on essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex devoted to representing the visual field (number of neurons per degree of visual angle) is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in the cytochrome-oxidase-rich areas of V1. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perception also demands structural conditions, information represented for an adequate duration, and synchronized neural processes and/or ‘interactive hierarchical structuralism.’ For visual apperception, different visual areas are involved depending on context, such as stimulus characteristics (color, form/shape, motion, and other features). Here, we focus primarily on V1, where specific mitochondria-rich retinotopic structures are found; we also briefly discuss V2, where these structures are present at lower density. We also point out that residual brain states after visual perception are not fully reflected in active neural patterns: subliminal residual states are not captured by passive neural recording techniques but require active stimulation to be revealed.

10.
Lightness illusions are fundamental to human perception, and yet why we see them is still the focus of much research. Here we address the question by modelling not human physiology or perception directly as is typically the case but our natural visual world and the need for robust behaviour. Artificial neural networks were trained to predict the reflectance of surfaces in a synthetic ecology consisting of 3-D “dead-leaves” scenes under non-uniform illumination. The networks learned to solve this task accurately and robustly given only ambiguous sense data. In addition—and as a direct consequence of their experience—the networks also made systematic “errors” in their behaviour commensurate with human illusions, which includes brightness contrast and assimilation—although assimilation (specifically White's illusion) only emerged when the virtual ecology included 3-D, as opposed to 2-D scenes. Subtle variations in these illusions, also found in human perception, were observed, such as the asymmetry of brightness contrast. These data suggest that “illusions” arise in humans because (i) natural stimuli are ambiguous, and (ii) this ambiguity is resolved empirically by encoding the statistical relationship between images and scenes in past visual experience. Since resolving stimulus ambiguity is a challenge faced by all visual systems, a corollary of these findings is that human illusions must be experienced by all visual animals regardless of their particular neural machinery. The data also provide a more formal definition of illusion: the condition in which the true source of a stimulus differs from what is its most likely (and thus perceived) source. As such, illusions are not fundamentally different from non-illusory percepts, all being direct manifestations of the statistical relationship between images and scenes.
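The core logic of this empirical account can be sketched in a toy simulation (this is my reading of the argument, not the authors' network; distributions and the probe luminance are made up): luminance is the product of reflectance and an unknown illumination, so it is ambiguous, and an estimator trained only on image-scene statistics must answer with something like the conditional mean, producing systematic "errors" for individual scenes.

```python
# Minimal sketch of the empirical-ranking idea behind lightness illusions
# (toy statistics, not the paper's dead-leaves model): luminance L = R * I is
# ambiguous, and an estimator trained only on (L, R) pairs resolves it with
# the conditional mean E[R | L] -- systematically "wrong" for any one scene.
import random

random.seed(1)
train = []
for _ in range(50_000):
    r = random.uniform(0.1, 1.0)          # surface reflectance
    i = random.uniform(0.5, 1.5)          # unknown illumination
    train.append((r * i, r))              # the sensor only ever sees L = r * i

def estimate_reflectance(lum, width=0.02):
    """Empirical E[R | L] from training data, within a small luminance bin."""
    rs = [r for l, r in train if abs(l - lum) < width]
    return sum(rs) / len(rs)

# Two physically different surfaces can produce the same luminance 0.45, e.g.
# (R=0.9, I=0.5) and (R=0.45, I=1.0). The estimator returns one answer for
# both, so at least one is systematically misperceived -- an "illusion" in the
# paper's formal sense of perceiving the most likely rather than true source.
print(round(estimate_reflectance(0.45), 2))
```

The same estimator evaluated on two patches that share a luminance but sit in different contexts is the seed of contrast-type effects; the paper's contribution is showing this emerges from networks trained on full 3-D scenes.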

11.
It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and absent in the remainder. The effect of motion was quantitatively similar for all of the visual and haptic conditions (e.g., in Experiment 1 the participants’ performance was 93.5 percent higher in the motion or active haptic manipulation conditions than in the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

12.
Synchronized gamma frequency oscillations in neural networks are thought to be important to sensory information processing, and their effects have been intensively studied. Here we describe a mechanism by which the nervous system can readily control gamma oscillation effects, depending selectively on visual stimuli. Using a model neural network simulation, we found that sensory response in the primary visual cortex is significantly modulated by the resonance between “spontaneous” and “stimulus-driven” oscillations. This gamma resonance can be precisely controlled by the synaptic plasticity of thalamocortical connections, and cortical response is regulated differentially according to the resonance condition. The mechanism produces a selective synchronization between the afferent and downstream neural population. Our simulation results explain experimental observations such as stimulus-dependent synchronization between the thalamus and the cortex at different oscillation frequencies. The model generally shows how sensory information can be selectively routed depending on its frequency components.
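The resonance intuition can be caricatured with a driven damped oscillator (a deliberately simple stand-in for the authors' spiking network; frequencies and damping are illustrative): the "spontaneous" rhythm sets a preferred frequency, and "stimulus-driven" input is amplified most when its frequency matches it, which is one way frequency content can gate routing.

```python
# Toy illustration of the resonance idea (not the paper's network model): a
# "spontaneous" cortical oscillation is caricatured as a damped oscillator with
# natural frequency f0; a "stimulus-driven" input at frequency f evokes the
# largest steady-state response when f matches f0, so gain is frequency-selective.
import math

def response_amplitude(f_drive, f0=40.0, damping=4.0):
    """Steady-state amplitude of a damped oscillator driven at f_drive (Hz)."""
    w, w0 = 2 * math.pi * f_drive, 2 * math.pi * f0
    return 1.0 / math.sqrt((w0**2 - w**2) ** 2 + (damping * w) ** 2)

amps = {f: response_amplitude(f) for f in (20, 30, 40, 50, 60)}
best = max(amps, key=amps.get)
print(best)   # the drive closest to the 40 Hz "spontaneous" rhythm wins
```

In the paper's mechanism, changing thalamocortical synaptic weights effectively retunes this preferred frequency, so which inputs resonate, and hence which are routed downstream, is under plastic control.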

13.
It has been hypothesized that neural activities in the primary visual cortex (V1) represent a saliency map of the visual field to exogenously guide attention. This hypothesis has so far provided only qualitative predictions and their confirmations. We report this hypothesis’ first quantitative prediction, derived without free parameters, and its confirmation by human behavioral data. The hypothesis provides a direct link between V1 neural responses to a visual location and the saliency of that location to guide attention exogenously. In a visual input containing many bars, one of which differs saliently from all the others (which are identical to one another), the saliency of the singleton’s location can be measured by the shortness of the reaction time to find it in visual search. The hypothesis quantitatively predicts the whole distribution of reaction times to find a singleton unique in color, orientation, and motion direction from the reaction times to find other types of singletons. The prediction matches human reaction time data. A requirement for this successful prediction is a data-motivated assumption that V1 lacks neurons tuned simultaneously to the color, orientation, and motion direction of visual inputs. Since evidence suggests that extrastriate cortices do have such neurons, we discuss the possibility that the extrastriate cortices play no role in guiding exogenous attention, so that they can be devoted to other functions such as visual decoding and endogenous attention.
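The parameter-free character of this prediction comes from a race-model reading of the hypothesis, which can be sketched directly (my reading of the logic; the RT distributions below are made up): if no V1 neuron is tuned to all three features at once, the response to the triple-feature singleton is the fastest of three single-feature responses, so its RT distribution is predicted trial by trial as a minimum.

```python
# Sketch of the parameter-free race prediction (illustrative reading of the
# hypothesis, with made-up RT distributions): if V1 has neurons tuned to color,
# to orientation, and to motion -- but none tuned to all three at once -- the
# RT to find a triple-feature singleton should behave like the fastest of three
# independent single-feature races:  RT_COM = min(RT_C, RT_O, RT_M).
import random

random.seed(0)

def sample_rt(mu, sigma):
    """One single-feature search RT (ms), truncated to stay positive."""
    return max(150.0, random.gauss(mu, sigma))

# Hypothetical single-feature RT distributions (color, orientation, motion).
trials = 20_000
rt_c = [sample_rt(450, 80) for _ in range(trials)]
rt_o = [sample_rt(500, 90) for _ in range(trials)]
rt_m = [sample_rt(550, 100) for _ in range(trials)]

# Predicted triple-singleton RT distribution: the race (minimum) per trial.
rt_com = [min(c, o, m) for c, o, m in zip(rt_c, rt_o, rt_m)]

mean_com = sum(rt_com) / trials
print(round(mean_com))   # faster than the fastest single-feature mean
```

No free parameters enter: the whole predicted distribution is derived from the measured single-feature distributions, which is what makes the match to human data a strong test of the V1 saliency-map hypothesis.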

14.
Chronic pain, including chronic non-specific low back pain (CNSLBP), is often associated with body perception disturbances, but these have generally been assessed under static conditions. The objective of this study was to use a “virtual mirror” that scaled visual movement feedback to assess body perception during active movement in military personnel with CNSLBP (n = 15) as compared to healthy military control subjects (n = 15). Subjects performed a trunk flexion task while sitting and standing in front of a large screen displaying a full-body virtual mirror-image (avatar) in real-time. Avatar movements were scaled to appear greater than, identical to, or smaller than the subjects’ actual movements. A total of 126 trials with 11 different scaling factors were pseudo-randomized across 6 blocks. After each trial, subjects had to decide whether the avatar’s movements were “greater” or “smaller” than their own movements. Based on this two-alternative forced choice paradigm, a psychophysical curve was fitted to the data for each subject, and several metrics were derived from this curve. In addition, task adherence (kinematics) and virtual reality immersion were assessed. Groups displayed a similar ability to discriminate between different levels of movement scaling. Nevertheless, subjects with CNSLBP performed abnormally, tending to overestimate their own movements (a right-shifted psychophysical curve). Subjects showed adequate task adherence, and on average virtual reality immersion was reported to be very good. In conclusion, these results extend previous work in patients with CNSLBP and point to an important relationship between body perception, movement and pain. As such, the assessment of body perception during active movement can offer new avenues for understanding and managing body perception disturbances and abnormal movement patterns in patients with pain.
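The psychophysical-curve analysis described here can be sketched with a standard cumulative-Gaussian fit (an assumed model; the abstract does not name the curve family, and the trial data below are made up): each trial pairs a scaling factor with a "greater"/"smaller" judgment, and the fitted mean is the point of subjective equality (PSE), whose rightward shift indicates overestimation of one's own movement.

```python
# Sketch of the two-alternative forced-choice analysis (assumed cumulative-
# Gaussian psychometric model; scaling factors and responses are made up).
# Maximum-likelihood fit of P("greater") = Phi((x - mu) / sigma): the fitted
# mu is the PSE -- mu > 1.0 means the avatar must move more than the subject
# before being judged "greater", i.e. the subject overestimates own movement.
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def fit_psychometric(data):
    """Grid-search MLE for (mu, sigma) of a cumulative-Gaussian curve."""
    best, best_ll = None, -float("inf")
    for mu_c in range(80, 131):                 # mu in 0.80 .. 1.30
        for sig_c in range(2, 41):              # sigma in 0.02 .. 0.40
            mu, sig = mu_c / 100.0, sig_c / 100.0
            ll = 0.0
            for x, greater in data:
                p = min(max(phi((x - mu) / sig), 1e-9), 1 - 1e-9)
                ll += math.log(p if greater else 1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sig), ll
    return best

# Made-up trials: judgments consistent with a PSE near scaling factor 1.1,
# i.e. the avatar must move about 10% more before it is judged "greater".
data = [(x / 100.0, x / 100.0 > 1.10) for x in range(80, 141, 2)] * 4
mu, sigma = fit_psychometric(data)
print(mu)   # estimated PSE, shifted right of 1.0 (overestimation)
```

Metrics such as the discrimination threshold fall out of the same fit (here, sigma), which is how the study can report similar discrimination ability alongside a shifted PSE in the CNSLBP group.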

15.
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to different visual processing and world representations for conscious perception than for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical, and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways, built from linear combinations of primary visual cortex (V1) receptive fields, by making the simulated individuals' probability of survival depend on their perceptual accuracy in finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependences of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations, showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions, which resulted in partial or no convergence, were an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways.

16.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a ‘simple’ random-dot kinematogram showing a starfield and (2) a “naturalistic” visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.

17.
Part of the sensory information is processed by our central nervous system without conscious perception. Subconscious processing has been shown to be capable of triggering motor reactions. In the present study, we asked whether visual information that is not consciously perceived can influence decision-making in a choice reaction task. Ten healthy subjects (28±5 years) executed two different experimental protocols. In the Motor reaction protocol, a visual target cue was shown on a computer screen. Depending on the displayed cue, subjects had to either complete a reaching movement (go-condition) or abort the movement (stop-condition). The cue was presented with different display durations (20–160 ms). In the second, Verbalization protocol, subjects verbalized what they experienced on the screen. Again, the cue was presented with different display durations. This second protocol tested for conscious perception of the visual cue. The results of this study show that subjects achieved significantly more correct responses in the Motor reaction protocol than in the Verbalization protocol. This difference was only observed at the very short display durations of the visual cue. Since correct responses in the Verbalization protocol required conscious perception of the visual information, our findings imply that the subjects performed correct motor responses to visual cues of which they were not conscious. It is therefore concluded that humans may reach decisions based on subconscious visual information in a choice reaction task.

18.
Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis” proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms.
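The scene-familiarity rule is simple enough to sketch end-to-end (a toy with 4-pixel "panoramas", not the satellite-image pipeline; the stored views and candidate headings are made up): memorize the views seen along a training route, then at each choice point move toward whichever candidate view best matches any stored view.

```python
# Minimal sketch of the "Navigation by Scene Familiarity" rule (toy 4-pixel
# "views"; the real work used ant's-eye or Google satellite imagery): store
# views seen along a training route, then at each step turn toward whichever
# candidate view is most familiar, i.e. closest in pixels to any stored view.

def familiarity(view, memory):
    """Negative of the smallest sum-of-squared-differences to any stored view."""
    return -min(sum((a - b) ** 2 for a, b in zip(view, m)) for m in memory)

# Views experienced along the training route (toy 4-pixel panoramas).
memory = [
    (0, 1, 2, 3),
    (1, 2, 3, 4),
    (2, 3, 4, 5),
]

# At a choice point, compare the views for each candidate heading and move
# toward the most familiar one -- no map and no explicit route-memory retrieval.
candidates = {
    "left":     (9, 9, 0, 1),     # unfamiliar scene
    "straight": (1, 2, 3, 5),     # close to a stored view
    "right":    (7, 0, 7, 0),     # unfamiliar scene
}
heading = max(candidates, key=lambda h: familiarity(candidates[h], memory))
print(heading)
```

The paper's finding is essentially that real satellite imagery is visually rich enough that this pixel-level familiarity signal stays discriminative even at low sensor resolution and along tortuous routes.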

19.
We compared conscious and nonconscious processing of briefly flashed words using a visual masking procedure while recording intracranial electroencephalogram (iEEG) in ten patients. Nonconscious processing of masked words was observed in multiple cortical areas, mostly within an early time window (<300 ms), accompanied by induced gamma-band activity, but without coherent long-distance neural activity, suggesting a quickly dissipating feedforward wave. In contrast, conscious processing of unmasked words was characterized by the convergence of four distinct neurophysiological markers: sustained voltage changes, particularly in prefrontal cortex, large increases in spectral power in the gamma band, increases in long-distance phase synchrony in the beta range, and increases in long-range Granger causality. We argue that all of those measures provide distinct windows into the same distributed state of conscious processing. These results have a direct impact on current theoretical discussions concerning the neural correlates of conscious access.

20.
The perception of a stimulus can be influenced by previous perceptual experience, a phenomenon known as perceptual priming. However, perceptual priming of the perceived shape of three-dimensional object structures defined by moving dots has received little investigation. Here we examined the perceptual priming of a 3D object shape defined purely by motion-in-depth cues (i.e., Shape-From-Motion, SFM) using a classic prime-target paradigm. The results from the first two experiments revealed a significant increase in accuracy when a “cloudy” SFM stimulus (whose object structure was difficult to recognize due to the presence of strong noise) was preceded by an unambiguous SFM stimulus that clearly defined the same transparent 3D shape. In contrast, results from Experiment 3 revealed no change in accuracy when a “cloudy” SFM stimulus was preceded by a static shape or a semantic word that defined the same object shape. Instead, there was a significant decrease in accuracy when it was preceded by a static shape or a semantic word that defined a different object shape. These results suggest that the perception of a noisy SFM stimulus can be facilitated by a preceding unambiguous SFM stimulus—but not a static image or a semantic stimulus—that defines the same shape. The potential neural and computational mechanisms underlying the difference in priming are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号