Similar Articles
20 similar articles found.
1.
Composite stimulation techniques are presented here which are based on a soft (i.e., slow and mild) reset. They effectively desynchronize a cluster of globally coupled phase oscillators in the presence of noise. A composite stimulus contains two qualitatively different stimuli. The first stimulus is either a periodic pulse train or a smooth, sinusoidal periodic stimulus with an entraining frequency close to the cluster's natural frequency. In the course of several periods of the entrainment, the cluster's dynamics is reset (restarted), independently of its initial dynamic state. The second stimulus, a single pulse, is administered with a fixed delay after the first stimulus in order to desynchronize the cluster by hitting it in a vulnerable state. The incoherent state is unstable, and thus the desynchronized cluster starts to resynchronize. Nevertheless, resynchronization can effectively be blocked by repeatedly delivering the same composite stimulus. Previously designed stimulation techniques essentially rely on a hard (i.e., abrupt) reset. With the composite stimulation techniques based on a soft reset, an effective desynchronization can be achieved even if strong, quickly resetting stimuli are not available or not tolerated. Accordingly, the soft methods are very promising for applications in biology and medicine requiring mild stimulation. In particular, they can be applied to effectively maintain incoherency in a population of oscillatory neurons which try to synchronize their firing. Specifically, it is explained how to use the soft techniques for (i) an improved, milder, and demand-controlled deep brain stimulation for patients with Parkinson's disease or essential tremor, and for (ii) selectively blocking gamma activity in order to manipulate visual binding. Received: 3 July 2001 / Accepted in revised form: 7 February 2002
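The setting of this abstract — a noisy, globally coupled cluster of phase oscillators whose synchrony can be broken by a well-timed phase-dependent pulse — can be sketched with a minimal mean-field Kuramoto model. The coupling strength, noise level, and pulse form below are illustrative assumptions, not the paper's stimulation protocol:

```python
import numpy as np

rng = np.random.default_rng(0)

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    return abs(np.exp(1j * theta).mean())

# Globally coupled, noisy phase oscillators (mean-field Kuramoto form).
N, K, omega, dt, steps, noise = 100, 2.0, 1.0, 0.01, 4000, 0.05
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    z = np.exp(1j * theta).mean()
    r, psi = abs(z), np.angle(z)
    drift = omega + K * r * np.sin(psi - theta)
    theta = theta + drift * dt + noise * np.sqrt(dt) * rng.standard_normal(N)

r_sync = order_parameter(theta)   # the cluster has synchronized

# Illustrative single desynchronizing pulse: a phase-dependent kick that
# expands the (small) phase spread around the cluster's mean phase.
A = 2.0
psi = np.angle(np.exp(1j * theta).mean())
theta_after = theta + A * np.sin(theta - psi)
r_after = order_parameter(theta_after)

print(f"order parameter before pulse: {r_sync:.3f}, after pulse: {r_after:.3f}")
```

Because the incoherent state is unstable, `r` would recover after the pulse; the composite technique counters this by repeating the entrain-then-pulse cycle.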

2.
The reasons for using natural stimuli to study sensory function are quickly mounting, as recent studies have revealed important differences in neural responses to natural and artificial stimuli. However, natural stimuli typically contain strong correlations and are spherically asymmetric (i.e. stimulus intensities are not symmetrically distributed around the mean), and these statistical complexities can bias receptive field (RF) estimates when standard techniques such as spike-triggered averaging or reverse correlation are used. While a number of approaches have been developed to explicitly correct the bias due to stimulus correlations, there is no complementary technique to correct the bias due to stimulus asymmetries. Here, we develop a method for RF estimation that corrects reverse correlation RF estimates for the spherical asymmetries present in natural stimuli. Using simulated neural responses, we demonstrate how stimulus asymmetries can bias reverse-correlation RF estimates (even for uncorrelated stimuli) and illustrate how this bias can be removed by explicit correction. We demonstrate the utility of the asymmetry correction method under experimental conditions by estimating RFs from the responses of retinal ganglion cells to natural stimuli and using these RFs to predict responses to novel stimuli.
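The baseline estimator being corrected here is easy to sketch. The toy simulation below uses an assumed linear-nonlinear-Poisson neuron and a white Gaussian stimulus — the benign case in which the spike-triggered average recovers the filter up to scale; the paper's asymmetry correction for natural stimuli is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# True temporal receptive field (filter) of a toy model neuron.
k = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 0.6, 0.2, -0.2, -0.4, -0.2])
k /= np.linalg.norm(k)
L, T = len(k), 50_000

stim = rng.standard_normal(T)   # white Gaussian stimulus

# Linear-nonlinear-Poisson response: rectified filter output drives spiking.
windows = np.lib.stride_tricks.sliding_window_view(stim, L)  # (T-L+1, L)
rate = np.maximum(windows @ k, 0.0)
spikes = rng.poisson(rate)

# Reverse correlation / spike-triggered average: spike-weighted mean window.
sta = (spikes[:, None] * windows).sum(axis=0) / spikes.sum()
sta /= np.linalg.norm(sta)

similarity = float(sta @ k)   # cosine similarity with the true filter
print(f"cosine similarity, STA vs. true filter: {similarity:.2f}")
```

Replacing `stim` with a skewed (spherically asymmetric) distribution is exactly the regime in which this plain STA becomes biased and the paper's explicit correction is needed.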

3.
The operation of a hierarchical competitive network model (VisNet) of invariance learning in the visual system is investigated to determine how this class of architecture can solve problems that require the spatial binding of features. First, we show that VisNet neurons can be trained to provide transform-invariant discriminative responses to stimuli which are composed of the same basic alphabet of features, where no single stimulus contains a unique feature not shared by any other stimulus. The investigation shows that the network can discriminate stimuli consisting of sets of features which are subsets or supersets of each other. Second, a key feature-binding issue we address is how invariant representations of low-order combinations of features in the early layers of the visual system are able to uniquely specify the correct spatial arrangement of features in the overall stimulus and ensure correct stimulus identification in the output layer. We show that output layer neurons can learn new stimuli if the lower layers are trained solely through exposure to simpler feature combinations from which the new stimuli are composed. Moreover, we show that after training on the low-order feature combinations which are common to many objects, this architecture can – after training with a whole stimulus in some locations – generalise correctly to the same stimulus when it is shown in a new location. We conclude that this type of hierarchical model can solve feature-binding problems to produce correct invariant identification of whole stimuli. Received: 4 August 1999 / Accepted in revised form: 11 October 2000

4.
Selection tasks in which simple stimuli (e.g. letters) are presented and a target stimulus has to be selected against one or more distractor stimuli are frequently used in research on human action control. One important question in these settings is how distractor stimuli, competing with the target stimulus for a response, influence actions. The distractor-response binding paradigm can be used to investigate this influence. It is particularly useful for separately analyzing response retrieval and distractor inhibition effects. Computer-based experiments are used to collect the data (reaction times and error rates). In a number of sequentially presented pairs of stimulus arrays (prime-probe design), participants respond to targets while ignoring distractor stimuli. Importantly, the factors response relation in the arrays of each pair (repetition vs. change) and distractor relation (repetition vs. change) are varied orthogonally. The repetition of the same distractor then has a different effect depending on the response relation (repetition vs. change) between arrays. This result pattern can be explained by response retrieval due to distractor repetition. In addition, distractor inhibition effects are indicated by a general advantage due to distractor repetition. The described paradigm has proven useful for determining relevant parameters for response retrieval effects on human action.

5.

Background

In visual psychophysics, precise display timing, particularly for brief stimulus presentations, is often required. The aim of this study was to systematically review the commonly applied methods for the computation of stimulus durations in psychophysical experiments and to contrast them with the true luminance signals of stimuli on computer displays.

Methodology/Principal Findings

In a first step, we systematically scanned the citation index Web of Science for studies with experiments with stimulus presentations for brief durations. Articles which appeared between 2003 and 2009 in three different journals were taken into account if they contained experiments with stimuli presented for less than 50 milliseconds. The 79 articles that matched these criteria were reviewed for their method of calculating stimulus durations. For those 75 studies where the method was either given or could be inferred, stimulus durations were calculated by the sum of frames (SOF) method. In a second step, we describe the luminance signal properties of the two monitor technologies which were used in the reviewed studies, namely cathode ray tube (CRT) and liquid crystal display (LCD) monitors. We show that SOF is inappropriate for brief stimulus presentations on both of these technologies. In extreme cases, SOF specifications and true stimulus durations are even unrelated. Furthermore, the luminance signals of the two monitor technologies are so fundamentally different that the duration of briefly presented stimuli cannot be calculated by a single method for both technologies. Statistics over stimulus durations given in the reviewed studies are discussed with respect to different duration calculation methods.

Conclusions/Significance

The SOF method for duration specification, which clearly dominated in the reviewed studies, leads to serious misspecifications, particularly for brief stimulus presentations. We strongly discourage its use for brief stimulus presentations on CRT and LCD monitors.
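The sum-of-frames calculation that the review criticizes is simple: the nominal duration is the number of drawn frames times the refresh interval. A minimal sketch (the function name is ours):

```python
def sof_duration_ms(n_frames: int, refresh_hz: float) -> float:
    """Nominal stimulus duration under the sum-of-frames (SOF) method.

    SOF assumes each frame contributes exactly one full refresh interval
    of luminance -- an assumption that holds for neither CRT phosphor
    decay nor LCD response dynamics, which is the crux of the critique.
    """
    return n_frames * 1000.0 / refresh_hz

# e.g. 3 frames at 100 Hz are reported as a "30 ms" presentation,
# even though the true luminance signal may look very different.
print(sof_duration_ms(3, 100.0))
```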

6.
It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs.
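The point that a linear readout suffices can be illustrated with a toy decoder. The sketch below uses synthetic spike counts for 100 neurons and two stimuli, and a least-squares linear classifier in place of the study's SVMs; all parameters are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_trials = 100, 200

# Synthetic ensemble responses: each stimulus evokes a different mean
# firing pattern across the population, with Poisson trial-to-trial noise.
mean_a = rng.uniform(2, 10, n_neurons)
mean_b = rng.uniform(2, 10, n_neurons)

X = np.vstack([rng.poisson(mean_a, (n_trials, n_neurons)),
               rng.poisson(mean_b, (n_trials, n_neurons))]).astype(float)
y = np.hstack([np.ones(n_trials), -np.ones(n_trials)])

# Shuffle, split into train/test, fit a least-squares linear readout.
idx = rng.permutation(2 * n_trials)
train, test = idx[:n_trials], idx[n_trials:]
Xb = np.column_stack([X, np.ones(len(X))])   # add a bias column
w, *_ = np.linalg.lstsq(Xb[train], y[train], rcond=None)

accuracy = float(np.mean(np.sign(Xb[test] @ w) == y[test]))
print(f"linear decoding accuracy: {accuracy:.2f}")
```

A decoder of this form is something a single downstream neuron could in principle implement, which is the force of the study's observation.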

7.
When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval timing of visually and aurally presented objects shares a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

8.
It has long been assumed that bees cannot see red. However, bees visit red flowers, and the visual spectral sensitivity of bees extends into wavelengths that provide sensitivity to such flowers. We thus investigated whether bees can discriminate stimuli reflecting wavelengths above 560 nm, i.e., which appear orange and red to a human observer. Flowers do not reflect monochromatic (single-wavelength) light; orange and red flowers in particular have reflectance patterns which are step functions, so we used colored stimuli with such reflectance patterns. We first conditioned honey bees Apis mellifera to detect six stimuli reflecting light mostly above 560 nm and found that bees learned to detect only stimuli which were perceptually very different from a bee-achromatic background. In a second experiment we conditioned bees to discriminate stimuli from a salient, negative (unrewarded) yellow stimulus. In subsequent unrewarded tests we presented the bees with the trained situation and with five other tests in which the trained stimulus was presented against a novel one. We found that bees learned to discriminate the positive from the negative stimulus, and could unambiguously discriminate eight out of fifteen stimulus pairs. The performance of bees was positively correlated with differences between the trained and the novel stimulus in the receptor contrast for the long-wavelength bee photoreceptor and in the color distance (calculated using two models of the honeybee color space). We found that the differential conditioning resulted in a concurrent inhibitory conditioning of the negative stimulus, which might have improved discrimination of stimuli which are perceptually similar. These results show that bees can detect long-wavelength stimuli which appear reddish to a human observer. The mechanisms underlying discrimination of these stimuli are discussed. Handling Editor: Lars Chittka.

9.
Acoustic and visual signals are commonly used by fishes for communication. A significant drawback to both types of signals is that sounds and visual stimuli are easily detected by illegitimate receivers, such as predators. Although predator attraction to visual stimuli has been well studied in other animals, predator response to acoustic stimuli has received virtually no research attention among fishes and snakes. This study assessed whether the calls made by male tricolor shiners (Cyprinella trichroistia) during the breeding season would attract potential predators. We also examined the effect of the visual stimulus of tricolor shiners on predators. The predators used were redeye bass (Micropterus coosae) and midland water snakes (Nerodia sipedon pleuralis). Neither predator was attracted to tricolor shiner sounds presented alone. Micropterus coosae responded significantly more to a visual stimulus and to a combination of visual and acoustic stimuli, but with no greater intensity to the latter. Nerodia sipedon pleuralis did not respond to the visual stimulus presented alone, but did respond to visual and acoustic stimuli presented simultaneously, and with greater intensity to the latter, indicating that acoustic signals may play a role in prey detection by N. sipedon pleuralis.

10.
To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues, and inference about how likely the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. 
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
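The first two candidate models are easy to write down. The sketch below uses common textbook assumptions (each cue shifts in proportion to the other cue's relative reliability, with reliabilities as inverse variances, or by a fixed split); the causal-inference model is omitted since it requires the full Bayesian observer:

```python
def reliability_based_shifts(discrepancy, rel_a, rel_v):
    """Reliability-based recalibration: the less reliable cue moves more.

    `rel_a`, `rel_v` are reliabilities (inverse variances) of the auditory
    and visual cues; the audiovisual discrepancy is split between them.
    """
    total = rel_a + rel_v
    shift_a = discrepancy * rel_v / total    # audition pulled toward vision
    shift_v = -discrepancy * rel_a / total   # vision pulled toward audition
    return shift_a, shift_v

def fixed_ratio_shifts(discrepancy, ratio_a=0.9):
    """Fixed-ratio recalibration: the split does not depend on reliability."""
    return discrepancy * ratio_a, -discrepancy * (1 - ratio_a)

# As visual reliability drops, the reliability-based model predicts
# monotonically *decreasing* auditory recalibration -- only one of the
# three patterns observed, and never the peaked or increasing ones.
for rel_v in (10.0, 1.0, 0.1):
    a, v = reliability_based_shifts(discrepancy=5.0, rel_a=1.0, rel_v=rel_v)
    print(f"rel_v={rel_v:>4}: auditory shift {a:+.2f}, visual shift {v:+.2f}")
```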

11.
In order to determine precisely the location of a tactile stimulus presented to the hand it is necessary to know not only which part of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly interested in the question of how these various sensory cues are weighted and integrated in order to enable people to localize tactile stimuli, as well as to give rise to the 'felt' position of our limbs, and ultimately the multisensory representation of 3-D peripersonal space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal congruency effects (calculated as performance on incongruent-congruent trials) are greatest when visual and vibrotactile stimuli are presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of convergence with other cognitive neuroscience disciplines.

12.
Vection is an illusory perception of self-motion that can occur when visual motion fills the majority of the visual field. This study examines the effect of the duration of visual field movement (VFM) on the perceived strength of self-motion using an inertial nulling (IN) and a magnitude estimation technique based on the certainty that motion occurred (certainty estimation, CE). These techniques were then used to investigate the association between migraine diagnosis and the strength of perceived vection. Visual star-field stimuli consistent with either looming or receding motion were presented for 1, 4, 8 or 16s. Subjects reported the perceived direction of self-motion during the final 1s of the stimulus. For the IN method, an inertial nulling motion was delivered during this final 1s of the visual stimulus, and subjects reported the direction of perceived self-motion during this final second. The magnitude of inertial motion was varied adaptively to determine the point of subjective equality (PSE) at which forward or backward responses were equally likely. For the CE trials the same range of VFM was used but without inertial motion and subjects rated their certainty of motion on a scale of 0–100. PSE determined with the IN technique depended on direction and duration of visual motion and the CE technique showed greater certainty of perceived vection with longer VFM duration. A strong correlation between CE and IN techniques was present for the 8s stimulus. There was appreciable between-subject variation in both CE and IN techniques and migraine was associated with significantly increased perception of self-motion by CE and IN at 8 and 16s. Together, these results suggest that vection may be measured by both CE and IN techniques with good correlation. The results also suggest that susceptibility to vection may be higher in subjects with a history of migraine.
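An adaptive procedure for finding the PSE can be sketched as a simple one-up/one-down staircase, which converges on the level where "forward" and "backward" reports are equally likely. The simulated observer, step sizes, and stopping rule below are illustrative assumptions, not this study's exact adaptive method:

```python
import numpy as np

rng = np.random.default_rng(3)

true_pse, slope = -0.8, 1.5   # simulated observer, biased toward "backward"

def observer_says_forward(inertial_cms):
    """Probability of a 'forward' report rises with the inertial stimulus."""
    p = 1.0 / (1.0 + np.exp(-(inertial_cms - true_pse) * slope))
    return rng.random() < p

# One-up/one-down staircase: descends after "forward", ascends after
# "backward", so it oscillates around the 50% point (the PSE).
level, step = 4.0, 1.0
reversals, last_dir = [], 0
while len(reversals) < 12:
    direction = -1 if observer_says_forward(level) else 1
    if last_dir and direction != last_dir:
        reversals.append(level)
        step = max(step * 0.7, 0.1)   # shrink the step after each reversal
    last_dir = direction
    level += direction * step

pse_estimate = float(np.mean(reversals[2:]))   # average the late reversals
print(f"PSE estimate: {pse_estimate:.2f} (true PSE {true_pse})")
```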

13.
Felsen G  Shen YS  Yao H  Spor G  Li C  Dan Y 《Neuron》2002,36(5):945-954
Receptive field properties of visual cortical neurons depend on the spatiotemporal context within which the stimuli are presented. We have examined the temporal context dependence of cortical orientation tuning using dynamic visual stimuli with rapidly changing orientations. We found that tuning to the orientation of the test stimulus depended on a briefly presented preceding stimulus, with the preferred orientation shifting away from the preceding orientation. Analyses of the spatial-phase dependence of the shift showed that the effect cannot be explained by purely feedforward mechanisms, but can be accounted for by activity-dependent changes in the recurrent interactions between different orientation columns. Thus, short-term plasticity of the intracortical circuit can mediate dynamic modification of orientation tuning, which may be important for efficient visual coding.

14.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a "simple" random-dot kinematogram showing a starfield and (2) a "naturalistic" visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli especially when paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.

15.
16.
《IRBM》2022,43(6):621-627
Objective: Steady-State Visual Evoked Potential based Brain-Computer Interface (SSVEP-based BCI) systems have been shown to be a promising technology due to their short response time and ease of use. SSVEP-based BCIs use brain responses to a flickering visual stimulus as an input command to an external application or device, and their performance can be influenced by stimulus properties, signal recording, and signal processing. We aim to investigate how system performance varies with the spatial proximity of the stimuli (a stimulus property). Material and methods: We performed a comparative analysis of two visual interface designs (named cross and square) for an SSVEP-based BCI. The power spectral density (PSD) was used for feature extraction and the Support Vector Machine (SVM) as the classification method. We also analyzed the effects of five flickering frequencies (6.67, 8.57, 10, 12 and 15 Hz) between and within interfaces. Results: We found higher accuracy rates for the flickering frequencies of 10, 12, and 15 Hz. The 10 Hz stimulus presented the highest SSVEP amplitude response for both interfaces. The system presented the best performance (highest classification accuracy and information transfer rate) using the cross interface (lower visual angle). Conclusion: Our findings suggest that the system has the highest performance in the spatial proximity range from 4° to 13° (visual angle). In addition, we conclude that as the stimulus spatial proximity increases, the interference from other stimuli reduces, and the SSVEP amplitude response decreases, which reduces system accuracy. The inter-stimulus distance is a visual interface parameter that must be chosen carefully to increase the efficiency of an SSVEP-based BCI.
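The signal-processing side can be sketched end to end: synthesize an EEG-like trace containing one flicker frequency, compute its PSD, and pick the candidate frequency with the most power. A peak-power rule stands in for the paper's SVM classifier, and all signal parameters (sampling rate, noise level, band width) are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

fs, dur = 250.0, 4.0                   # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
candidates = [6.67, 8.57, 10.0, 12.0, 15.0]   # flicker frequencies (Hz)

# Synthetic SSVEP: an oscillation at the attended flicker frequency + noise.
target = 12.0
eeg = np.sin(2 * np.pi * target * t) + 0.8 * rng.standard_normal(t.size)

# Power spectral density via the periodogram.
psd = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(f0, half_width=0.3):
    """Total PSD in a narrow band around f0 (Hz)."""
    band = (freqs >= f0 - half_width) & (freqs <= f0 + half_width)
    return psd[band].sum()

detected = max(candidates, key=band_power)
print(f"detected flicker frequency: {detected} Hz")
```

In a real BCI, band powers at the candidate frequencies (and harmonics) would form the feature vector fed to the SVM rather than being compared directly.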

17.
A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing a set of visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image, which removed the illusory motion effect. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8s viewing interval with the inertial stimulus occurring over the final 1s. This allowed measurement of the visual illusion perception using objective methods. When no visual stimulus was present, only the 1s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess for a shift in self-motion perception, the effect of each visual stimulus on the self-motion stimulus (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p<0.001), and for arrows (p = 0.02).
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p>0.1 for both). Thus, although a true moving visual field can induce self-motion, results of this study show that illusory motion does not.

18.
Mechanisms of explicit object recognition are often difficult to investigate and require stimuli with controlled features whose expression can be manipulated in a precise quantitative fashion. Here, we developed a novel method (called "Dots") for generating visual stimuli, which is based on the progressive deformation of a regular lattice of dots, driven by local contour information from images of objects. By applying progressively larger deformation to the lattice, the latter conveys progressively more information about the target object. Stimuli generated with the presented method enable precise control of object-related information content while preserving low-level image statistics globally and affecting them only little locally. We show that such stimuli are useful for investigating object recognition under a naturalistic setting (free visual exploration), enabling a clear dissociation between object detection and explicit recognition. Using the introduced stimuli, we show that top-down modulation induced by previous exposure to target objects can greatly influence perceptual decisions, lowering perceptual thresholds not only for object recognition but also for object detection (visual hysteresis). Visual hysteresis is target-specific, its expression and magnitude depending on the identity of individual objects. Relying on the particular features of dot stimuli and on eye-tracking measurements, we further demonstrate that top-down processes guide visual exploration, controlling how visual information is integrated by successive fixations. Prior knowledge about objects can guide saccades/fixations to sample locations that are supposed to be highly informative, even when the actual information is missing from those locations in the stimulus. The duration of individual fixations is modulated by the novelty and difficulty of the stimulus, likely reflecting cognitive demand.
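The core of the stimulus-generation idea (pull the dots of a regular lattice toward nearby object contours, with a deformation level controlling how much object information the lattice conveys) can be sketched as follows. The contour shape, lattice size, and the nearest-point pulling rule are our illustrative assumptions, not the published algorithm:

```python
import numpy as np

def deform_lattice(dots, contour, alpha):
    """Move each lattice dot a fraction `alpha` of the way toward its
    nearest contour point: alpha=0 keeps the regular lattice (no object
    information), alpha=1 collapses the dots onto the contour."""
    offsets = dots[:, None, :] - contour[None, :, :]          # pairwise
    nearest = contour[np.argmin((offsets ** 2).sum(-1), axis=1)]
    return dots + alpha * (nearest - dots)

# Regular lattice of dots covering the unit square.
g = np.linspace(0.05, 0.95, 10)
dots = np.stack(np.meshgrid(g, g), -1).reshape(-1, 2)

# Toy object contour: a circle standing in for an object outline.
phi = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = 0.5 + 0.3 * np.column_stack([np.cos(phi), np.sin(phi)])

weak = deform_lattice(dots, contour, 0.2)    # little object information
strong = deform_lattice(dots, contour, 0.8)  # contour clearly visible
```

Sweeping `alpha` yields the graded stimulus continuum on which detection and recognition thresholds can be measured separately.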

19.
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, even though the auditory stimuli were task-irrelevant. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a visual signal paired with auditory stimuli of different frequencies.

20.
Studies of the visual system suggest that, at an early stage of form processing, a stimulus is represented as a set of contours and that a critical feature of these local contours is their orientation. Here, we characterize the ability of human observers to identify or discriminate the orientation of bars and edges presented to the distal fingerpad. The experiments were performed using a 400-probe stimulator that allowed us to flexibly deliver stimuli across a wide range of conditions. Orientation thresholds, approximately 20 degrees on average, varied only slightly across modes of stimulus presentation (scanned or indented), stimulus amplitudes, scanning speeds, and different stimulus types (bars or edges). The tactile orientation acuity was found to be poorer than its visual counterpart for stimuli of similar aspect ratio, contrast, and size. This result stands in contrast to the equivalent spatial acuity of the two systems (at the limit set by peripheral innervation density) and to the results of studies of tactile and visual letter recognition, which show that the two modalities yield comparable performance when stimuli are scaled appropriately.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)