1.
During localization of a moving sound source, a shift of the perceived starting point relative to its actual position is one expression of the sluggishness of the auditory system. In this study, the human ability to localize the starting points of gradually or abruptly moving fused auditory images (FAIs) was compared with the ability to localize the position of a stationary sound image. Sound images moved from the midline of the head toward each of the ears. The subjects' responses were recorded using a graphics tablet. There was a tendency to shift the starting point of the trajectory in the direction of the movement. This tendency was stronger for gradual than for abrupt FAI movement and for shorter stimuli (100 ms) than for longer ones (200 ms). The magnitude of the starting point's displacement depended on the final interaural time delay. The results are discussed in terms of the "snapshots" and "movement detector" theories, as well as in terms of the sluggishness and anticipatory capacity of auditory perception.
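The movement simulation described above, a fused auditory image displaced by a gradually or abruptly changing interaural time delay, can be sketched in a few lines of Python. The code below is only an illustration of that dichotic manipulation, not the stimulus-generation procedure of the study; the sampling rate, noise carrier, 200-ms duration, and 400-μs final delay are assumed values.

```python
import numpy as np

FS = 44100          # sampling rate, Hz (assumed)
DUR = 0.2           # stimulus duration, s (the 200-ms condition)
ITD_FINAL = 400e-6  # final interaural time delay, s (assumed value)

def dichotic_stimulus(mode="gradual", dur=DUR, itd_final=ITD_FINAL, fs=FS, seed=0):
    """Return (left, right) channels of a noise burst whose interaural delay
    grows from 0 to itd_final either gradually (linear ramp) or abruptly (step)."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    t = np.arange(n) / fs
    carrier = rng.standard_normal(n)           # broadband noise carrier

    if mode == "gradual":
        itd = np.linspace(0.0, itd_final, n)   # delay ramps over the whole burst
    else:                                      # "abrupt": delay steps at the midpoint
        itd = np.where(t < dur / 2, 0.0, itd_final)

    left = carrier
    # Delay the right channel by the instantaneous ITD (fractional delay via
    # linear interpolation); the leading left ear pulls the image leftward.
    right = np.interp(t - itd, t, carrier, left=0.0)
    return left, right

left, right = dichotic_stimulus("gradual")
```

The gradual and abrupt branches differ only in how the delay reaches its final value, which is the distinction the study draws between smooth and stepwise FAI movement.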
2.
Ya. A. Al'tman. Neurophysiology, 1972, 4(6): 479–485
Single-unit activity was investigated in the inferior colliculi of cats anesthetized with chloralose and urethane, in experiments simulating sound localization in the presence of direct and reflected signals. In more than half of the neurons with specific angular sensitivity, the reflected signal either caused no change in the response to the direct signal or augmented angular sensitivity to the direct signal. In about one quarter of the neurons studied, the special characteristics of the response to a direct signal located at a certain angle were abolished by a reflected signal, i.e., the reflected signal interfered with assessment of the angle. Changes in the remaining neurons were more complex: depending on the combination of angles of the direct and reflected signals, no change, augmentation of angular sensitivity, or its diminution could be observed in the same neuron. It must be particularly emphasized that the responses of about half of the neurons studied to a direct signal were changed only if the direct and reflected signals were separated by particular time intervals. This fact suggests that the reflected signal can help to determine the distance of the sound source from the animal.
I. P. Pavlov Institute of Physiology, Academy of Sciences of the USSR, Leningrad. Translated from Neirofiziologiya, Vol. 4, No. 6, pp. 621–628, November–December, 1972.
3.
E. A. Petropavlovskaia, L. B. Shestopalova, S. F. Vaĭtulevich. Zhurnal vyssheĭ nervnoĭ deiatelnosti imeni I. P. Pavlova, 2011, 61(3): 293–305
The ability to localize the endpoints of sound image trajectories was studied in comparison with stationary sound image positions. Sound images moved either gradually or abruptly to the left or right of the head midline. Different types of sound image movement were simulated by manipulating the interaural time delay. Subjects were asked to estimate the position of the virtual sound source using a graphic tablet. The perceived endpoints of the moving sound image trajectories, like stationary stimulus positions, depended on the interaural time delay. The perceived endpoints of moving sound images simulated by stimuli with a final interaural time delay below 200 μs were displaced further from the head midline than stationary stimuli with the same interaural time delays. This forward displacement of the perceived position of the moving target can be considered "representational momentum" and can be explained by mental extrapolation of dynamic information, which is necessary for subsequent sensorimotor coordination. For interaural time delays above 400 μs, the final positions of gradually and abruptly moving sound sources were closer to the head midline than the corresponding stationary sound image positions. A comparison of the two duration conditions showed that, for the longer stimuli, the endpoints of gradually moving sound images were lateralized further from the head midline at interaural time delays above 400 μs.
4.
Ia. A. Al'tman, N. N. Bekhterev, S. F. Vaĭtulevich, N. I. Nikitin. Rossiĭskii fiziologicheskiĭ zhurnal imeni I. M. Sechenova / Rossiĭskaia akademiia nauk, 2003, 89(3): 271–279
The work presents experimental data on changes in the electrical responses of the midbrain centre of the auditory system during contraphasic binaural presentation of series of sound impulses. Neuronal cortical activity is selective with respect to dynamic interaural changes in the phase spectrum of the signals, which may serve as a basis for mechanisms of localizing a moving sound source. Human auditory evoked potentials reveal memorization of the direction of auditory image movement, as shown by the appearance of mismatch negativity in response to stimuli deviating from the standard.
5.
Ia. A. Al'tman. Zhurnal evoliutsionnoĭ biokhimii i fiziologii, 1990, 26(6): 757–764
In patients with epileptic lesions of the cortex and mediobasal structures of the brain, the perception of the spatial position of sound images during dichotic stimulation was studied. It was established that the limiting interval necessary for the sensation of a moving sound image to form increases with right-sided lesions of the temporal cortex. With left-sided lesions of the temporal lobe, more diffuse disturbances of the image movement trajectory (from both the right and the left) are observed, whereas right-sided lesions result in disturbances of movement only on the side opposite to the lesion. Lesions of the cortex and of the mediobasal parts of the temporal lobe are accompanied by the same gradient of disturbances in the trajectory of sound image movement and in the short-term imprinting of sequences of signals differing in spatial position. Maximum disturbances are observed with combined lesions of the cortical and mediobasal parts of the temporal lobe, whereas purely cortical or purely hippocampal lesions result in less pronounced disturbances. It is suggested that the combined activity of the auditory cortex and hippocampus is necessary for localization of a sound source.
6.
7.
J. M. Harrison. Federation Proceedings, 1974, 33(8): 1901–1903
8.
As auditory genes and deafness-associated mutations are discovered at a rapid rate, exciting opportunities have arisen to uncover the molecular mechanisms underlying hearing and hearing impairment. Single genes have been identified as pathogenic for dominant or recessive forms of nonsyndromic hearing loss, syndromic hearing loss, and, in some cases, even multiple forms of hearing loss. Modifier loci and genes have been found, and investigations into their role in the hearing process will yield valuable insight into the fundamental processes of the auditory system.
9.
In cortical areas, direction-specific receptive fields occur systematically. Direction specificity is based on asymmetric coupling of neurons. Such a coupling allows exact localization of moving stimuli. For this task, the asymmetry in the time domain is compensated for by a spatial asymmetry.
This research was supported by DFG grant Se251/9. Professor W. von Seelen was in charge of the project.
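The mechanism summarized above, direction selectivity arising from asymmetric coupling in which a temporal delay is paired with a spatial offset, is the principle of a correlation-type (Reichardt) motion detector. The sketch below is a generic illustration of that principle rather than the authors' cortical model; the three-step delay and the drifting-bar stimulus are invented for the demonstration.

```python
import numpy as np

def direction_selective_response(stimulus, delay=3):
    """Correlation-type (Reichardt) detector: each subunit multiplies its own
    temporally delayed input with the undelayed input of its spatial neighbour.
    `stimulus` has shape (time, space); returns rightward-minus-leftward output."""
    s = np.asarray(stimulus, dtype=float)
    d = np.zeros_like(s)
    d[delay:] = s[:-delay]                    # temporally delayed copy of the input
    rightward = d[:, :-1] * s[:, 1:]          # delayed left unit x current right unit
    leftward = s[:, :-1] * d[:, 1:]           # mirror-symmetric subunit
    return float((rightward - leftward).sum())

# A bright bar drifting rightward across 20 spatial positions.
T, X = 60, 20
stim = np.zeros((T, X))
for t in range(T):
    stim[t, (t // 3) % X] = 1.0

print(direction_selective_response(stim))     # positive for rightward motion
```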
10.
Our brain continuously receives complex combinations of sounds originating from different sources and relating to different events in the external world. Timing differences between the two ears can be used to localize sounds in space, but only when the inputs to the two ears have similar spectrotemporal profiles (high binaural coherence). We used fMRI to investigate the modulation of auditory responses by binaural coherence. We assessed how the processing of these cues depends on whether spatial information is task relevant, and whether brain activity correlates with subjects' localization performance. We found that activity in Heschl's gyrus increased with increasing coherence, irrespective of whether localization was task relevant. Posterior auditory regions also showed increased activity for high coherence, primarily when sound localization was required and subjects successfully localized sounds. We conclude that binaural coherence cues are processed throughout the auditory cortex and that these cues are used in posterior regions for successful auditory localization.
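Binaural coherence of the kind manipulated in this study is commonly quantified as the peak of the normalized interaural cross-correlation within the physiological range of interaural delays. The Python sketch below computes that quantity for toy stereo noise; the mixing scheme and all parameter values are assumptions for illustration and are not the stimuli of the fMRI experiment.

```python
import numpy as np

def binaural_coherence(left, right, fs, max_itd=1e-3):
    """Peak of the normalized interaural cross-correlation, restricted to lags
    within +/- max_itd seconds (roughly the physiological ITD range)."""
    left = left - left.mean()
    right = right - right.mean()
    lags = np.arange(-len(right) + 1, len(left)) / fs
    xcorr = np.correlate(left, right, mode="full")
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return np.max(np.abs(xcorr[np.abs(lags) <= max_itd])) / norm

fs = 44100
n = fs // 10                                   # 100-ms noise tokens
rng = np.random.default_rng(1)
common = rng.standard_normal(n)                # component shared by both ears
noise_l, noise_r = rng.standard_normal(n), rng.standard_normal(n)

for mix in (1.0, 0.5, 0.0):                    # 1.0 = identical ears, 0.0 = independent ears
    l = mix * common + (1 - mix) * noise_l
    r = mix * common + (1 - mix) * noise_r
    print(mix, round(binaural_coherence(l, r, fs), 2))
```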
11.
Short-latency auditory evoked potentials during change in the physical parameter of a sound stimulus
A. S. Khachunts, L. G. Vaganyan, R. A. Bagdasaryan, N. E. Tatevosyan, I. G. Tatevosyan, E. G. Kostanyan, K. A. Manasyan, R. N. Bilyan. Human Physiology, 2000, 26(3): 290–295
The amplitude-temporal and spectral characteristics of short-latency auditory evoked potentials (SLAEPs), recorded in ipsi- and contralateral derivations under monaural stimulation with sound clicks (initial rarefaction phase followed by compression and alternation, intensity 60 dB, frequency 11.1 Hz), were studied. Substantial changes in SLAEP morphology in response to polarity inversion of the acoustic stimulus were found; waves II, IV, VI, and VII changed to the greatest extent. Spectral analysis revealed three main SLAEP components, low-frequency (LF), medium-frequency (MF), and high-frequency (HF), together with their respective frequency bands. Changing the click phase from rarefaction to compression resulted in a bilateral redistribution of power between the MF and HF components, expressed as a decrease in the HF peak power and a simultaneous rise in MF power. The selective effect of polarity inversion of the sound stimulus on the MF and HF components supports the conclusion that the activity of the SLAEP-generating structures is mainly reflected in these components. It is suggested that two populations of phase-sensitive units are represented in the auditory analyzer; these populations determine the characteristic changes in SLAEP morphology and spectral characteristics.
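The LF/MF/HF decomposition reported above is a band-power analysis of the evoked-potential spectrum. The sketch below shows such a computation on a synthetic waveform using scipy's Welch estimator; the sampling rate, the stand-in waveform, and the band edges are all assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, bands):
    """Integrate the Welch power spectral density over each (low, high) band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in bands.items()}

fs = 20000                                    # assumed sampling rate for a brainstem recording
t = np.arange(int(0.010 * fs)) / fs           # 10-ms post-stimulus epoch
slaep = (np.sin(2 * np.pi * 500 * t)          # synthetic waveform standing in for
         + 0.5 * np.sin(2 * np.pi * 1200 * t))  # an averaged SLAEP epoch

# Illustrative band edges only; the paper's LF/MF/HF boundaries are not given here.
bands = {"LF": (0, 400), "MF": (400, 900), "HF": (900, 2000)}
print(band_power(slaep, fs, bands))
```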
12.
13.
14.
15.
16.
17.
Spike timing is precise in the auditory system, and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear, and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and a spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on the source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model that exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies that could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing for extracting spatial information about sources independently of the source signal.
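The core idea above, that location-specific binaural structure can be read out by matching the observed interaural timing pattern against stored location templates, can be caricatured without spiking neurons. The sketch below substitutes toy pure-delay impulse responses and per-band cross-correlation for the paper's measured human head-related transfer functions and spiking synchrony detection, so every parameter in it (sampling rate, frequency bands, delay values) is an assumption for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

FS = 16000
rng = np.random.default_rng(0)

# Toy "acoustic environment": each location is a pair of left/right impulse
# responses implementing a pure interaural delay (made up, not measured HRTFs).
def toy_hrir(itd_samples, length=32):
    left, right = np.zeros(length), np.zeros(length)
    left[0] = 1.0
    right[abs(itd_samples)] = 1.0
    return (left, right) if itd_samples >= 0 else (right, left)

LOCATIONS = {f"{az:+d} samples": toy_hrir(az) for az in range(-8, 9, 2)}

def band_itd(left, right, fs, band):
    """Interaural delay (in samples) in one frequency band, taken from the
    peak of the cross-correlation of the band-filtered ear signals."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    l, r = lfilter(b, a, left), lfilter(b, a, right)
    xcorr = np.correlate(l, r, mode="full")
    return np.argmax(xcorr) - (len(r) - 1)

def localize(left, right, fs, bands=((300, 800), (800, 1500))):
    """Pick the stored location whose impulse-response delay best matches the
    interaural delays observed across frequency bands."""
    observed = [band_itd(left, right, fs, band) for band in bands]
    def template_itd(hrir):
        hl, hr = hrir
        return np.argmax(np.correlate(hl, hr, mode="full")) - (len(hr) - 1)
    return min(LOCATIONS, key=lambda name:
               sum(abs(o - template_itd(LOCATIONS[name])) for o in observed))

# Render an unknown broadband source at one of the stored locations.
source = rng.standard_normal(FS // 4)
hl, hr = LOCATIONS["+6 samples"]
left, right = fftconvolve(source, hl), fftconvolve(source, hr)
print(localize(left, right, FS))              # expected: "+6 samples"
```

In the model described in the abstract, this matching step is carried out by location-specific assemblies of coincidence-detecting neurons whose synchrony pattern identifies the source location, rather than by an explicit cross-correlation.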
18.
19.
Segmenting the complex acoustic mixture that makes up a typical auditory scene into relevant perceptual objects is one of the main challenges of the auditory system [1], for both human and nonhuman species. Several recent studies indicate that perceptual auditory object formation, or "streaming," may be based on neural activity within the auditory cortex and beyond [2, 3]. Here, we find that scene analysis starts much earlier in the auditory pathways. Single units were recorded from a peripheral structure of the mammalian auditory brainstem, the cochlear nucleus. Peripheral responses were similar to cortical responses and displayed all of the functional properties required for streaming, including multisecond adaptation. Behavioral streaming was also measured in human listeners. Neurometric functions derived from the peripheral responses accurately predicted behavioral streaming. This reveals that subcortical structures may already contribute to the analysis of auditory scenes. This finding is consistent with the observation that species lacking a neocortex can still achieve and benefit from behavioral streaming [4]. For humans, we argue that auditory scene analysis of complex scenes is probably based on interactions between subcortical and cortical neural processes, with the relative contribution of each stage depending on the nature of the acoustic cues forming the streams.
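A neurometric function of the kind mentioned above is typically built by converting trial-by-trial neural responses into a discriminability index and mapping that index onto a predicted probability of perceptual segregation. The sketch below illustrates that generic step on simulated Poisson spike counts; the counts, frequency separations, and the d'-to-probability mapping are invented for illustration and are not the recorded cochlear nucleus data.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def dprime(resp_a, resp_b):
    """Standard d' between two sets of trial responses."""
    pooled_sd = np.sqrt(0.5 * (resp_a.var(ddof=1) + resp_b.var(ddof=1)))
    return (resp_a.mean() - resp_b.mean()) / pooled_sd

# Simulated spike counts of one unit to A and B tones at several frequency
# separations (invented numbers standing in for recorded responses).
separations = [1, 3, 6, 9]                    # semitones between A and B
neurometric = []
for df in separations:
    resp_a = rng.poisson(20, size=100).astype(float)                   # responses to A tones
    resp_b = rng.poisson(max(20 - 2 * df, 2), size=100).astype(float)  # B response falls off with df
    # Map d' to a predicted probability of hearing two separate streams.
    neurometric.append(norm.cdf(dprime(resp_a, resp_b) / np.sqrt(2)))

print(dict(zip(separations, np.round(neurometric, 2))))
```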