Similar Documents
20 similar documents found (search time: 31 ms)
1.
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form.
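The column-scan sonification principle behind such image-to-sound translation can be illustrated with a minimal sketch (a vOICe-style mapping for illustration only, not the authors' actual musical transform; every function name and parameter below is a hypothetical choice):

```python
import numpy as np

def sonify_column(column, sr=8000, dur=0.05, f_lo=200.0, f_hi=4000.0):
    """Map one image column to a short sound frame: each pixel row gets a
    tone frequency (top of image = high pitch), brightness sets amplitude.
    Illustrative stand-in, not the study's transform."""
    t = np.arange(int(sr * dur)) / sr
    freqs = np.geomspace(f_hi, f_lo, len(column))   # log-spaced pitches
    tones = column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)
    return tones.sum(axis=0) / max(column.sum(), 1e-9)

# Scanning the columns left to right yields a succession of sound frames,
# which the listener integrates into a single mental image.
img = np.zeros((8, 8))
img[2, :] = 1.0                                     # one bright horizontal line
audio = np.concatenate([sonify_column(img[:, x]) for x in range(8)])
```

Under this mapping, a horizontal line becomes a steady pitch over time and a diagonal becomes a glide, which is the kind of structural feature the subjects reportedly isolated.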

2.
Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
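The peak-picking idea translates directly into code. Below is a minimal sketch on a plain Fourier (acoustic) spectrogram, with all parameter values hypothetical rather than taken from the paper:

```python
import numpy as np

def acoustic_sketch(signal, sr, n_fft=512, hop=256, peaks_per_second=10):
    """Sparsify a signal's spectrogram by keeping only the strongest
    time-frequency elements (a simple stand-in for the paper's
    peak-picking algorithm)."""
    # Short-time Fourier magnitude (the acoustic spectrogram).
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    spec = np.abs(np.stack([
        np.fft.rfft(window * signal[i * hop : i * hop + n_fft])
        for i in range(n_frames)
    ]))                                        # shape: (frames, freq_bins)
    # Total number of energy peaks kept for the whole signal.
    duration = len(signal) / sr
    k = max(1, int(round(peaks_per_second * duration)))
    sketch = np.zeros_like(spec)
    flat = spec.ravel()
    top = np.argpartition(flat, -k)[-k:]       # indices of the k largest bins
    sketch.ravel()[top] = flat[top]
    return spec, sketch

sr = 8000
t = np.arange(sr) / sr                          # 1 s test tone mixture
sig = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
spec, sketch = acoustic_sketch(sig, sr, peaks_per_second=10)
print(int(np.count_nonzero(sketch)))            # 10 peaks kept for a 1 s signal
```

Applying the same selection to an auditory spectrogram (the output of a peripheral filterbank model) instead of the Fourier magnitude would give the "auditory sketch" variant.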

3.
Ritual wailing performed during funerals provides Warao women with a vehicle for individual and collective expression and a crucial point of access to political processes. When asked about the significance of these musical and texted laments, women emphasize the importance of crying "right alongside each other." This article examines the musical and poetic elements that enable wailers to produce a collective discourse while retaining the distinctiveness of individual voices. I argue that the polyphonic and intertextual character of laments plays an essential role in the cultural construction of women's social power; specifically, these performance dynamics engender special forms of subjectivity that enable women to produce a discourse whose "truth" and "strength" resist reappropriation.

4.
Dance and music often co-occur, as evidenced when viewing choreographed dances or singers moving while performing. This study investigated how the viewing of dance motions shapes sound perception. Previous research has shown that dance reflects the temporal structure of its accompanying music, communicating musical meter (i.e. a hierarchical organization of beats) via coordinated movement patterns that indicate where strong and weak beats occur. The experiments here investigated the effects of dance cues on meter perception, hypothesizing that dance could embody the musical meter, thereby shaping participant reaction times (RTs) to sound targets occurring at different metrical positions. In experiment 1, participants viewed a video with dance choreography indicating 4/4 meter (dance condition) or a series of color changes repeated in sequences of four to indicate 4/4 meter (picture condition). A sound track accompanied these videos and participants reacted to timbre targets at different metrical positions. Participants had the slowest RTs at the strongest beats in the dance condition only. In experiment 2, participants viewed the choreography of the horse-riding dance from Psy's "Gangnam Style" in order to examine how a familiar dance might affect meter perception. Moreover, participants in this experiment were divided into a group with experience dancing this choreography and a group without experience. Results again showed slower RTs at stronger metrical positions, and the group with experience demonstrated a more refined perception of the metrical hierarchy. These results likely stem from the temporally selective division of attention between the auditory and visual domains. This study has implications for understanding (1) the impact of splitting attention among different sensory modalities and (2) the impact of embodiment on the perception of musical meter.
Viewing dance may interfere with sound processing, particularly at critical metrical positions, but embodied familiarity with dance choreography may facilitate meter awareness. These results shed light on the processing of multimedia environments.

5.
Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

6.
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As cues, auditory BCIs can exploit many characteristics of stimuli, such as tone, pitch, and voice. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less portable BCI system.
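The gain from trial averaging can be reproduced in a toy simulation: a linear SVM classifying simulated epochs that carry an idealized P300 bump. Every number below is an assumption for illustration, not a value from the study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
sr, n_epochs = 100, 200                    # 100 Hz sampling, 1 s epochs
t = np.arange(sr) / sr
p300 = np.exp(-((t - 0.35) / 0.08) ** 2)   # idealized P300 bump near 350 ms

def make_epochs(attended, n, noise=3.0, n_avg=1):
    """Simulated epochs; averaging n_avg raw trials raises the SNR,
    mimicking the trial-averaged classifier inputs."""
    return np.array([
        np.mean([attended * p300 + noise * rng.standard_normal(sr)
                 for _ in range(n_avg)], axis=0)
        for _ in range(n)
    ])

accs = {}
for n_avg in (1, 10):
    X = np.vstack([make_epochs(1, n_epochs, n_avg=n_avg),
                   make_epochs(0, n_epochs, n_avg=n_avg)])
    y = np.r_[np.ones(n_epochs), np.zeros(n_epochs)]
    accs[n_avg] = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    print(f"{n_avg:2d}-trial averaging: accuracy {accs[n_avg]:.2f}")
```

In this simulation, 10-trial averaging substantially outperforms single-trial classification, mirroring the 70.0% to 89.5% improvement reported above.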

7.
We examined the effects of visual deprivation at birth on the development of the corpus callosum in a large group of congenitally blind individuals. We acquired high-resolution T1-weighted MRI scans in 28 congenitally blind and 28 normal sighted subjects matched for age and gender. There was no overall group effect of visual deprivation on the total surface area of the corpus callosum. However, subdividing the corpus callosum into five subdivisions revealed significant regional changes in its three most posterior parts. Compared to the sighted controls, congenitally blind individuals showed a 12% reduction in the splenium, and a 20% increase in the isthmus and the posterior part of the body. A shape analysis further revealed that the bending angle of the corpus callosum was more convex in congenitally blind compared to the sighted control subjects. The observed morphometric changes in the corpus callosum are in line with the well-described cross-modal functional and structural neuroplastic changes in congenital blindness.

8.
Many structural and functional brain alterations accompany blindness, with substantial individual variation in these effects. In normally sighted people, there is correlated individual variation in some visual pathway structures. Here we examined if the changes in brain anatomy produced by blindness alter the patterns of anatomical variation found in the sighted. We derived eight measures of central visual pathway anatomy from a structural image of the brain from 59 sighted and 53 blind people. These measures showed highly significant differences in mean size between the sighted and blind cohorts. When we examined the measurements across individuals within each group we found three clusters of correlated variation, with V1 surface area and pericalcarine volume linked, and independent of the thickness of V1 cortex. These two clusters were in turn relatively independent of the volumes of the optic chiasm and lateral geniculate nucleus. This same pattern of variation in visual pathway anatomy was found in the sighted and the blind. Anatomical changes within these clusters were graded by the timing of onset of blindness, with those subjects with a post-natal onset of blindness having alterations in brain anatomy that were intermediate to those seen in the sighted and congenitally blind. Many of the blind and sighted subjects also contributed functional MRI measures of cross-modal responses within visual cortex, and a diffusion tensor imaging measure of fractional anisotropy within the optic radiations and the splenium of the corpus callosum. We again found group differences between the blind and sighted in these measures. The previously identified clusters of anatomical variation were also found to be differentially related to these additional measures: across subjects, V1 cortical thickness was related to cross-modal activation, and the volume of the optic chiasm and lateral geniculate was related to fractional anisotropy in the visual pathway. 
Our findings show that several of the structural and functional effects of blindness may be reduced to a smaller set of dimensions. It also seems that the changes in the brain that accompany blindness are on a continuum with normal variation found in the sighted.

9.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide a representation rich enough to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
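A drastically simplified version of this pipeline, joint spectro-temporal modulation features followed by a nearest-prototype classifier, can be sketched as follows. Toy spectrograms and a hand-rolled 2-D FFT feature stand in for real cortical spectro-temporal receptive fields; everything here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_spectrogram(pitch_row, mod_rate, n=64):
    """Toy (time x freq) spectrogram: three 'harmonics' of a pitch,
    amplitude-modulated over time at `mod_rate` cycles per excerpt."""
    spec = 0.05 * rng.random((n, n))            # background noise floor
    env = 1 + np.sin(2 * np.pi * mod_rate * np.arange(n) / n)
    for h in (1, 2, 3):
        spec[:, (pitch_row * h) % n] += env
    return spec

def modulation_features(spec, pool=8):
    """Joint spectro-temporal modulation energy: |2-D FFT| of the
    mean-subtracted spectrogram, coarsely pooled into a pool x pool grid.
    A crude stand-in for cortical spectro-temporal receptive fields."""
    mod = np.abs(np.fft.fft2(spec - spec.mean()))
    n = mod.shape[0]
    pooled = mod.reshape(pool, n // pool, pool, n // pool).mean(axis=(1, 3))
    f = pooled.ravel()
    return f / np.linalg.norm(f)

def classify(spec, prototypes):
    """Nearest prototype by cosine similarity (features are unit-norm)."""
    f = modulation_features(spec)
    return max(prototypes, key=lambda name: f @ prototypes[name])

# One prototype per toy "instrument": slow vs fast amplitude modulation.
protos = {"slow": modulation_features(toy_spectrogram(5, mod_rate=3)),
          "fast": modulation_features(toy_spectrogram(5, mod_rate=12))}

# Classification should survive a pitch change, as in the paper.
print([classify(toy_spectrogram(p, 3), protos) for p in (7, 11)])
print([classify(toy_spectrogram(p, 12), protos) for p in (7, 11)])
```

The temporal-modulation axis carries the attack/vibrato-like cues that separate the two toy "instruments" even when the pitch changes, echoing the paper's pitch-invariant classification.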

10.
The study of blind individuals provides insight into the brain re-organization and behavioral compensations that occur following sensory deprivation. While behavioral studies have yielded conflicting results in terms of performance levels within the remaining senses, deafferentation of visual cortical areas through peripheral blindness results in clear neuroplastic changes. Most striking is the activation of occipital cortex in response to auditory and tactile stimulation. Indeed, parts of the "unimodal" visual cortex are recruited by other sensory modalities to process sensory information in a functionally relevant manner. In addition, a larger area of the sensorimotor cortex is devoted to the representation of the reading finger in blind Braille readers. The "visual" function of the deafferented occipital cortex is also altered, where transcranial magnetic stimulation-induced phosphenes can be elicited in only 20% of blind subjects. The neural mechanisms underlying these changes remain elusive but recent data showing rapid cross-modal plasticity in blindfolded, sighted subjects argue against the establishment of new connections to explain cross-modal interactions in the blind. Rather, latent pathways that participate in multisensory percepts in sighted subjects might be unmasked and may be potentiated in the event of complete loss of visual input. These issues have important implications for the development of visual prostheses aimed at restoring some degree of vision in the blind.

11.

Background

The loss of vision has been associated with enhanced performance in non-visual tasks such as tactile discrimination and sound localization. Current evidence suggests that these functional gains are linked to the recruitment of the occipital visual cortex for non-visual processing, but the neurophysiological mechanisms underlying these crossmodal changes remain uncertain. One possible explanation is that visual deprivation is associated with an unmasking of non-visual input into visual cortex.

Methodology/Principal Findings

We investigated the effect of sudden, complete and prolonged visual deprivation (five days) in normally sighted adult individuals while they were immersed in an intensive tactile training program. Following the five-day period, blindfolded subjects performed better on a Braille character discrimination task. In the blindfold group, serial fMRI scans revealed an increase in BOLD signal within the occipital cortex in response to tactile stimulation after five days of complete visual deprivation. This increase in signal was no longer present 24 hours after blindfold removal. Finally, reversible disruption of occipital cortex function on the fifth day (by repetitive transcranial magnetic stimulation; rTMS) impaired Braille character recognition ability in the blindfold group but not in non-blindfolded controls. This disruptive effect was no longer evident once the blindfold had been removed for 24 hours.

Conclusions/Significance

Overall, our findings suggest that sudden and complete visual deprivation in normally sighted individuals can lead to profound, but rapidly reversible, neuroplastic changes by which the occipital cortex becomes engaged in processing of non-visual information. The speed and dynamic nature of the observed changes suggests that normally inhibited or masked functions in the sighted are revealed by visual loss. The unmasking of pre-existing connections and shifts in connectivity represent rapid, early plastic changes, which presumably can lead, if sustained and reinforced, to slower developing, but more permanent structural changes, such as the establishment of new neural connections in the blind.

12.
The middle temporal complex (MT/MST) is a brain region specialized for the perception of motion in the visual modality. However, this specialization is modified by visual experience: after long-standing blindness, MT/MST responds to sound. Recent evidence also suggests that the auditory response of MT/MST is selective for motion. The developmental time course of this plasticity is not known. To test for a sensitive period in MT/MST development, we used fMRI to compare MT/MST function in congenitally blind, late-blind, and sighted adults. MT/MST responded to sound in congenitally blind adults, but not in late-blind or sighted adults, and not in an individual who lost his vision between ages of 2 and 3 years. All blind adults had reduced functional connectivity between MT/MST and other visual regions. Functional connectivity was increased between MT/MST and lateral prefrontal areas in congenitally blind relative to sighted and late-blind adults. These data suggest that early blindness affects the function of feedback projections from prefrontal cortex to MT/MST. We conclude that there is a sensitive period for visual specialization in MT/MST. During typical development, early visual experience either maintains or creates a vision-dominated response. Once established, this response profile is not altered by long-standing blindness.

13.
Under certain specific conditions people who are blind have a perception of space that is equivalent to that of sighted individuals. However, in most cases their spatial perception is impaired. Is this simply due to their current lack of access to visual information or does the lack of visual information throughout development prevent the proper integration of the neural systems underlying spatial cognition? Sensory Substitution devices (SSDs) can transfer visual information via other senses and provide a unique tool to examine this question. We hypothesize that the use of our SSD (The EyeCane: a device that translates distance information into sounds and vibrations) can enable blind people to attain a similar performance level as the sighted in a spatial navigation task. We gave fifty-six participants training with the EyeCane. They navigated in real life-size mazes using the EyeCane SSD and in virtual renditions of the same mazes using a virtual-EyeCane. The participants were divided into four groups according to visual experience: congenitally blind, low vision & late blind, blindfolded sighted and sighted visual controls. We found that with the EyeCane participants made fewer errors in the maze, had fewer collisions, and completed the maze in less time on the last session compared to the first. By the third session, participants improved to the point where individual trials were no longer significantly different from the initial performance of the sighted visual group in terms of errors, time and collision.
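The EyeCane's distance-to-sound principle can be made concrete with a minimal mapping. The abstract does not specify the device's actual mapping, so the function and all parameters below are hypothetical:

```python
def distance_to_rate(distance_m, d_max=5.0, rate_min=2.0, rate_max=40.0):
    """Hypothetical EyeCane-like mapping: nearer obstacles produce faster
    click/vibration rates (Hz); beyond d_max the rate floors at rate_min."""
    d = min(max(distance_m, 0.0), d_max)   # clamp to the sensing range
    return rate_max - (rate_max - rate_min) * d / d_max

# A wall closing in from 4 m to 1 m yields an accelerating click train.
rates = [distance_to_rate(d) for d in (4.0, 3.0, 2.0, 1.0)]
```

Any monotone mapping from distance to a perceptually salient parameter (rate, pitch, vibration intensity) would serve the same navigational purpose; the monotonicity is what lets users judge proximity.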

14.
To identify the most suitable and accurate pointing methods for studying the sound localizability of persons with visual impairment, we compared the accuracy of three different pointing methods for indicating the direction of sound sources in a semi-anechoic dark room. Six subjects with visual impairment (two totally blind and four with low vision) participated in this experiment. The three pointing methods employed were (1) directing the face, (2) directing the body trunk on a revolving chair and (3) indicating a tactile cue placed horizontally in front of the subject. Seven sound emitters were arranged in a semicircle 2.0 m from the subject, 0 degrees to +/-80 degrees of the subject's midline, at a height of 1.2 m. The accuracy of the pointing methods was evaluated by measuring the deviation between the angle of the target sound source and that of the subject's response. All three methods showed that accuracy decreased as the angle of the sound source increased from the midline. The deviations recorded toward the left and the right of midline were symmetrical. In the whole frontal area (-80 degrees to +80 degrees from midline), both the tactile cue and the body trunk methods were more accurate than the face-pointing method. There was no significant difference in the center (-40 degrees to +40 degrees from midline). In the periphery (-80 degrees and +80 degrees), the tactile cue pointing method was the most accurate of all and the body trunk method was the next best. These results suggest that the most suitable pointing methods for studying the sound localizability of the frontal azimuth in subjects who are visually impaired are the tactile cue and the body trunk methods, because of their higher accuracy in the periphery.
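The accuracy measure used here, the deviation between the target azimuth and the indicated azimuth, is straightforward to compute. The response values below are made-up numbers for illustration, not the study's data:

```python
import numpy as np

def mean_abs_deviation(target_deg, response_deg):
    """Mean absolute angular deviation (degrees) over a set of trials."""
    return float(np.mean(np.abs(np.asarray(response_deg) -
                                np.asarray(target_deg))))

targets = np.array([-80, -40, 0, 40, 80])     # speaker azimuths (degrees)
# Hypothetical per-target responses: face pointing drifts more in the
# periphery than the tactile-cue method, as the study reports.
face    = targets + np.array([-14, -6, 1, 7, 15])
tactile = targets + np.array([-5, -3, 0, 2, 4])
dev_face = mean_abs_deviation(targets, face)
dev_tactile = mean_abs_deviation(targets, tactile)
```

Comparing the metric per eccentricity band (center vs periphery) rather than pooled over all targets reproduces the study's analysis structure.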

15.
Photoreception in the mammalian retina is not restricted to rods and cones but extends to a small number of intrinsically photoreceptive retinal ganglion cells (ipRGCs), expressing the photopigment melanopsin. ipRGCs are known to support various accessory visual functions including circadian photoentrainment and pupillary reflexes. However, despite anatomical and physiological evidence that they contribute to the thalamocortical visual projection, no aspect of visual discrimination has been shown to rely upon ipRGCs. Based on their currently known roles, we hypothesized that ipRGCs may contribute to distinguishing brightness. This percept is related to an object's luminance, a photometric measure of light intensity relevant for cone photoreceptors. However, the perceived brightness of different sources is not always predicted by their respective luminance. Here, we used parallel behavioral and electrophysiological experiments to first show that melanopsin contributes to brightness discrimination in both retinally degenerate and fully sighted mice. We continued to use comparable paradigms in psychophysical experiments to provide evidence for a similar role in healthy human subjects. These data represent the first direct evidence that an aspect of visual discrimination in normally sighted subjects can be supported by inner retinal photoreceptors.

16.
Speech processing inherently relies on the perception of specific, rapidly changing spectral and temporal acoustic features. Advanced acoustic perception is also integral to musical expertise, and accordingly several studies have demonstrated a significant relationship between musical training and superior processing of various aspects of speech. Speech and music appear to overlap in spectral and temporal features; however, it remains unclear which of these acoustic features, crucial for speech processing, are most closely associated with musical training. The present study examined the perceptual acuity of musicians to the acoustic components of speech necessary for intra-phonemic discrimination of synthetic syllables. We compared musicians and non-musicians on discrimination thresholds of three synthetic speech syllable continua that varied in their spectral and temporal discrimination demands, specifically voice onset time (VOT) and amplitude envelope cues in the temporal domain. Musicians demonstrated superior discrimination only for syllables that required resolution of temporal cues. Furthermore, performance on the temporal syllable continua positively correlated with the length and intensity of musical training. These findings support one potential mechanism by which musical training may selectively enhance speech perception, namely by reinforcing temporal acuity and/or perception of amplitude rise time, with implications for the translation of musical training into long-term linguistic abilities.

17.
Sight is undoubtedly important for finding and appreciating food, and cooking. Blind individuals are strongly impaired in finding food, limiting the variety of flavours they are exposed to. We have shown before that compared to sighted controls, congenitally blind individuals have enhanced olfactory but reduced taste perception. In this study we tested the hypothesis that congenitally blind subjects have enhanced orthonasal but not retronasal olfactory skills. Twelve congenitally blind and 14 sighted control subjects, matched in age, gender and body mass index, were asked to identify odours using grocery-available food powders. Results showed that blind subjects were significantly faster and tended to be better at identifying odours presented orthonasally. This was not the case when odorants were presented retronasally. We also found a significant group x route interaction, showing that although both groups performed better for retronasally compared to orthonasally presented odours, this gain was less pronounced for blind subjects. Finally, our data revealed that blind subjects were more familiar with the orthonasal odorants and used the retronasal odorants less often for cooking than their sighted counterparts. These results confirm that orthonasal but not retronasal olfactory perception is enhanced in congenital blindness, a result that is concordant with the reduced food variety exposure in this group.

18.

Background

Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remains unclear.

Methodology/Principal Findings

We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency.

Conclusions/Significance

Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

19.
Several studies have shown that blind humans can gather spatial information through echolocation. However, when localizing sound sources, the precedence effect suppresses spatial information of echoes, and thereby conflicts with effective echolocation. This study investigates the interaction of echolocation and echo suppression in terms of discrimination suppression in virtual acoustic space. In the ‘Listening’ experiment, sighted subjects discriminated between positions of a single sound source, the leading or the lagging of two sources, respectively. In the ‘Echolocation’ experiment, the sources were replaced by reflectors. Here, the same subjects evaluated echoes generated in real time from self-produced vocalizations and thereby discriminated between positions of a single reflector, the leading or the lagging of two reflectors, respectively. Two key results were observed. First, sighted subjects can learn to discriminate positions of reflective surfaces echo-acoustically with accuracy comparable to sound source discrimination. Second, in the Listening experiment, the presence of the leading source affected discrimination of lagging sources much more than vice versa. In the Echolocation experiment, however, the presence of both the lead and the lag strongly affected discrimination. These data show that the classically described asymmetry in the perception of leading and lagging sounds is strongly diminished in an echolocation task. Additional control experiments showed that the effect is owing both to the direct sound of the vocalization that precedes the echoes and to the fact that the subjects actively vocalize in the echolocation task.
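The lead/lag stimulus configuration and the echo delays involved are easy to make concrete. The sketch below synthesizes a generic precedence-effect click pair, not the study's exact vocalization-plus-reflector signals, and all parameter values are illustrative:

```python
import numpy as np

def lead_lag_pair(sr=44100, lag_ms=3.0, lag_gain=1.0, click_len=32):
    """A leading click plus a delayed copy: the classic stimulus in which
    the precedence effect suppresses the spatial cues of the lag."""
    click = np.hanning(click_len)              # short, smooth transient
    lag = int(round(sr * lag_ms / 1000))
    out = np.zeros(lag + click_len)
    out[:click_len] += click                   # leading (direct) sound
    out[lag:] += lag_gain * click              # lagging sound (the "echo")
    return out

def echo_delay_ms(distance_m, c=343.0):
    """Round-trip delay of a self-generated echo from a reflector."""
    return 2 * distance_m / c * 1000

stim = lead_lag_pair(lag_ms=3.0)               # millisecond-range lags engage precedence
delay = echo_delay_ms(1.7)                     # reflector at 1.7 m: roughly 10 ms round trip
```

In an echolocation task the "lead" is the subject's own vocalization and the "lags" are reflector echoes at these round-trip delays, which is why the classic lead/lag asymmetry can behave differently there.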

20.
Parkinson's disease (PD) results in movement and sensory impairments that can be reduced by familiar music. At present, it is unclear whether the beneficial effects of music are limited to lessening the bradykinesia of whole body movement or whether beneficial effects also extend to skilled movements of PD subjects. This question was addressed in the present study in which control and PD subjects were given a skilled reaching task that was performed with and without accompanying preferred musical pieces. Eye movements and limb use were monitored with biomechanical measures and limb movements were additionally assessed using a previously described movement element scoring system. Preferred musical pieces did not lessen limb and hand movement impairments as assessed with either the biomechanical measures or movement element scoring. Nevertheless, the PD patients with more severe motor symptoms as assessed by Hoehn and Yahr (HY) scores displayed enhanced visual engagement of the target and this impairment was reduced during trials performed in association with accompanying preferred musical pieces. The results are discussed in relation to the idea that preferred musical pieces, although not generally beneficial in lessening skilled reaching impairments, may normalize the balance between visual and proprioceptive guidance of skilled reaching.
