Similar Literature
20 similar records found
1.
Recently, musical sounds from pre-recorded orchestra sample libraries (OSLs) have become indispensable in music production for the stage and the popular charts. Surprisingly, it is unknown whether human listeners can identify sounds as stemming from real orchestras or from OSLs. An internet-based experiment was therefore conducted to investigate whether a classic orchestral work, produced with sounds from a state-of-the-art OSL, could be reliably discerned from a live orchestra recording of the piece. The full sample of listeners (N = 602) identified the correct sound source at an average rate of 72.5%. This rate slightly exceeded Alan Turing's well-known upper threshold of 70% for a convincing simulated performance. However, while sound experts tended to identify the sound source correctly, participants with lower listening expertise, who resemble the majority of music consumers, achieved only 68.6%. As non-expert listeners in the experiment were virtually unable to tell the real-life and OSL sounds apart, it is assumed that OSLs will become more common in music production for economic reasons.
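The comparison of the observed 72.5% rate against Turing's 70% benchmark can be made concrete with a one-sided exact binomial test. A minimal Python sketch, assuming for illustration that 436 of the 602 listeners answered correctly (≈72.5%; the abstract reports only the rate, not the count):

```python
from math import comb

def binom_sf(k, n, p):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 602      # listeners in the study
k = 436      # assumed number of correct identifications (~72.5%); illustrative only
p0 = 0.70    # Turing's benchmark for a convincing simulation

# p-value: probability of seeing at least k correct if the true rate were exactly 70%
p_value = binom_sf(k, n, p0)
print(f"observed rate: {k / n:.3f}, one-sided p vs 70%: {p_value:.3f}")
```

Under these assumed counts the test quantifies how surprising the excess over 70% would be by chance alone; whether it reaches conventional significance depends on the exact counts, which the abstract does not report.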

2.
It has been shown that humans prefer consonant sounds from the early stages of development. From a comparative psychological perspective, although previous studies have shown that birds and monkeys can discriminate between consonant and dissonant sounds, it remains unclear whether nonhumans have a spontaneous preference for consonant music over dissonant music as humans do. We report here that a five-month-old human-raised chimpanzee (Pan troglodytes) preferred consonant music. The infant chimpanzee consistently preferred to produce, with the aid of our computerized setup, consonant versions of music for a longer duration than dissonant versions. This result suggests that the preference for consonance is not unique to humans. Further, it supports the hypothesis that one major basis of musical appreciation has some evolutionary origins.

3.
Theories of music evolution agree that human music has an affective influence on listeners. Tests of non-humans have provided little evidence of preferences for human music. However, prosodic features of speech (‘motherese’) influence the affective behaviour of non-verbal infants as well as domestic animals, suggesting that features of music can influence the behaviour of non-human species. We incorporated acoustical characteristics of tamarin affiliation vocalizations and tamarin threat vocalizations into corresponding pieces of music, and compared music composed for tamarins with music composed for humans. Tamarins were generally indifferent to playbacks of human music, but responded with increased arousal to music based on tamarin threat vocalizations, and with decreased activity and increased calm behaviour to music based on tamarin affiliation vocalizations. Affective components in human music may have evolutionary origins in the structure of calls of non-human animals. In addition, animal signals may have evolved to manage the behaviour of listeners by influencing their affective state.

4.
How does music induce or evoke feeling states in listeners? A number of mechanisms have been proposed for how sounds induce emotions, including innate auditory responses, learned associations and mirror neuron processes. Inspired by ethology, it is suggested that the ethological concepts of signals, cues and indices offer additional analytic tools for better understanding induced affect. It is proposed that ethological concepts help explain why music is able to induce only certain emotions, why some induced emotions are similar to the displayed emotion (whereas other induced emotions differ considerably from the displayed emotion), why listeners often report feeling mixed emotions and why only some musical expressions evoke similar responses across cultures.

5.
6.
Psychophysiological evidence suggests that music and language are intimately coupled such that experience/training in one domain can influence processing required in the other domain. While the influence of music on language processing is now well-documented, evidence of language-to-music effects has yet to be firmly established. Here, using a cross-sectional design, we compared the performance of musicians to that of tone-language (Cantonese) speakers on tasks of auditory pitch acuity, music perception, and general cognitive ability (e.g., fluid intelligence, working memory). While musicians demonstrated superior performance on all auditory measures, comparable perceptual enhancements were observed for Cantonese participants, relative to English-speaking nonmusicians. These results provide evidence that tone-language background is associated with higher auditory perceptual performance for music listening. Musicians and Cantonese speakers also showed superior working memory capacity relative to nonmusician controls, suggesting that in addition to basic perceptual enhancements, tone-language background and music training might also be associated with enhanced general cognitive abilities. Our findings support the notion that tone-language speakers and musically trained individuals outperform English-speaking listeners in the perceptual-cognitive processing necessary for basic auditory as well as complex music perception. These results illustrate bidirectional influences between the domains of music and language.

7.
Rhythmic grouping and discrimination are fundamental to music. Compared with the perception of pitch, rhythmic abilities in animals have received scant attention until recently. In this experiment, four pigeons were tested with three types of auditory rhythmic discriminations to investigate their processing of this aspect of sound and music. Two experiments examined a meter discrimination in which successively presented idiophonic sounds were repeated in meters of different lengths in a go/no-go discrimination task. With difficulty, the birds eventually learned to discriminate between 8/4 and 3/4 meters constructed from cymbal and tom drum sounds at 180 beats per minute. This discrimination subsequently transferred to faster tempos, but not to different drum sounds or their combination. Experiment 3 tested rhythmic and arrhythmic patterns of sounds. After 40 sessions of training, these same pigeons showed no discrimination. Experiment 4 tested repetitions of a piano sound at fast and slow tempos. This discrimination was readily learned and showed transfer to novel tempos. The pattern of results suggests that pigeons can time periodic auditory events, but their capacity to understand generalized rhythmic groupings appears limited.

8.
For listeners familiar with Western twelve-tone equal-tempered (12-TET) music, a novel microtonal tuning system is expected to present additional processing challenges. We aimed to determine whether this was the case, focusing on the extent to which our perceptions can be considered bottom-up (psychoacoustic and primarily perceptual) and top-down (dependent on familiarity and cognitive processing). We elicited both overt response ratings, and covert event-related potentials (ERPs), so as to compare subjective impressions of sounds with the neurophysiological processing of the acoustic signal. We hypothesised that microtonal intervals are perceived differently from 12-TET intervals, and that the responses of musicians (n = 10) and non-musicians (n = 10) are distinct. Two-note chords were presented comprising 12-TET intervals (consonant and dissonant) or microtonal (quarter tone) intervals, and ERP, subjective roughness ratings, and liking ratings were recorded successively. Musical experience mediated the perception of differences between dissonant and microtone intervals, with non-musicians giving similar ratings for each, and musicians preferring dissonant over the less commonly used microtonal intervals, rating them as less rough. ERP response amplitude was greater for consonant intervals than other intervals. Musical experience interacted with interval type, suggesting that musical expertise facilitates the sensory and perceptual discrimination of microtonal intervals from 12-TET intervals, and an increased ability to categorize such intervals. Non-musicians appear to have perceived microtonal intervals as instances of neighbouring 12-TET intervals.

9.
Anthropogenic sound is increasingly considered a major environmental issue, but its effects are relatively unstudied. Organisms may be directly affected by anthropogenic sound in many ways, including interference with their ability to detect mates, predators, or food, and disturbances that directly affect one organism may in turn have indirect effects on others. Thus, to fully appreciate the net effect of anthropogenic sound, it may be important to consider both direct and indirect effects. We report here on a series of experiments to test the hypothesis that anthropogenic sound can generate cascading indirect effects within a community. We used a study system of lady beetles, soybean aphids, and soybean plants, which are a useful model for studying the direct and indirect effects of global change on food webs. For sound treatments, we used several types of music, as well as a mix of urban sounds (e.g., sirens, vehicles, and construction equipment), each at volumes comparable to a busy city street or farm tractor. In 18-hr feeding trials, rock music and urban sounds caused lady beetles to consume fewer aphids, but other types of music had no effect even at the same volume. We then tested the effect of rock music on the strength of trophic cascades in a 2-week experiment in plant growth chambers. When exposed to music by AC/DC, who articulated the null hypothesis that “rock and roll ain't noise pollution” in a song of the same name, lady beetles were less effective predators, resulting in higher aphid density and reduced final plant biomass relative to control (no music) treatments. While it is unclear what characteristics of sound generate these effects, our results reject the AC/DC hypothesis and demonstrate that altered interspecific interactions can transmit the indirect effects of anthropogenic noise through a community.

10.
N. L. Wallin. Human Evolution, 2000, 15(3-4): 199-242
Musical experience and creativity are generally regarded as depending largely on cultural conditions and hence on higher cognitive functions. True as this may be, numerous responses to music (including the urge to make music) derive from deeper levels of the human organism, namely from vegetative and limbic functions that are alternately aroused and moderated. Although the behavioral intensity and quality emanating from such evolutionarily early nervous structures may be affected by cultural influence, they still seem to be essentially independent of it. Similar specific responses to acoustical and/or motor-patterned stimuli are found among some other higher vertebrates which, like humans, are equipped with sophisticated, mutually well-tuned mechanisms for hearing, sound production and locomotion. However, it remains an open question even today whether these manifestations of auditory-phonatory-locomotor abilities are mere analogues or share a common evolutionary background. The current discussion of this matter has accumulated data that apparently support the latter view, pointing to sexual selection as the common force, as first suggested by Charles Darwin (153). Other, still more recent data from genetics and neuroscience may be interpreted as hinting that the common origin is a more elemental organismic feature, a metabolic-homeostatic variable which, owing to its evolutionary strength, eventually created the platform for a radiation of adaptations concerning species-specific patterned sounds and locomotions serving a broad spectrum of tasks, among them sexual selection. This line of reasoning is here, with reference to recent biological data, made the basis for a hypothetical model of music as an expression of an early homeostatic feedback mechanism.
Accordingly, music has a central variable, a “heart” or “core”, which is not found exclusively in music but appears globally as a releasing mechanism for basic endocrine, autonomic and elementary cognitive functions. It is acoustical or motoric in nature, or a combination of the two, and is performed in repetitive trains of impulses. It is further assumed that the targets of its operations are mainly proteins with a regulatory effect on cellular and synaptic states. The principal representatives of these proteins are growth factors, especially NGF, which was originally regarded as a growth stimulator within the peripheral and sympathetic systems but eventually proved also to be a synergetic modulator of neuro-endocrine-immune reactions, i.e. of the three central homeostatic systems (5, 80). One can speculate that this variable is functionally active at so elemental a level that it has escaped being knocked out by forceful “higher” and evolutionarily younger factors (49:13). This hypothesis, that music has its roots within, and is part of, a globally occurring natural acoustical-motoric stimulus manifested in a great variety of auditory and motoric behavior in humans and some other higher vertebrates, implies that humankind has developed this stimulus into a category of acoustical structures that oscillate around an unstable point of equilibrium. Exactly such structures, not stochastic but not too predictable either, affect the organism mainly on a sensory-vegetative level (59, 102, 137, 151). They are, in addition, perceptually optimal in creating cortical spatio-temporal neural patterns with strong interhemispheric coherence (110, 130). According to this scenario, music did not originate from a human need for communication or as an aspect of sexual selection. It emerged from elemental processes within the individual organism striving to maintain bodily and mental fitness, thus on a pre-social level.
What was beneficial to the single individual in the fight for survival was also good for the group and its survival. From that platform, music has evolved in symbiosis with dance and play across a large spectrum of social functions, in which sexual selection, ritual and autonomously aesthetic tasks acquired a focal role that increased over time and was always accompanied by emotional events. Beneath it all, the ticking of this archaic mechanism, totally value-neutral in cultural-ethical terms, goes on without pause in the deep structure of music, contributing to the maintenance of an optimal functional balance in the body and mind of the individual, and of the group as well.

11.
Schaette R, Turtle C, Munro KJ. PLoS ONE, 2012, 7(6): e35238
Tinnitus, a phantom auditory sensation, is associated with hearing loss in most cases, but it is unclear whether hearing loss causes tinnitus. Phantom auditory sensations can be induced in normal-hearing listeners when they experience severe auditory deprivation such as confinement in an anechoic chamber, which can be regarded as somewhat analogous to a profound bilateral hearing loss. As this condition is relatively uncommon among tinnitus patients, induction of phantom sounds by a lesser degree of auditory deprivation could advance our understanding of the mechanisms of tinnitus. In this study, we therefore investigated the reporting of phantom sounds after continuous use of an earplug. Eighteen healthy volunteers with normal hearing wore a silicone earplug continuously in one ear for 7 days. The attenuation provided by the earplugs simulated a mild high-frequency hearing loss: mean attenuation increased from <10 dB at 0.25 kHz to >30 dB at 3 and 4 kHz. Fourteen of the 18 participants reported phantom sounds during earplug use. Eleven participants presented with stable phantom sounds on day 7 and underwent tinnitus spectrum characterization with the earplug still in place. The spectra showed that the phantom sounds were perceived predominantly as high-pitched, corresponding to the frequency range most affected by the earplug. In all cases, the auditory phantom disappeared when the earplug was removed, indicating a causal relation between auditory deprivation and phantom sounds. This relation matches the predictions of our computational model of tinnitus development, which proposes a possible mechanism by which a stabilization of neuronal activity through homeostatic plasticity in the central auditory system could lead to the development of a neuronal correlate of tinnitus when auditory nerve activity is reduced due to the earplug.

12.
Listeners consistently perceive approaching sounds to be closer than they actually are and perceptually underestimate the time to arrival of looming sound sources. In a natural environment, this underestimation results in more time than expected to evade or engage the source and affords a “margin of safety” that may provide a selective advantage. However, a key component in the proposed evolutionary origins of the perceptual bias is the appropriate timing of anticipatory motor behaviors. Here we show that listeners with poorer physical fitness respond sooner to looming sounds and with a larger margin of safety than listeners with better physical fitness. The anticipatory perceptual bias for looming sounds is negatively correlated with physical strength and positively correlated with recovery heart rate (a measure of aerobic fitness). The results suggest that the auditory perception of looming sounds may be modulated by the response capacity of the motor system.

13.
Perception of complex sound is a process carried out in everyday life and contributes to the way one perceives reality. Attempting to explain sound perception and how it affects human beings is complicated. The physics of a simple sound can be described as a function of frequency, amplitude and phase. The psychology of sound, also termed psychoacoustics, has its own distinct elements of pitch, intensity and timbre. An interconnection exists between the physics and the psychology of hearing.

Music, being a complex sound, contributes to communication and conveys information with semantic and emotional elements. These elements indicate the involvement of the central nervous system through processes of integration and interpretation, together with peripheral auditory processing.

The effects of sound and music on human psychology and physiology are complicated. The psychological influences of listening to different types of music are based on the different characteristics of basic musical sounds. Attempting to explain music perception becomes simpler if music is broken down into its basic auditory signals, whose perception is analyzed by the science of psychoacoustics. Differences in complex sound perception have been found between normal subjects and psychiatric patients, and between different types of psychopathologies.
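The abstract's claim that a simple sound is fully described by frequency, amplitude and phase can be illustrated by sampling a pure tone; the function name and parameters below are ours, chosen for illustration only:

```python
import math

def pure_tone(freq_hz, amplitude, phase_rad, duration_s=0.01, sample_rate=44100):
    """Sample a sinusoid: a simple sound fully described by frequency, amplitude, phase."""
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate + phase_rad)
            for t in range(n)]

# A 440 Hz tone (concert A) at half amplitude and zero phase
samples = pure_tone(440.0, 0.5, 0.0)
```

Changing only these three parameters changes the perceived pitch (frequency), loudness (amplitude) and alignment in time (phase), which is the physics-to-psychoacoustics mapping the abstract describes.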

14.
Whether music was an evolutionary adaptation that conferred survival advantages or a cultural creation has generated much debate. Consistent with an evolutionary hypothesis, music is unique to humans, emerges early in development and is universal across societies. However, the adaptive benefit of music is far from obvious. Music is highly flexible, generative and changes rapidly over time, consistent with a cultural creation hypothesis. In this paper, it is proposed that much of musical pitch and timing structure adapted to preexisting features of auditory processing that evolved for auditory scene analysis (ASA). Thus, music may have emerged initially as a cultural creation made possible by preexisting adaptations for ASA. However, some aspects of music, such as its emotional and social power, may have subsequently proved beneficial for survival and led to adaptations that enhanced musical behaviour. Ontogenetic and phylogenetic evidence is considered in this regard. In particular, enhanced auditory–motor pathways in humans that enable movement entrainment to music and consequent increases in social cohesion, and pathways enabling music to affect reward centres in the brain should be investigated as possible musical adaptations. It is concluded that the origins of music are complex and probably involved exaptation, cultural creation and evolutionary adaptation.

15.
This paper reviews the basic aspects of auditory processing that play a role in the perception of speech. The frequency selectivity of the auditory system, as measured using masking experiments, is described and used to derive the internal representation of the spectrum (the excitation pattern) of speech sounds. The perception of timbre and distinctions in quality between vowels are related to both static and dynamic aspects of the spectra of sounds. The perception of pitch and its role in speech perception are described. Measures of the temporal resolution of the auditory system are described and a model of temporal resolution based on a sliding temporal integrator is outlined. The combined effects of frequency and temporal resolution can be modelled by calculation of the spectro-temporal excitation pattern, which gives good insight into the internal representation of speech sounds. For speech presented in quiet, the resolution of the auditory system in frequency and time usually markedly exceeds the resolution necessary for the identification or discrimination of speech sounds, which partly accounts for the robust nature of speech perception. However, for people with impaired hearing, speech perception is often much less robust.

16.
We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.

17.
Neural specializations for speech and pitch: moving beyond the dichotomies
The idea that speech processing relies on unique, encapsulated, domain-specific mechanisms has been around for some time. Another well-known idea, often espoused as being in opposition to the first proposal, is that processing of speech sounds entails general-purpose neural mechanisms sensitive to the acoustic features that are present in speech. Here, we suggest that these dichotomous views need not be mutually exclusive. Specifically, there is now extensive evidence that spectral and temporal acoustical properties predict the relative specialization of right and left auditory cortices, and that this is a parsimonious way to account not only for the processing of speech sounds, but also for non-speech sounds such as musical tones. We also point out that there is equally compelling evidence that neural responses elicited by speech sounds can differ depending on more abstract, linguistically relevant properties of a stimulus (such as whether it forms part of one's language or not). Tonal languages provide a particularly valuable window to understand the interplay between these processes. The key to reconciling these phenomena probably lies in understanding the interactions between afferent pathways that carry stimulus information, with top-down processing mechanisms that modulate these processes. Although we are still far from the point of having a complete picture, we argue that moving forward will require us to abandon the dichotomy argument in favour of a more integrated approach.

18.
In an earlier study, we found that humans were able to correctly categorize dog barks recorded in various situations. Acoustic parameters such as tonality, pitch and inter-bark time intervals seemed to have a strong effect on how human listeners described the emotionality of these dog vocalisations. In this study, we investigated whether the acoustic parameters of dog barks affect human listeners in the way that studies of other mammalian species would lead us to expect (for example, low, hoarse sounds indicating aggression; high-pitched, tonal sounds indicating subordinance/fear). People with different levels of experience with dogs were asked to describe the emotional content of several artificially assembled bark sequences on the basis of five emotional states (aggressiveness, fear, despair, playfulness, happiness). The selection of the barks was based on low, medium and high values of tonality and peak frequency. For assembling the artificial bark sequences, we used short, middle or long inter-bark intervals. We found that humans with different levels of experience with dogs described the emotional content of the bark sequences quite similarly, and the extent of previous experience with the given breed (Mudi), or with dogs in general, did not cause characteristic differences in the emotionality scores. The scoring of the emotional content of the bark sequences was in accordance with the so-called Morton's structural–acoustic rules. Thus, low-pitched barks were described as aggressive, while tonal and high-pitched barks were scored as either fearful or desperate, but always without aggressiveness. In general, the tonality of a bark sequence had much less effect than its pitch.
We also found that the inter-bark intervals had a strong effect on how listeners judged the emotionality of dog barks: bark sequences with short inter-bark intervals were scored as aggressive, while sequences with longer inter-bark intervals received low aggression scores. High-pitched bark sequences with long inter-bark intervals were considered happy and playful, independently of their tonality. These findings show that dog barks function as predicted by the structural–motivational rules developed for acoustic signals in other species, suggesting that dog barks may constitute a functional communication system, at least in the dog–human relationship. In sum, it seems that many different emotions can be expressed by varying at least three acoustic parameters.

19.
The diverse forms and functions of human music place obstacles in the way of an evolutionary reconstruction of its origins. In the absence of any obvious homologues of human music among our closest primate relatives, theorizing about its origins, in order to make progress, needs constraints from the nature of music, the capacities it engages, and the contexts in which it occurs. Here we propose and examine five fundamental constraints that bear on theories of how music and some of its features may have originated. First, cultural transmission, bringing the formal powers of cultural as contrasted with Darwinian evolution to bear on its contents. Second, generativity, i.e. the fact that music generates infinite pattern diversity by finite means. Third, vocal production learning, without which there can be no human singing. Fourth, entrainment with perfect synchrony, without which there is neither rhythmic ensemble music nor rhythmic dancing to music. And fifth, the universal propensity of humans to gather occasionally to sing and dance together in a group, which suggests a motivational basis endemic to our biology. We end by considering the evolutionary context within which these constraints had to be met in the genesis of human musicality.

20.
Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been investigated previously, though with limited sets of colours. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R². To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour associations with the perceived emotion in the music. The mixed-method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
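The study's model criterion, cross-validated R², can be illustrated in miniature. The sketch below is not the authors' partial least squares pipeline: it uses a single invented audio feature and ordinary least squares with leave-one-out cross-validation, purely to show how a cross-validated R² is computed (all variable names and data are ours):

```python
def fit_line(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loo_cv_r2(xs, ys):
    """Cross-validated R^2: each point is predicted by a model fit on the others."""
    preds = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a * xs[i] + b)
    my = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Invented toy data: one audio feature vs one colour patch parameter (e.g. lightness)
feature = [0.1, 0.3, 0.4, 0.6, 0.8, 0.9]
lightness = [21, 35, 45, 59, 77, 83]
r2 = loo_cv_r2(feature, lightness)
```

Because each prediction comes from a model that never saw the point it predicts, cross-validated R² guards against the overfitting that an in-sample R² would hide, which is why the abstract reports its 60-75% figures this way.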


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.), 京ICP备09084417号