Similar Articles
20 similar articles found.
1.
Vertebrate sensory systems are generally based on bilaterally symmetrical sense organs. It is evident, nevertheless, that birds preferentially use either their left or right eye for viewing novel or familiar stimuli [1], and perform visual discrimination tasks under monocular viewing conditions better with one eye than with the other [2] [3]. Because of the nearly complete contralateral decussation of the optic nerves in birds [4], it has been assumed that this division of labour is due solely to cerebral hemispheric specialisation, generated as a result of uneven photostimulation of the eyes of the developing embryo during the last three or four days before hatching [5] [6]. Here, however, we present evidence that in the European starling, Sturnus vulgaris, even the retinae are morphologically asymmetrical in terms of photoreceptor distribution. This is the first evidence for such asymmetry in any bird and suggests that retinal photoreceptor composition should be assessed during studies involving the lateralisation of visually mediated behaviours.

2.
Bees were trained to discriminate visual patterns in five experiments. The rewarded pattern (S+) was a 50-mm black disc in all experiments; the unrewarded pattern (S–) was varied. Subsequently bees were given a choice between different stimuli in order to discover what bees learnt about five attributes of the training stimuli. The attributes tested were size, contrast, colour, ‘compactness’ vs. ‘dissectedness’ (tests with ring-patterns), and presence or absence of acute points (tests with discs, squares, triangles and stars). The significance of these attributes varied with the particular unrewarded pattern (S–) used in training (Figs. 1, 2). This is interpreted as a modification of the bee's selective attention to certain features during training. The results also indicate a difference in the salience of attributes. Differences in size or outline (presence of acute points) only influenced the bee's preference after a training that specifically required this distinction, while differences in contrast, colour and dissectedness were also significant when the training stimuli did not differ in that respect.

3.
Visual Ecology and Perception of Coloration Patterns by Domestic Chicks
This article suggests how we might understand the way potential predators see coloration patterns used in aposematism and visual mimicry. We start by briefly reviewing work on the evolutionary function of eyes and neural mechanisms of vision. Often mechanisms used for achromatic vision are accurately modeled as adaptations for detection and recognition of the generality of optical stimuli, rather than specific stimuli such as biological signals. Colour vision is less well understood, but for photoreceptor spectral sensitivities of birds and hymenopterans there is no evidence for adaptations to species-specific stimuli, such as those of food or mates. Turning to experimental work, we investigate how achromatic and chromatic stimuli are used for object recognition by foraging domestic chicks (Gallus gallus). Chicks use chromatic and achromatic signals in different ways: discrimination of large targets uses (chromatic) colour differences, and chicks remember chromatic signals accurately. However, detection of small targets and discrimination of visual textures require achromatic contrast. The different roles of chromatic and achromatic information probably reflect their utility for object recognition in nature. Achromatic (intensity) variation exceeds chromatic variation, and hence is more informative about change in reflectance (for example, object borders), while chromatic signals yield more information about surface reflectance (object colour) under variable illumination.

4.
Over the 20-min period following exposure of young chicks to a flashing light as an imprinting stimulus there is an increased incorporation of [14C]leucine into an acidic (tubulin-enriched) protein fraction of the anterior dorsal forebrain in birds which have learnt the characteristics of the stimulus, as compared with either birds which have been exposed to an imprinting stimulus but learn poorly, or chicks kept in the dark. This brain region has been implicated in several studies as the locus for a number of biochemical modulations that accompany learning. The amount of [14C]leucine incorporated does not seem to be determined by precursor pool availability; it does, however, correlate with a well-validated measure of the extent to which birds have learnt to recognise the characteristics of the stimulus, as shown by a two-choice discrimination test. There is no change in the total content of tubulin dimer as assayed by colchicine binding under these conditions. Additionally, in birds which show evidence of learning, the binding of quinuclidinyl benzilate, an irreversible muscarinic ligand, is altered in both the posterior dorsal forebrain and midbrain regions. None of these effects could be simply the result of visual stimulation. The meaning of these changes is discussed.

5.
Conspicuousness is an important feature of warning coloration. One hypothesis for its function is that it increases signal efficacy by facilitating avoidance learning. An alternative, based on the handicap hypothesis, suggests that the degree of conspicuousness holds information directly about the quality of the prey, and that predators associate and learn about the conspicuousness of the coloration, and not the actual colour pattern. We studied the relative importance of signal contrast and the colours of signals for predator attention during discrimination. We used young chicks, Gallus gallus domesticus, as predators and small blue or red paper cones on either matching or contrasting paper backgrounds as stimuli associated with palatable or unpalatable chick crumbs. In four treatment groups, birds could use either cone and/or background colour, cone colour only, background colour only or cone-to-background contrast as cues for discrimination. Only birds in the contrast treatment failed to learn their discrimination task. Birds that had a choice between cone and background colour as cues used the cone colour and they learned the task faster than did birds that had to use background colour as a cue. The results suggest that birds primarily attend to the colours of signals and disregard contrast in discrimination tasks; they thus fail to support a handicap function of conspicuous aposematic coloration.

6.
When attention is directed to a region of space, visual resolution at that location flexibly adapts, becoming sharper to resolve fine-scale details or coarser to reflect large-scale texture and surface properties [1]. By what mechanism does attention improve spatial resolution? An improved signal-to-noise ratio (SNR) at the attended location contributes [2], because of retinotopically specific signal gain [3], [4], [5], [6], [7], [8], [9] and [10]. Additionally, attention could sharpen position tuning at the neural population level, so that adjacent objects activate more distinct regions of the visual cortex. A dual mechanism involving both signal gain and sharpened position tuning would be highly efficient at improving visual resolution, but there is no direct evidence that attention can narrow the position tuning of population responses. Here, we compared the spatial spread of the fMRI BOLD response for attended versus ignored stimuli. The activity produced by adjacent stimuli overlapped less when subjects were attending to their locations versus attending elsewhere, despite a stronger peak response with attention. Our results show that even as early as primary visual cortex (V1), spatially directed attention narrows the tuning of population-coded position representations.
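A toy numerical sketch may help make the dual mechanism described above concrete: two Gaussian population response profiles stand in for the activity produced by adjacent stimuli, and attention is modelled, purely as an assumption for illustration, as a higher peak combined with a narrower width. The overlap between the two profiles then shrinks despite the stronger peak, which is the signature reported in the abstract. All parameter values below are hypothetical.

```python
import numpy as np

def population_response(x, center, peak, sigma):
    """Toy Gaussian profile of a population-coded position representation."""
    return peak * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

x = np.linspace(-4.0, 4.0, 801)      # cortical position axis (arbitrary units)
centers = (-1.0, 1.0)                # two adjacent stimuli

# Hypothetical parameters: attention raises the peak but narrows the tuning.
ignored = [population_response(x, c, peak=1.0, sigma=1.0) for c in centers]
attended = [population_response(x, c, peak=1.3, sigma=0.7) for c in centers]

def overlap(a, b):
    """Shared activation of two response profiles: area under their pointwise minimum."""
    return np.trapz(np.minimum(a, b), x)

print(f"overlap when ignored:  {overlap(*ignored):.3f}")
print(f"overlap when attended: {overlap(*attended):.3f}")  # smaller despite higher peaks
```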

7.
Using a simultaneous discrimination procedure it was shown that pigeons were capable of learning to discriminate 100 different black and white visual patterns from a further 625 similar stimuli, where responses to the former were rewarded and responses to the latter were not rewarded. Tests in which novel stimuli replaced either the rewarded or nonrewarded stimuli showed that the pigeons had not only learned about the 100 positive stimuli but also about the 625 negative stimuli. The fact that novel stimuli enhanced discrimination performance when they replaced the many negative stimuli indicated that the pigeons had categorized the stimuli into two classes, familiar and less familiar. Long-term retention was examined after a 6-month interval. At first retention seemed poor, but a recognition test performed after the subjects had been retrained with a subset of the stimuli after an interval of 7 months confirmed that pigeons are capable of retaining in memory several hundred visual items over an extended period. It is proposed that the initial retrieval weakness was due to a forgetting of the contingencies between stimulus categories and response outcomes. Further tests involving variously modified stimuli indicated that while stimulus size variations had a negative effect on performance, orientation changes did not interfere with recognition, supporting the view that small visual stimuli are memorized by pigeons largely free of orientation labels. The experiment generally confirms that pigeons have the capacity to store information about a large number of visual stimuli over long periods of time.

8.
Humans are able to efficiently learn and remember complex visual patterns after only a few seconds of exposure [1]. At a cellular level, such learning is thought to involve changes in synaptic efficacy, which have been linked to the precise timing of action potentials relative to synaptic inputs [2-4]. Previous experiments have tapped into the timing of neural spiking events by using repeated asynchronous presentation of visual stimuli to induce changes in both the tuning properties of visual neurons and the perception of simple stimulus attributes [5, 6]. Here we used a similar approach to investigate potential mechanisms underlying the perceptual learning of face identity, a high-level stimulus property based on the spatial configuration of local features. Periods of stimulus pairing induced a systematic bias in face-identity perception in a manner consistent with the predictions of spike timing-dependent plasticity. The perceptual shifts induced for face identity were tolerant to a 2-fold change in stimulus size, suggesting that they reflected neuronal changes in nonretinotopic areas, and were more than twice as strong as the perceptual shifts induced for low-level visual features. These results support the idea that spike timing-dependent plasticity can rapidly adjust the neural encoding of high-level stimulus attributes [7-11].
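For readers unfamiliar with spike timing-dependent plasticity, the minimal sketch below shows the canonical asymmetric STDP window that the abstract appeals to. It illustrates the timing rule only, not the psychophysical pairing procedure used in the study, and the amplitudes and time constant are assumed textbook-style values rather than parameters from the paper.

```python
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Canonical asymmetric STDP window (parameter values are illustrative).

    dt_ms = t_post - t_pre. Pre-before-post spiking (dt > 0) potentiates the
    synapse, post-before-pre spiking (dt < 0) depresses it, and the effect
    decays exponentially with the absolute timing difference.
    """
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))

# Pairing at +10 ms strengthens the synapse, pairing at -10 ms weakens it.
print(stdp_weight_change([10.0, -10.0]))   # approx [ 0.0061 -0.0073]
```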

9.
Neural correlates of social target value in macaque parietal cortex
Animals as diverse as arthropods [1], fish [2], reptiles [3], birds [4], and mammals, including primates [5], depend on visually acquired information about conspecifics for survival and reproduction. For example, mate localization often relies on vision [6], and visual cues frequently advertise sexual receptivity or phenotypic quality [5]. Moreover, recognizing previously encountered competitors or individuals with preestablished territories [7] or dominance status [1, 5] can eliminate the need for confrontation and the associated energetic expense and risk for injury. Furthermore, primates, including humans, tend to look toward conspecifics and objects of their attention [8, 9], and male monkeys will forego juice rewards to view images of high-ranking males and female genitalia [10]. Despite these observations, we know little about how the brain evaluates social information or uses this appraisal to guide behavior. Here, we show that neurons in the primate lateral intraparietal area (LIP), a cortical area previously linked to attention and saccade planning [11, 12], signal the value of social information when this assessment influences orienting decisions. In contrast, social expectations had no impact on LIP neuron activity when monkeys were not required to make a choice. These results demonstrate for the first time that parietal cortex carries abstract, modality-independent target value signals that inform the choice of where to look.

10.
The navigational strategies that are used by foraging ants and bees to reach a goal are similar to those of birds and mammals. Species from all these groups use path integration and memories of visual landmarks to navigate through familiar terrain. Insects have far fewer neural resources than vertebrates, so data from insects might be useful in revealing the essential components of efficient navigation. Recent work on ants and bees has uncovered a major role for associative links between long-term memories. We emphasize the roles of these associations in the reliable recognition of visual landmarks and the reliable performance of learnt routes. It is unknown whether such associations also provide insects with a map-like representation of familiar terrain. We suggest, however, that landmarks act primarily as signposts that tell insects what particular action they need to perform, rather than telling them where they are.
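Path integration itself is commonly formalised as a running sum of displacement vectors that yields a "home vector". The sketch below is a minimal illustration of that idea under the assumption of error-free compass and odometry information; it is not a model taken from the reviewed work, and the example route is hypothetical.

```python
import math

def integrate_path(steps):
    """Path integration as a running sum of displacement vectors.

    `steps` is a sequence of (heading_radians, distance) legs of the outbound
    route. Returns the home vector: the heading and distance pointing back to
    the starting point, assuming perfect heading and distance estimates.
    """
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    home_heading = math.atan2(-y, -x)     # direction back toward the start
    home_distance = math.hypot(x, y)
    return home_heading, home_distance

# A hypothetical three-leg foraging trip: 5 units east, 3 north, 2 west.
print(integrate_path([(0.0, 5.0), (math.pi / 2, 3.0), (math.pi, 2.0)]))
```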

11.
We review data from both ethology and psychology about generalization, that is, how animals respond to sets of stimuli including familiar and novel stimuli. Our main conclusion is that patterns of generalization are largely independent of systematic group (evidence is available for insects, fish, amphibians, reptiles, birds and mammals, including humans), behavioural context (feeding, drinking, courting, etc.), sensory modality (light, sound, etc.) and of whether the reaction to stimuli is learned or genetically inherited. These universalities suggest that generalization originates from general properties of nervous systems, and that evolutionary strategies to cope with novelty and variability in stimulation may be limited. Two major shapes of the generalization gradient can be identified, corresponding to two types of stimulus dimensions. When changes in stimulation involve a rearrangement of a constant amount of stimulation on the sense organs, the generalization gradient peaks close to familiar stimuli, and peak responding is not much higher than responding to familiar stimuli. Contrary to what is often claimed, such gradients are better described by Gaussian curves than by exponentials. When the stimulus dimension involves a variation in the intensity of stimulation, the gradient is often monotonic, and responding to some novel stimuli is considerably stronger than responding to familiar stimuli. Lastly, when several or many familiar stimuli are close to each other, predictable biases in responding occur along all studied dimensions. We do not find differences between biases referred to as peak shift and biases referred to as supernormal stimulation. We conclude by discussing theoretical issues.
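The two gradient shapes contrasted in this abstract can be written down directly. The sketch below generates a Gaussian and an exponential gradient around a familiar stimulus S+, plus a monotonic intensity gradient, so that the claimed distinction between the shapes is concrete. The functional forms, exponents and stimulus values are illustrative assumptions, not fits to the reviewed data.

```python
import numpy as np

s_plus = 0.0                           # familiar (trained) stimulus value
d = np.linspace(-3.0, 3.0, 7)          # positions along the stimulus dimension

# Rearrangement-type dimension: the gradient peaks near S+.
gaussian = np.exp(-((d - s_plus) ** 2) / 2.0)    # rounded near the peak
exponential = np.exp(-np.abs(d - s_plus))        # sharp cusp at the peak

# Intensity-type dimension: responding can keep rising beyond the
# familiar intensity (here arbitrarily set to 1.0).
intensity = np.linspace(0.0, 2.0, 5)
monotonic = intensity ** 1.5

print(np.round(gaussian, 2))     # [0.01 0.14 0.61 1.   0.61 0.14 0.01]
print(np.round(exponential, 2))  # [0.05 0.14 0.37 1.   0.37 0.14 0.05]
print(np.round(monotonic, 2))    # keeps increasing past the trained value
```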

12.
Social animals learn to perceive their social environment, and their social skills and preferences are thought to emerge from greater exposure to and hence familiarity with some social signals rather than others. Familiarity appears to be tightly linked to multisensory integration. The ability to differentiate and categorize familiar and unfamiliar individuals and to build a multisensory representation of known individuals emerges from successive social interactions, in particular with adult, experienced models. In different species, adults have been shown to shape the social behavior of young by promoting selective attention to multisensory cues. This raises the question of what representation of known conspecifics adult-deprived animals may build. Here we show that starlings raised with no experience with adults fail to develop a multisensory representation of familiar and unfamiliar starlings. Electrophysiological recordings of neuronal activity throughout the primary auditory area of these birds, while they were exposed to audio-only or audiovisual familiar and unfamiliar cues, showed that visual stimuli did, as in wild-caught starlings, modulate auditory responses but that, unlike what was observed in wild-caught birds, this modulation was not influenced by familiarity. Thus, adult-deprived starlings seem to fail to discriminate between familiar and unfamiliar individuals. This suggests that adults may shape the multisensory representation of known individuals in the brain, possibly by focusing the young's attention on relevant, multisensory cues. Multisensory stimulation by experienced, adult models may thus be ubiquitously important for the development of social skills (and of the neural properties underlying such skills) in a variety of species.

13.
Alarm calls are vocalisations that animals give in response to predators, and they function mainly to alert conspecifics to danger. Studies show that numerous species eavesdrop on heterospecific calls to gain information about predator presence. Responding to heterospecific calls may be a learned or innate response, determined by whether the response occurs with or without prior exposure to the call. In this study, we investigated the presence of eavesdropping behaviour in zebra finches Taeniopygia guttata. This species is not known to possess a distinct alarm call to warn adult conspecifics of a threat, and could be relying on alarm calls of nearby heterospecifics for predator information. We used a playback experiment to expose captive zebra finches to three heterospecific sounds: an unfamiliar alarm call (from the chestnut-rumped thornbill Acanthiza uropygialis), a familiar alarm call, and a familiar control (both from the noisy miner Manorina melanocephala). These calls were chosen to test whether the birds had learnt to distinguish between the functions of the two familiar calls, and whether the acoustic properties of the unfamiliar alarm indicated the presence of a threat to the finches. Our results showed that in response to the thornbill alarm, the birds reduced the rate of production of short calls. However, this decrease was also seen when considering both short and distance calls in response to the control sound. An increase in latency to call was also seen after the control stimulus when compared to the miner alarm. The time spent scanning increased in response to all three stimuli, but did not differ between stimuli. There were no significant differences when considering the stimulus-by-time interaction for any of the three vigilance measures. Overall, no strong evidence was found to indicate that the captive zebra finches were responding to the heterospecific alarm stimuli with anti-predator behaviour.

14.
Polarisation sensitivity (PS) - the ability to detect the orientation of polarised light - occurs in a wide variety of invertebrates [1] [2] and vertebrates [3] [4] [5], many of which are marine species [1]. Of these, the crustacea are particularly well documented in terms of their structural [6] and neural [7] [8] adaptations for PS. The few behavioural studies conducted on crustaceans demonstrate orientation to, or local navigation with, polarised sky patterns [9]. Aside from this, the function of PS in crustaceans, and indeed in most animals, remains obscure. Where PS can be shown to allow perception of polarised light as a 'special sensory quality' [1], separate from intensity or colour, it has been termed polarisation vision (PV). Here, within the remarkable visual system of the stomatopod crustaceans (mantis shrimps) [10], we provide the first demonstration of PV in the crustacea and the first convincing evidence for learning the orientation of polarised light in any animal. Using new polarimetric [11] and photographic methods to examine stomatopods, we found striking patterns of polarisation on their antennae and telson, suggesting that one function of PV in stomatopods may be communication [12]. PV may also be used for tasks such as navigation [5] [9] [13], location of reflective water surfaces [14] and contrast enhancement [1] [15] [16] [17] [18]. It is possible that the stomatopod PV system also contributes to some of these functions.

15.
Brain areas exist that appear to be specialized for the coding of visual space surrounding the body (peripersonal space). In marked contrast to neurons in earlier visual areas, cells have been reported in parietal and frontal lobes that effectively respond only when visual stimuli are located in spatial proximity to a particular body part (for example, face, arm or hand) [1-4]. Despite several single-cell studies, the representation of near visual space has scarcely been investigated in humans. Here we focus on the neuropsychological phenomenon of visual extinction following unilateral brain damage. Patients with this disorder may respond well to a single stimulus in either visual field; however, when two stimuli are presented concurrently, the contralesional stimulus is disregarded or poorly identified. Extinction is commonly thought to reflect a pathological bias in selective vision favoring the ipsilesional side under competitive conditions, as a result of the unilateral brain lesion [5-7]. We examined a parietally damaged patient (D.P.) to determine whether visual extinction is modulated by the position of the hands in peripersonal space. We measured the severity of visual extinction in a task which held constant visual and spatial information about stimuli, while varying the distance between hands and stimuli. We found that selection in the affected visual field was remarkably more efficient when visual events were presented in the space near the contralesional finger than far from it. However, the amelioration of extinction dissolved when hands were covered from view, implying that the effect of hand position was not mediated purely through proprioception. These findings illustrate the importance of the spatial relationship between hand position and object location for the internal construction of visual peripersonal space in humans.

16.
Visual neuroscience has long sought to determine the extent to which stimulus-evoked activity in visual cortex depends on attention and awareness. Some influential theories of consciousness maintain that the allocation of attention is restricted to conscious representations [1, 2]. However, in the load theory of attention [3], competition between task-relevant and task-irrelevant stimuli for limited-capacity attention does not depend on conscious perception of the irrelevant stimuli. The critical test is whether the level of attentional load in a relevant task would determine unconscious neural processing of invisible stimuli. Human participants were scanned with high-field fMRI while they performed a foveal task of low or high attentional load. Irrelevant, invisible monocular stimuli were simultaneously presented peripherally and were continuously suppressed by a flashing mask in the other eye [4]. Attentional load in the foveal task strongly modulated retinotopic activity evoked in primary visual cortex (V1) by the invisible stimuli. Contrary to traditional views [1, 2, 5, 6], we found that availability of attentional capacity determines neural representations related to unconscious processing of continuously suppressed stimuli in human primary visual cortex. Spillover of attention to cortical representations of invisible stimuli (under low load) cannot be a sufficient condition for their awareness.

17.
Quantitative modeling of human brain activity can provide crucial insights about cortical representations [1, 2] and can form the basis for brain decoding devices [3-5]. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity [6-8]. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow [9], so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy [10, 11] encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder [8] by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.
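As a very rough sketch of the kind of voxelwise encoding model this abstract describes, the code below regresses a single voxel's BOLD time course onto lagged stimulus features using ridge regression. The "motion-energy" features here are random placeholders, the fixed lag is a crude stand-in for a hemodynamic component, and nothing in the sketch reproduces the authors' actual pipeline or the Bayesian decoding step; it only illustrates the general fit-per-voxel idea.

```python
import numpy as np

def fit_voxel_encoding_model(features, bold, lag_frames=2, alpha=10.0):
    """Ridge-regress one voxel's BOLD time course onto lagged stimulus features.

    `features`: (time, n_features) array of motion-energy-like channels.
    `bold`: (time,) BOLD signal for a single voxel.
    The hemodynamic delay is crudely approximated by a fixed shift of
    `lag_frames`; the first shifted samples are discarded before fitting.
    """
    X = np.roll(features, lag_frames, axis=0)[lag_frames:]
    y = bold[lag_frames:]
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n), X.T @ y)

# Toy data: 200 time points, 50 placeholder feature channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((200, 50))
true_w = rng.standard_normal(50)
bold = np.roll(feats, 2, axis=0) @ true_w + 0.1 * rng.standard_normal(200)

w_hat = fit_voxel_encoding_model(feats, bold)
print(np.corrcoef(true_w, w_hat)[0, 1])   # close to 1.0 on this toy example
```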

18.
The diurnal hummingbird hawkmoth Macroglossum stellatarum can learn the achromatic (intensity-related) and the chromatic (wavelength-related) aspect of a spectral colour. Free-flying moths learn to discriminate two colours differing in the chromatic aspect of colour fast and with high precision. In contrast, they learn the discrimination of two stimuli differing in the achromatic aspect more slowly and less reliably. When trained to use the chromatic aspect, they disregard the achromatic aspect, and when trained to use the achromatic aspect, they disregard the chromatic aspect, at least to some degree. In a conflicting situation, hummingbird hawkmoths clearly rely on the chromatic aspect of colour. Generally, the moths pay attention to the most reliable cue that allows them to discriminate colours in the learning situation. This is usually the chromatic aspect of the colour, but they can learn to attend to the achromatic aspect instead. There is no evidence for relative colour learning, i.e. moths do not learn to choose the longer or shorter of two wavelengths, but it is possible that they learn to choose the darker or brighter shade of a colour, and thereby its relative intensity.

19.
In a complex choice reaction time experiment, patterned stimuli without luminance change were presented, and pattern-specific visual evoked potentials to lower half-field stimulation were recorded. Two experimental conditions were used. The first was the between-field selection, where square patterns were presented in either the lower or the upper half of the visual field. In a given stimulus run one of the half-fields was task-relevant, and the subjects' task was to press a microswitch to stimuli of higher duration value (GO stimuli), while they had to ignore shorter ones, i.e. stimuli of lower apparent spatial contrast (NOGO stimuli). They had to ignore the stimuli appearing in the irrelevant half-field (IRR stimuli). In order to ensure proper fixation, the subjects had to press another microswitch at the onset of a dim light at the fixation point (CRT stimuli). Our second experimental condition was the within-field selection, where the GO, NOGO, and IRR stimuli appeared in the lower half of the visual field. GO and NOGO were square patterns while IRR stimuli were constructed of circles, or vice versa. (The CRT stimuli were the same as in the previous condition.) Three pattern-specific visual evoked potential components were identified, i.e. CI (70 ms latency), CII (100 ms latency), and CIII (170 ms latency). There were marked selective attention effects on both the CI-CII and CII-CIII peak-to-peak amplitudes. In both experimental conditions, responses with the highest amplitude were evoked by the GO type of stimuli, while the IRR stimuli evoked the smallest responses. According to these results, attention effects on the pattern-specific visual evoked potentials in the first 200 ms cannot be attributed to a simple stimulus set kind of selection.

20.
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training.
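To make the notion of "rules at different levels of computational complexity" concrete, the sketch below generates strings from two textbook pattern rules: (AB)^n, which a finite-state rule can capture, and A^nB^n, which requires matching element counts and lies above the finite-state level. The letters stand in for visual pattern elements; the specific grammars used in the study are not given here, so this is only an illustrative assumption.

```python
import itertools

def ab_n(n):
    """Finite-state pattern: (AB)^n, e.g. 'ABABAB' for n = 3."""
    return "AB" * n

def a_n_b_n(n):
    """Context-free pattern: A^n B^n, e.g. 'AAABBB' for n = 3.
    Checking that the two counts match requires more than a finite-state rule."""
    return "A" * n + "B" * n

def violates_a_n_b_n(s):
    """True if a string breaks the A^n B^n rule (wrong order or unequal counts)."""
    groups = ["".join(g) for _, g in itertools.groupby(s)]
    return not (len(groups) == 2
                and set(groups[0]) == {"A"}
                and set(groups[1]) == {"B"}
                and len(groups[0]) == len(groups[1]))

print(ab_n(3), a_n_b_n(3))         # ABABAB AAABBB
print(violates_a_n_b_n("AABBB"))   # True: element counts do not match
```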
