Similar Articles
20 similar articles found (search time: 31 ms)
1.

Background

The ability to detect and integrate associations between unrelated items that are close in space and time is a key feature of human learning and memory. Learning sequential associations between non-adjacent visual stimuli (higher-order visuospatial dependencies) can occur either with or without awareness (explicit vs. implicit learning) of the products of learning. Existing behavioural and neurocognitive studies of explicit and implicit sequence learning, however, are based on conscious access to the sequence of target locations and, typically, on conditions where the locations for orienting, or motor, responses coincide with the locations of the target sequence.

Methodology/Principal Findings

Dichoptic stimuli were presented in a novel sequence learning task using a mirror stereoscope to mask the eye-of-origin of visual input from conscious awareness. We demonstrate that conscious access to the sequence of target locations, and responses that coincide with the structure of the target sequence, are dispensable features when learning higher-order visuospatial associations. Sequence knowledge was expressed in the ability of participants to identify the trained higher-order visuospatial sequence on a recognition test, even though the trained and untrained recognition sequences were identical when viewed at a conscious binocular level and differed only at the level of the masked sequential associations.

Conclusions/Significance

These results demonstrate that unconscious processing can support perceptual learning of higher-order sequential associations through interocular integration of retinotopic codes stemming from monocular eye-of-origin information. Furthermore, unlike other forms of perceptual associative learning, visuospatial attention did not need to be directed to the locations of the target sequence. More generally, the results challenge neural models of learning to account for a previously unknown capacity of the human visual system to support the detection, learning and recognition of higher-order sequential associations under conditions where observers can neither see the target sequence nor perform responses that coincide with its structure.

2.
Ganel T, Freud E, Chajut E, Algom D. PLoS ONE. 2012;7(4):e36253

Background

Human resolution for object size is typically determined by psychophysical methods based on conscious perception. Grasping of the same objects, in contrast, may rely less on conscious processing: it has been suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual systems.

Methodology/Principal Findings

In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.

Conclusions/Significance

We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.

3.

Background

‘Self-objectification’ is the tendency to experience one's body principally as an object, to be evaluated for its appearance rather than for its effectiveness. Within objectification theory, it has been proposed that self-objectification accounts for the poorer interoceptive awareness observed in women, as measured by heartbeat perception. Our study is, we believe, the first to test this relationship directly.

Methodology/Principal Findings

Using a well-validated and reliable heartbeat perception task, we measured interoceptive awareness in women and compared this with their scores on the Self-Objectification Questionnaire, the Self-Consciousness Scale and the Body Consciousness Questionnaire. Interoceptive awareness was negatively correlated with self-objectification. Interoceptive awareness, public body consciousness and private body consciousness together explained 31% of the variance in self-objectification. However, private body consciousness was not significantly correlated with interoceptive awareness, which may explain the many nonsignificant results in self-objectification studies that have used private body consciousness as a measure of body awareness.
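The "31% of the variance" figure above is the R² of a multiple regression of self-objectification on the three predictors. As a hedged illustration of how such a figure is obtained, here is a minimal ordinary-least-squares sketch in Python; the data below are synthetic and the variable names and effect sizes are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Synthetic predictors standing in for the three measures
# (interoceptive awareness, public and private body consciousness).
interoception = rng.normal(size=n)
public_bc = rng.normal(size=n)
private_bc = rng.normal(size=n)

# Synthetic outcome: a self-objectification score assumed to relate
# negatively to interoceptive awareness, plus noise (assumption mine).
self_obj = -0.5 * interoception + 0.3 * public_bc + rng.normal(size=n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), interoception, public_bc, private_bc])
beta, *_ = np.linalg.lstsq(X, self_obj, rcond=None)
pred = X @ beta

# R^2: proportion of outcome variance explained by the model.
ss_res = np.sum((self_obj - pred) ** 2)
ss_tot = np.sum((self_obj - self_obj.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

With real questionnaire and heartbeat-perception scores in place of the synthetic columns, the same computation yields the reported variance-explained statistic.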

Conclusions/Significance

We propose interoceptive awareness, assessed by heartbeat perception, as a measure of body awareness in self-objectification studies. Our findings have implications for those clinical conditions in women that are characterised by self-objectification and low interoceptive awareness, such as eating disorders.

4.

Background

Our expectations of an object's heaviness drive not only our fingertip forces but also our perception of heaviness. This effect is highlighted by the classic size-weight illusion (SWI), in which different-sized objects of identical mass feel as though they have different weights. Here, we examined whether these expectations are sufficient to induce the SWI in a single wooden cube lifted without visual feedback, by varying the size of the object seen prior to the lift.

Methodology/Principal Findings

Participants, who believed that they were lifting the same object that they had just seen, reported that the weight of the single, standard-sized cube that they lifted on every trial varied as a function of the size of object they had just seen. Seeing the small object before the lift made the cube feel heavier than it did after seeing the large object. These expectations also affected the fingertip forces that were used to lift the object when vision was not permitted. The expectation-driven errors made in early trials were not corrected with repeated lifting, and participants failed to adapt their grip and load forces from the expected weight to the object''s actual mass in the same way that they could when lifting with vision.

Conclusions/Significance

Vision appears to be crucial for the detection, and subsequent correction, of the ostensibly non-visual grip and load force errors that are a common feature of this type of object interaction. Expectations of heaviness are not only powerful enough to alter the perception of a single object's weight, but also continually drive the forces we use to lift the object when vision is unavailable.

5.

Background

Research on multisensory integration during natural tasks such as reach-to-grasp is still in its infancy. Crossmodal links between vision, proprioception and audition have been identified, but how olfaction contributes to the planning and control of reach-to-grasp movements has not been decisively shown. We used kinematics to test explicitly the influence of olfactory stimuli on reach-to-grasp movements.

Methodology/Principal Findings

Subjects were asked to reach towards and grasp either a small visual target (requiring a precision grip, the opposition of index finger and thumb) or a large visual target (requiring a power grip, the flexion of all digits around the object), in the absence or presence of an odour evoking either a small object (which, if grasped, would require a precision grip) or a large object (which would require a whole-hand grasp). When the type of grasp evoked by the odour did not coincide with that required by the visual target, interference effects were evident in the kinematics of hand shaping and the level of synergies amongst the fingers decreased. When the visual target and the object evoked by the odour required the same type of grasp, facilitation emerged and the intrinsic relations amongst individual fingers were maintained.

Conclusions/Significance

This study demonstrates that olfactory stimuli carry information detailed enough to elicit the planning of a reach-to-grasp movement suited to interacting with the evoked object. The findings offer a substantial contribution to the current debate about the multisensory nature of the sensorimotor transformations underlying grasping.

6.

Background

In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, this processing in peripheral vision has been poorly studied.

Methodology/Principal Findings

Using behavioral measures, we explored the human ability to detect fearful and disgusted vs. neutral expressions, and compared it to the ability to discriminate between genders, at eccentricities up to 40°. Responses were faster for the detection of emotion than for gender discrimination. Emotion was detected from fearful faces up to 40° of eccentricity.

Conclusions

Our results demonstrate the human ability to detect facial expressions presented in the far periphery, up to 40° of eccentricity. The increasing advantage of emotion over gender processing with increasing eccentricity may reflect a substantial involvement of the magnocellular visual pathway in facial expression processing. This advantage may indicate that emotion detection, relative to gender identification, is less affected by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those signals and allow a fast behavioural reaction.

7.

Background

Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues.

Methodology/Principal Findings

Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery.
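The sensitivity measure d′ used above comes from signal detection theory: it is the z-transformed hit rate minus the z-transformed false-alarm rate, so it indexes true detectability rather than response bias. A minimal sketch (the hit and false-alarm rates below are invented for illustration, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: a higher hit rate at the same false-alarm rate
# yields a higher d' -- a genuine sensitivity gain, not a criterion shift.
cued = d_prime(0.80, 0.20)
uncued = d_prime(0.60, 0.20)
```

An increase in d′ for validly cued letters, as reported above, therefore cannot be explained by participants simply becoming more willing to report "present".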

Conclusions/Significance

Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of a heard word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.

8.

Background

When viewing complex scenes, East Asians attend more to contexts whereas Westerners attend more to objects, reflecting cultural differences in holistic and analytic visual processing styles respectively. This eye-tracking study investigated more specific mechanisms and the robustness of these cultural biases in visual processing when salient changes in the objects and backgrounds occur in complex pictures.

Methodology/Principal Findings

Chinese Singaporean (East Asian) and Caucasian US (Western) participants passively viewed pictures containing selectively changing objects and background scenes that strongly captured participants' attention in a data-driven manner. We found that although participants from both groups responded to object changes in the pictures, there was still evidence for cultural divergence in eye-movements. The number of object fixations in the US participants was more affected by object change than in the Singapore participants. Additionally, despite the picture manipulations, US participants consistently maintained longer durations for both object and background fixations, with eye-movements that generally remained within the focal objects. In contrast, Singapore participants had shorter fixation durations with eye-movements that alternated more between objects and backgrounds.

Conclusions/Significance

The results demonstrate a robust cultural bias in visual processing even when external stimuli draw attention in a manner opposite to the cultural bias. These findings also extend previous studies by revealing more specific, but consistent, effects of culture on different aspects of visual attention as measured by fixation duration, number of fixations, and saccades between objects and backgrounds.

9.

Background

Visual neglect is an attentional deficit typically resulting from parietal cortex lesion and sometimes frontal lesion. Patients fail to attend to objects and events in the visual hemifield contralateral to their lesion during visual search.

Methodology/Principal Findings

The aim of this work was to examine the effects of parietal and frontal lesion in an existing computational model of visual attention and search and to simulate visual search behaviour under lesion conditions. We find that unilateral parietal lesion in this model leads to symptoms of visual neglect in simulated search scan paths, including an inhibition of return (IOR) deficit, while frontal lesion leads to milder neglect and to more severe deficits in IOR and perseveration in the scan path. During simulations of search under unilateral parietal lesion, the model's extrastriate ventral stream area exhibits lower activity for stimuli in the neglected hemifield compared to that for stimuli in the normally perceived hemifield. This could represent a computational correlate of differences observed in neuroimaging for unconscious versus conscious perception following parietal lesion.

Conclusions/Significance

Our results lead to the prediction, supported by effective-connectivity evidence, that connections between the dorsal and ventral visual streams may be an important factor in explaining perceptual deficits in parietal lesion patients, and conscious perception in general.

10.

Background

When two targets are presented in close temporal proximity amongst a rapid serial visual stream of distractors, a period of disrupted attention and attenuated awareness lasting 200–500 ms follows identification of the first target (T1). This phenomenon is known as the “attentional blink” (AB) and is generally attributed to a failure to consolidate information in visual short-term memory due to depleted or disrupted attentional resources. Previous research has shown that items presented during the AB that fail to reach conscious awareness are still processed to relatively high levels, including the level of meaning. For example, missed word stimuli have been shown to prime later targets that are closely associated words. Although these findings have been interpreted as evidence for semantic processing during the AB, closely associated words (e.g., day-night) may also rely on specific, well-worn, lexical associative links which enhance attention to the relevant target.

Methodology/Principal Findings

We used a measure of semantic distance to create prime-target pairs that are conceptually close, but have low word associations (e.g., wagon and van) and investigated priming from a distractor stimulus presented during the AB to a subsequent target (T2). The stimuli were words (concrete nouns) in Experiment 1 and the corresponding pictures of objects in Experiment 2. In both experiments, report of T2 was facilitated when this item was preceded by a semantically-related distractor.

Conclusions/Significance

This study is the first to show conclusively that conceptual information is extracted from distractor stimuli presented during a period of attenuated awareness and that this information spreads to neighbouring concepts within a semantic network.

11.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre of mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task, or whether the brain estimates object stability solely from visual information.
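The geometric rule in the opening sentences can be made concrete: a block tips over once it is tilted past the angle at which its centre of mass passes over the edge of its support base. A minimal sketch, under my added assumption of a uniform rectangular block (so the centre of mass sits at the geometric centre):

```python
import math

def critical_angle_deg(width, height):
    """Tilt angle (degrees) at which a uniform rectangular block's
    centre of mass passes over the pivot edge of its support base."""
    # The CoM sits at (width/2, height/2) above the pivot corner, so it
    # crosses the vertical through that corner when
    # tan(theta) = (width/2) / (height/2).
    return math.degrees(math.atan2(width / 2, height / 2))

# A cube tips at 45 degrees; a tall, narrow block tips much sooner.
cube = critical_angle_deg(1.0, 1.0)
tall = critical_angle_deg(0.5, 2.0)
```

The experiment asks, in effect, whether observers judge this critical angle relative to true gravity or relative to their multisensory (and, when tilted, biased) estimate of gravity.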

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.

12.
Heath M, Maraj A, Godbolt B, Binsted G. PLoS ONE. 2008;3(10):e3539

Background

Previous work by our group has shown that the scaling of reach trajectories to target size is independent of obligatory awareness of that target property and that “action without awareness” can persist for up to 2000 ms of visual delay. In the present investigation we sought to determine if the ability to scale reaching trajectories to target size following a delay is related to the pre-computing of movement parameters during initial stimulus presentation or the maintenance of a sensory (i.e., visual) representation for on-demand response parameterization.

Methodology/Principal Findings

Participants completed immediate or delayed (i.e., 2000 ms) perceptual reports and reaching responses to different sized targets under non-masked and masked target conditions. For the reaching task, the limb associated with a trial (i.e., left or right) was not specified until the time of response cuing: a manipulation that prevented participants from pre-computing the effector-related parameters of their response. In terms of the immediate and delayed perceptual tasks, target size was accurately reported during non-masked trials; however, for masked trials only a chance level of accuracy was observed. For the immediate and delayed reaching tasks, movement time as well as other temporal kinematic measures (e.g., times to peak acceleration, velocity and deceleration) increased in relation to decreasing target size across non-masked and masked trials.

Conclusions/Significance

Our results demonstrate that speed-accuracy relations were observed regardless of whether participants were aware (i.e., non-masked trials) or unaware (i.e., masked trials) of target size. Moreover, the equivalent scaling of immediate and delayed reaches during masked trials indicates that a persistent sensory-based representation supports the unconscious and metrical scaling of memory-guided reaching.

13.

Background

A vast body of social and cognitive psychology studies in humans reports evidence that external rewards, typically monetary ones, undermine intrinsic motivation. These findings challenge the selfish-rationality assumption at the core of standard economic reasoning. In the present work we investigated whether different modulations of a given monetary reward automatically and unconsciously affect the effort and performance of participants in a game devoid of visual and verbal interaction and of any perspective-taking activity.

Methodology/Principal Findings

Twelve pairs of participants performed a simple motor coordination game while the electromyographic (EMG) activity of the first dorsal interosseus (FDI), the muscle mainly involved in the task, was recorded. The EMG data show a clear effect of alternative reward strategies on subjects' motor behavior. Moreover, participants' stock of relevant past social experiences, measured by a specifically designed questionnaire, was significantly correlated with EMG activity, showing that only low-social-capital subjects responded to monetary incentives consistently with the standard rationality prediction.

Conclusions/Significance

Our findings show that the effect of extrinsic motivations on performance may arise outside social contexts involving complex cognitive processes due to conscious perspective-taking activity. More importantly, the peculiar performance of low-social-capital individuals, in agreement with standard economic reasoning, adds to our knowledge of the circumstances under which the crowding out (or in) of intrinsic motivation is likely to occur. This may help improve the predictive accuracy of economic models and reconcile this puzzling effect of external incentives with economic theory.

14.

Background

While considerable scientific effort has been devoted to studying how birds navigate over long distances, relatively little is known about how targets are detected, obstacles are avoided and smooth landings are orchestrated. Here we examine how visual features in the environment, such as contrasting edges, determine where a bird will land.

Methodology/Principal Findings

Landing in budgerigars (Melopsittacus undulatus) was investigated by training them to fly from a perch to a feeder and video-filming their landings. The feeder was placed on a grey disc that produced a contrasting edge against a uniformly blue background. We found that the birds tended to land primarily at the edge of the disc and walk to the feeder, even though the feeder was in the middle of the disc. This suggests that the birds were using the visual contrast at the boundary of the disc to target their landings. When the grey level of the disc was varied systematically, whilst keeping the blue background constant, there was one intermediate grey level at which the budgerigar's preference for the disc boundary disappeared; the budgerigars then landed randomly all over the test surface. Even though this disc is (for humans) clearly distinguishable from the blue background, it offers very little contrast against the background in the red and green regions of the spectrum.

Conclusions

We conclude that budgerigars use visual edges to target and guide landings. Calculations of photoreceptor excitation reveal that edge detection in landing budgerigars is performed by a color-blind luminance channel that sums the signals from the red and green photoreceptors, or, alternatively, receives input from the red double-cones. This finding has close parallels to vision in honeybees and primates, where edge detection and motion perception are also largely color-blind.
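The proposed colour-blind luminance channel can be sketched numerically: sum the red and green photoreceptor signals and compute the Michelson contrast at the disc/background edge. The excitation values below are invented purely to illustrate how a disc that differs from the background mainly in the blue region can be nearly invisible to such a channel; they are not the paper's measurements:

```python
def michelson(a, b):
    """Michelson contrast between two luminance values."""
    return abs(a - b) / (a + b)

def luminance(r, g, b):
    """Hypothesised colour-blind channel: red + green, ignoring blue."""
    return r + g

# Invented photoreceptor excitations (arbitrary units).
background = luminance(r=0.30, g=0.30, b=0.90)   # blue background
grey_disc = luminance(r=0.31, g=0.29, b=0.40)    # matched intermediate grey

# Chromatically distinct stimuli, yet almost no contrast in the
# R+G channel -- the condition under which edge-guided landing failed.
edge_contrast = michelson(grey_disc, background)
```

At the particular grey level where this R+G contrast vanishes, a bird relying on such a channel would lose the disc edge, matching the random landings reported above.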

15.

Purpose

Baseball places extraordinary demands on visual acuity and eye-hand coordination, especially for batters. The learning objective of this work is to show that traditional vision training, as part of injury prevention or conditioning, can be added to a team's training schedule to improve performance parameters such as batting and hitting.

Methods

All players for the 2010–2011 season underwent the normal preseason physicals and baseline testing that are standard for the University of Cincinnati Athletics Department. Standard vision training exercises were implemented 6 weeks before the start of the season, and this preseason conditioning was followed by an in-season maintenance program of vision training. Results are reported relative to the 2009–2010 season.

Results

The University of Cincinnati team batting average increased from 0.251 in 2010 to 0.285 in 2011, and the slugging percentage increased by 0.033. The slugging percentage of the rest of the Big East fell by 0.082 over the same time frame, a difference of 0.115 (95% confidence interval 0.024–0.206). As with the batting average, the change for the University of Cincinnati is significantly different from the rest of the Big East (p = 0.02). Essentially all batting parameters improved by 10% or more. Similar differences were seen when the analysis was restricted to games within the Big East conference.
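The group comparison above reduces to simple difference-of-changes arithmetic; a quick check in Python, using only the numbers given in the abstract:

```python
# Changes in slugging percentage, 2010 -> 2011 (from the abstract).
uc_slugging_change = 0.033    # University of Cincinnati: increased
big_east_change = -0.082      # rest of the Big East: fell

# Difference between the trained team and the rest of the conference,
# i.e. the 0.115 figure whose 95% CI is reported as (0.024, 0.206).
difference = uc_slugging_change - big_east_change

# Team batting average change over the same period.
batting_change = 0.285 - 0.251
```

Because the confidence interval for the 0.115 difference excludes zero, the Cincinnati improvement is statistically distinguishable from the conference-wide decline.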

Conclusion

Vision training can combine traditional and technological methodologies to train athletes' eyes and improve batting. Applied as part of conditioning or injury prevention, it may improve batting performance in college baseball players. High-performance vision training can be instituted in the preseason and maintained throughout the season to improve batting parameters.

16.

Background

ZK 200775 is an antagonist at the α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) receptor and has attracted attention as a possible neuroprotective agent in cerebral ischemia. Probands receiving the agent in phase I trials reported an alteration of visual perception. In this trial, the effects of ZK 200775 on the visual system were analyzed in detail.

Methodology

In a randomised controlled trial we examined eyes and vision before and after the intravenous administration of two different doses of ZK 200775 or placebo. There were three groups of six probands each: group 1 received 0.03 mg/kg/h of ZK 200775, group 2 received 0.75 mg/kg/h, and the control group received 0.9% sodium chloride solution. Probands were healthy males aged between 57 and 69 years. The following methods were applied: clinical examination, visual acuity, ophthalmoscopy, colour vision, rod absolute threshold, central visual field, pattern-reversal visual evoked potentials (pVEP), and ON-OFF and full-field electroretinogram (ERG).

Principal Findings

No effect of ZK 200775 was seen on eye position or motility, stereopsis, pupillary function or central visual field testing. Visual acuity and dark vision deteriorated significantly in both treated groups. Colour vision was the most markedly impaired. The dark-adapted ERG revealed a reduction of the oscillatory potentials (OPs) and partly of the a- and b-waves, as well as an alteration of b-wave morphology and a nonsignificantly elevated b/a ratio. Cone-ERG modalities showed decreased amplitudes and delayed implicit times. In the ON-OFF ERG the ON-response amplitudes increased, whereas the peak times of the OFF-response were reduced. The pattern VEP exhibited lower amplitudes and prolonged peak times.

Conclusions

The AMPA receptor blockade led to a strong impairment of typical OFF-pathway functions such as colour vision and the cone ERG. On the other hand, the ON-pathway, as measured by dark vision and the scotopic ERG, was affected as well. This further elucidates the interdependence of the two pathways.

Trial Registration

ClinicalTrials.gov NCT00999284

17.
18.

Background

Animal vision spans a great range of complexity, with systems evolving to detect variations in light intensity, distribution, colour, and polarisation. Polarisation vision systems studied to date detect one to four channels of linear polarisation, combining them in opponent pairs to provide intensity-independent operation. Circular polarisation vision has never been seen, and is widely believed to play no part in animal vision.

Methodology/Principal Findings

Polarisation is fully measured via Stokes' parameters, obtained by combined linear and circular polarisation measurements. Optimal polarisation vision is the ability to see Stokes' parameters: here we show that the crustacean Gonodactylus smithii measures the exact components required.
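The Stokes parameters referred to above are computed from six analyser intensity measurements: linear polarisers at 0°, 90°, 45° and 135°, plus right- and left-circular analysers. A minimal sketch using the standard definitions (the intensity values in the example are invented):

```python
import math

def stokes(i0, i90, i45, i135, ircp, ilcp):
    """Stokes parameters from six analyser intensity measurements."""
    s0 = i0 + i90      # total intensity
    s1 = i0 - i90      # horizontal vs. vertical linear component
    s2 = i45 - i135    # diagonal linear component
    s3 = ircp - ilcp   # right vs. left circular component
    return s0, s1, s2, s3

def degree_of_polarisation(s0, s1, s2, s3):
    return math.sqrt(s1**2 + s2**2 + s3**2) / s0

# Fully right-circularly polarised light of unit intensity: equal
# transmission through every linear analyser, all through the RCP one.
s = stokes(i0=0.5, i90=0.5, i45=0.5, i135=0.5, ircp=1.0, ilcp=0.0)
dop = degree_of_polarisation(*s)
```

Linear measurements alone leave S3 undetermined, which is why the combination of linear and circular analysis is required for the "optimal polarisation vision" the abstract describes.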

Conclusions/Significance

This vision provides optimal contrast enhancement and precise determination of polarisation, with no confusion states or neutral points: significant advantages. Linear and circular polarisation each give only partial information about the polarisation of light, but their combination, as we show here, yields optimal polarisation vision. We suggest that linear and circular polarisation vision not be regarded as different modalities, since both are necessary for optimal polarisation vision; their combination renders polarisation vision independent of strongly linearly or circularly polarised features in the animal's environment.

19.

Background

Male and female avian brood parasites are subject to different selection pressures: males compete for mates but provide neither parental care nor territories, and only females locate hosts in which to lay eggs. This sex difference may affect brain architecture in some avian brood parasites, but relatively little is known about their sensory systems and the behaviors used to obtain sensory information. Our goal was to study the visual resolution and visual information-gathering behavior (i.e., scanning) of brown-headed cowbirds.

Methodology/Principal Findings

We measured the density of single cone photoreceptors, associated with chromatic vision, and double cone photoreceptors, associated with motion detection and achromatic vision. We also measured head movement rates, as indicators of visual information-gathering behavior, when birds were exposed to an object. We found that females had significantly lower densities of single and double cones than males, both around the fovea and in the periphery of the retina. Additionally, females had significantly higher head movement rates than males.

Conclusions

Overall, we suggest that female cowbirds have lower chromatic and achromatic visual resolution than males (without sex differences in visual contrast perception). Females might compensate for their lower visual resolution by gazing alternately with both foveae in quicker succession than males, increasing their head movement rates. However, other physiological factors may have influenced the behavioral differences observed. Our results raise relevant questions about the sensory basis of sex differences in behavior. One possibility is that female and male cowbirds differentially allocate costly sensory resources, as a recent study found that females actually have greater auditory resolution than males.

20.

Purpose

To investigate the effect of ageing on visuomotor function and subsequently evaluate the effect of visual field loss on such function in older adults.

Methods

Two experiments were performed: 1) to determine the effect of ageing on visual localisation and subsequent pointing precision, and 2) to determine the effect of visual field loss on these outcome measures. In Experiment 1, we measured visual localisation and pointing precision radially at visual eccentricities of 5, 10 and 15° in 25 older (60–72 years) and 25 younger (20–31 years) adults. In the pointing task, participants were asked to point to a target on a touchscreen at a natural pace that prioritised the accuracy of the touch. In Experiment 2, a subset of these tasks was performed at 15° eccentricity, under both monocular and binocular conditions, by 8 glaucoma patients (55–76 years) and 10 approximately age-matched controls (61–72 years).

Results

Visual localisation and pointing precision were unaffected by ageing (p>0.05) and visual field loss (p>0.05), although movement time was increased in glaucoma (p = 0.01).

Conclusion

Visual localisation and pointing precision for high-contrast stimuli within the central 15° of vision are unaffected by ageing. Even in the presence of significant visual field loss, older adults with glaucoma are able to perform such tasks with reasonable precision, provided the target can be perceived and movement time is not restricted.

