Similar Articles
20 similar articles found (search time: 31 ms)
1.
Ganel T  Freud E  Chajut E  Algom D 《PloS one》2012,7(4):e36253

Background

Human resolution for object size is typically determined by psychophysical methods that are based on conscious perception. In contrast, grasping of the same objects might be less conscious. It is suggested that grasping is mediated by mechanisms other than those mediating conscious perception. In this study, we compared the visual resolution for object size of the visuomotor and the perceptual system.

Methodology/Principal Findings

In Experiment 1, participants discriminated the size of pairs of objects once through perceptual judgments and once by grasping movements toward the objects. Notably, the actual size differences were set below the Just Noticeable Difference (JND). We found that grasping trajectories reflected the actual size differences between the objects regardless of the JND. This pattern was observed even in trials in which the perceptual judgments were erroneous. The results of an additional control experiment showed that these findings were not confounded by task demands. Participants were not aware, therefore, that their size discrimination via grasp was veridical.

Conclusions/Significance

We conclude that human resolution is not fully tapped by perceptually determined thresholds. Grasping likely exhibits greater resolving power than people usually realize.
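For readers unfamiliar with how a JND is obtained, the sketch below illustrates one common psychophysical approach (an illustration, not the authors' procedure): model the proportion of "larger" responses as a cumulative Gaussian of the physical size difference and take the difference at which responses reach 75% as the JND. The `sigma` value is an arbitrary placeholder, not a figure from the study.

```python
import math

def p_larger(delta, sigma=2.0):
    # Probability of judging the comparison object as larger, modeled as
    # a cumulative Gaussian of the physical size difference `delta` (mm).
    return 0.5 * (1.0 + math.erf(delta / (sigma * math.sqrt(2))))

def jnd(sigma=2.0, criterion=0.75):
    # JND: the size difference at which the "larger" response rate first
    # reaches the criterion level, found here by bisection.
    lo, hi = 0.0, 10.0 * sigma
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_larger(mid, sigma) < criterion:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Size differences set "below the JND", as in Experiment 1, are values of `delta` smaller than this threshold, where perceptual discrimination is near chance.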

2.

Background

Converging evidence indicates that action observation and action-related sounds cross-modally activate the human motor system. Because olfaction, the most ancestral sense, can have behavioural consequences for human activities, we used transcranial magnetic stimulation (TMS) to investigate causally whether food odour can additionally facilitate the human motor system during the observation of grasping of objects with alimentary valence, and how specific such effects are.

Methodology/Principal Findings

In a repeated-measures block design carried out on 24 healthy individuals participating in three different experiments, we show that sniffing alimentary odorants immediately increases the motor potentials evoked in hand muscles by TMS of the motor cortex. This effect was odorant-specific and was absent when subjects were presented with odorants that included a potentially noxious trigeminal component. The smell-induced corticospinal facilitation of hand muscles during observation of grasping was an additive effect, superimposed on that induced by the mere observation of grasping actions directed at food or non-food objects. The odour-induced motor facilitation occurred only when the sniffed odour was congruent with the observed grasped food, and specifically involved the muscle acting as prime mover for hand/finger shaping in the observed action.

Conclusions/Significance

Complex olfactory cross-modal effects on the human corticospinal system are physiologically demonstrable. They are odorant-specific and, depending on the experimental context, muscle- and action-specific as well. This finding implies potential new diagnostic and rehabilitative applications.

3.

Background

Substantial literature has demonstrated that how the hand approaches an object depends on the manipulative action that will follow object contact. Little is known about how the placement of individual fingers on objects is affected by the end-goal of the action.

Methodology/Principal Findings

Hand movement kinematics were measured during reaching and grasping movements toward two objects (stimuli): a bottle with an ordinary cylindrical shape and a bottle with a concave constriction. The effects of the stimuli's weight (half full or completely full of water) and of the end-goal of the action (pouring, moving) were also assessed. Analysis of key kinematic landmarks measured during the reaching movements indicates that object affordance facilitates the end-goal of the action regardless of accuracy constraints. Furthermore, the placement of individual digits at contact is modulated by the shape of the object and the end-goal of the action.

Conclusions/Significance

These findings offer a substantial contribution to the current debate about the role played by affordances and end-goals in determining the structure of reach-to-grasp movements.

4.

Background

The autonomic nervous system (ANS) is activated in parallel with the motor system during cyclical and effortful imagined actions. However, it is not clear whether the ANS is activated during motor imagery of discrete movements and whether this activation is specific to the movement being imagined. Here, we explored these topics by studying the baroreflex control of the cardiovascular system.

Methodology/Principal Findings

Arterial pressure and heart rate were recorded in ten subjects who executed or imagined trunk or leg movements against gravity. Trunk and leg movements result in different physiological reactions (orthostatic hypotension phenomenon) when they are executed. Interestingly, ANS activation significantly, but similarly, increased during imagined trunk and leg movements. Furthermore, we did not observe any physiological modulation during a control mental-arithmetic task or during motor imagery of effortless movements (horizontal wrist displacements).

Conclusions/Significance

We concluded that ANS activation during motor imagery is general rather than specific, and physiologically prepares the organism for the upcoming effortful action.

5.

Background

This study explored whether the high-resolution representations created by visual working memory (VWM) are constructed in a coarse-to-fine or all-or-none manner. The coarse-to-fine hypothesis suggests that coarse information precedes detailed information in entering VWM and that its resolution increases along with the processing time of the memory array, whereas the all-or-none hypothesis claims that either both enter into VWM simultaneously, or neither does.

Methodology/Principal Findings

We tested the two hypotheses by asking participants to remember two or four complex objects. An ERP component, contralateral delay activity (CDA), was used as the neural marker. CDA is higher for four objects than for two objects when coarse information is primarily extracted; yet, this CDA difference vanishes when detailed information is encoded. Experiment 1 manipulated the comparison difficulty of the task under a 500-ms exposure time to determine a condition in which the detailed information was maintained. No CDA difference was found between two and four objects, even in an easy-comparison condition. Thus, Experiment 2 manipulated the memory array's exposure time under the easy-comparison condition and found a significant CDA difference at 100 ms while replicating Experiment 1's results at 500 ms. In Experiment 3, the 500-ms memory array was blurred to block the detailed information; this manipulation reestablished a significant CDA difference.

Conclusions/Significance

These findings suggest that the creation of high-resolution representations in VWM is a coarse-to-fine process.
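As an illustration of the neural marker used here (not the authors' analysis pipeline), CDA is conventionally computed as the contralateral-minus-ipsilateral voltage difference over posterior electrodes during the retention interval; the set-size effect is then the difference in CDA between four-object and two-object arrays. A minimal sketch with made-up voltage samples:

```python
def cda(contra, ipsi):
    # Contralateral delay activity: mean contralateral-minus-ipsilateral
    # voltage (in microvolts) across retention-interval samples.
    return sum(c - i for c, i in zip(contra, ipsi)) / len(contra)

def set_size_effect(cda_four, cda_two):
    # The CDA difference between set sizes; its disappearance is the
    # signature used in the study to infer that detailed information
    # has been encoded into VWM.
    return cda_four - cda_two
```

In this scheme, a reliably nonzero `set_size_effect` marks the coarse-information stage, and its vanishing marks the detailed-information stage.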

6.

Background

Humans are able to track multiple simultaneously moving objects. A number of factors have been identified that can influence the ease with which objects can be attended and tracked. Here, we explored the possibility that object tracking abilities may be specialized for tracking biological targets such as people.

Methodology/Principal Findings

We used the Multiple Object Tracking (MOT) paradigm to explore whether the high-level biological status of the targets affects the efficiency of attentional selection and tracking. In Experiment 1, we assessed the tracking of point-light biological motion figures. As controls, we used either the same stimuli or point-light letters, presented in upright, inverted or scrambled configurations. While scrambling significantly affected performance for both letters and point-light figures, there was an effect of inversion restricted to biological motion, inverted figures being harder to track. In Experiment 2, we found that tracking performance was equivalent for natural point-light walkers and ‘moon-walkers’, whose implied direction was incongruent with their actual direction of motion. In Experiment 3, we found higher tracking accuracy for inverted faces compared with upright faces. Thus, there was a double dissociation between inversion effects for biological motion and faces, with no inversion effect for our non-biological stimuli (letters, houses).

Conclusions/Significance

MOT is sensitive to some, but not all, naturalistic aspects of biological stimuli. There does not appear to be a highly specialized role for tracking people. However, MOT appears constrained by principles of object segmentation and grouping: effectively grouped, coherent objects, but not necessarily biological objects, are tracked most successfully.

7.
Cothros N  Wong J  Gribble PL 《PloS one》2008,3(4):e1990

Background

Previous studies of learning to adapt reaching movements in the presence of novel forces show that learning multiple force fields is prone to interference. Recently it has been suggested that force field learning may reflect learning to manipulate a novel object. Within this theoretical framework, interference in force field learning may be the result of static tactile or haptic cues associated with grasp, which fail to indicate changing dynamic conditions. The idea that different haptic cues (e.g. those associated with different grasped objects) signal motor requirements and promote the learning and retention of multiple motor skills has previously been unexplored in the context of force field learning.

Methodology/Principal Findings

The present study tested the possibility that interference can be reduced when two different force fields are associated with differently shaped objects grasped in the hand. Human subjects were instructed to guide a cursor to targets while grasping a robotic manipulandum, which applied two opposing velocity-dependent curl fields to the hand. For one group of subjects the manipulandum was fitted with two different handles, one for each force field. No attenuation in interference was observed in these subjects relative to controls who used the same handle for both force fields.

Conclusions/Significance

These results suggest that in the context of the present learning paradigm, haptic cues on their own are not sufficient to reduce interference and promote the learning of multiple force fields.
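For concreteness, a velocity-dependent curl field of the kind described applies a force perpendicular to the hand's velocity, F = Bv with B an antisymmetric gain matrix; the two opposing fields simply flip the sign of the gain. A sketch under assumed units (the gain value is illustrative, not taken from the study):

```python
def curl_force(vx, vy, k=15.0):
    # Velocity-dependent curl field: force (N) orthogonal to hand
    # velocity (m/s). F = [[0, k], [-k, 0]] @ v; the opposing field
    # uses -k. The gain k (N*s/m) is a made-up illustrative value.
    return k * vy, -k * vx
```

Because the force is always orthogonal to the velocity, it does no mechanical work on the hand, yet it systematically deflects the trajectory, which is what makes such fields useful probes of motor adaptation.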

8.

Background

Previous research has shown that object recognition may develop well into late childhood and adolescence. The present study extends that research and reveals novel differences in holistic and analytic recognition performance in 7–12 year olds compared to that seen in adults. We interpret our data within a hybrid model of object recognition that proposes two parallel routes for recognition (analytic vs. holistic) modulated by attention.

Methodology/Principal Findings

Using a repetition-priming paradigm, we found in Experiment 1 that children showed no holistic priming, but only analytic priming. Given that holistic priming might be thought to be more ‘primitive’, we confirmed in Experiment 2 that our surprising finding was not because children’s analytic recognition was merely a result of name repetition.

Conclusions/Significance

Our results suggest a developmental primacy of analytic object recognition. By contrast, holistic object recognition skills appear to emerge with a much more protracted trajectory extending into late adolescence.

9.

Background

When viewing complex scenes, East Asians attend more to contexts whereas Westerners attend more to objects, reflecting cultural differences in holistic and analytic visual processing styles respectively. This eye-tracking study investigated more specific mechanisms and the robustness of these cultural biases in visual processing when salient changes in the objects and backgrounds occur in complex pictures.

Methodology/Principal Findings

Chinese Singaporean (East Asian) and Caucasian US (Western) participants passively viewed pictures containing selectively changing objects and background scenes that strongly captured participants' attention in a data-driven manner. We found that although participants from both groups responded to object changes in the pictures, there was still evidence for cultural divergence in eye-movements. The number of object fixations in the US participants was more affected by object change than in the Singapore participants. Additionally, despite the picture manipulations, US participants consistently maintained longer durations for both object and background fixations, with eye-movements that generally remained within the focal objects. In contrast, Singapore participants had shorter fixation durations with eye-movements that alternated more between objects and backgrounds.

Conclusions/Significance

The results demonstrate a robust cultural bias in visual processing even when external stimuli draw attention in a manner opposite to the cultural bias. These findings also extend previous studies by revealing more specific, but consistent, effects of culture on different aspects of visual attention as measured by fixation duration, number of fixations, and saccades between objects and backgrounds.

10.
Harris IM  Dux PE  Benito CT  Leek EC 《PloS one》2008,3(5):e2256

Background

An ongoing debate in the object recognition literature centers on whether the shape representations used in recognition are coded in an orientation-dependent or orientation-invariant manner. In this study, we asked whether the nature of the object representation (orientation-dependent vs orientation-invariant) depends on the information-processing stages tapped by the task.

Methodology/Principal Findings

We employed a repetition priming paradigm in which briefly presented masked objects (primes) were followed by an upright target object which had to be named as rapidly as possible. The primes were presented for variable durations (ranging from 16 to 350 ms) and in various image-plane orientations (from 0° to 180°, in 30° steps). Significant priming was obtained for prime durations above 70 ms, but not for prime durations of 16 ms and 47 ms, and did not vary as a function of prime orientation. In contrast, naming the same objects that served as primes resulted in orientation-dependent reaction time costs.

Conclusions/Significance

These results suggest that initial processing of object identity is mediated by orientation-independent information and that orientation costs in performance arise when objects are consolidated in visual short-term memory in order to be reported.

11.

Background

How does the brain estimate object stability? Objects fall over when the gravity-projected centre of mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information.

Methodology/Principal Findings

In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity).

Conclusions/Significance

Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
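To make the geometry concrete: under the simplifying assumption of a uniform rectangular object, the object topples once the tilt carries the gravity-projected centre of mass past the support edge, giving a critical angle of atan(w/h) for width w and height h. This is a schematic sketch, not the stimulus model used in the study:

```python
import math

def critical_angle_deg(width, height):
    # Tilt angle (degrees) at which the centre of mass of a uniform
    # rectangular object projects exactly onto the edge of its base:
    # atan((w/2) / (h/2)).
    return math.degrees(math.atan2(width / 2.0, height / 2.0))
```

A tilted multisensory estimate of gravity shifts which physical tilt reaches this criterion, which is the perceptual bias the experiment measures.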

12.

Background

A person is less likely to be accurately remembered if they appear in a visual scene with a gun, a result that has been termed the weapon focus effect (WFE). Explanations of the WFE argue that weapons engage attention because they are unusual and/or threatening, which causes encoding deficits for the other items in the visual scene. Previous WFE research has always embedded the weapon and nonweapon objects within a larger context that provides information about an actor's intention to use the object. As such, it is currently unknown whether a gun automatically engages attention to a greater extent than other objects independent of the context in which it is presented.

Method

Reflexive responding to a gun compared to other objects was examined in two experiments. Experiment 1 employed a prosaccade gap-overlap paradigm, whereby participants looked toward a peripheral target, and Experiment 2 employed an antisaccade gap-overlap paradigm, whereby participants looked away from a peripheral target. In both experiments, the peripheral target was a gun or a nonthreatening object (i.e., a tomato or pocket watch). We also controlled how unexpected the targets were and compared saccadic reaction times across types of objects.

Results

A gun was not found to differentially engage attention compared to the unexpected object (i.e., a pocket watch). Some evidence was found (Experiment 2) that both the gun and the unexpected object engaged attention to a greater extent compared to the expected object (i.e., a tomato).

Conclusion

An image of a gun did not engage attention to a greater extent than images of other types of objects (i.e., a pocket watch or tomato). The results suggest that context may be an important determinant of the WFE: the extent to which an object is threatening may depend on the larger context in which it is presented.

13.

Background

It has been reported that participants judge an object to be closer after a stick has been used to touch it than after touching it with the hand. In this study we try to find out why this is so.

Methodology

We showed six participants a cylindrical object on a table. On separate trials (randomly intermixed), participants either estimated verbally how far the object was from their body or touched a remembered location. Touching was done either with the hand or with a stick (in separate blocks). In three different sessions, participants touched either the object location or the location halfway to the object. Verbal judgments were given either in centimeters or in terms of whether the object would be reachable with the hand. No differences in verbal distance judgments or touching responses were found between the blocks in which the stick or the hand was used.

Conclusion

Instead of finding out why the judged distance changes when using a tool, we found that using a stick does not necessarily alter judged distances or judgments about the reachability of objects.

14.

Background

Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space.

Methodology

Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50, 100, 200, or 300 ms).

Principal Findings

Results showed a facilitation effect of TMS on reaction times in all tasks, regardless of the stimulated site, up to 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space.

Conclusion

This study provides the first evidence that motor brain areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.

15.

Background

Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it.

Methodology/Principal Findings

From a technical point of view, detecting motion boundaries for segmentation based on optic flow is a difficult task, because the flow estimated along such boundaries is generally unreliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of the human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for the detection of motion discontinuities and of occlusion regions, based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information from different model components of visual processing via feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions considerably improve kinetic boundary detection.

Conclusions/Significance

A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested with both artificial and real sequences including self and object motion.
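As a toy illustration of the first mechanism (spatial contrast in the flow field), and not the authors' neural model: a pixel can be flagged as a candidate kinetic boundary whenever its flow vector differs from any 4-neighbour's by more than a threshold. The threshold value is arbitrary here.

```python
def motion_discontinuities(flow, thresh=1.0):
    # `flow` is a 2-D grid of (u, v) optic-flow vectors. A cell is
    # flagged when the Euclidean difference between its flow vector
    # and that of any 4-neighbour exceeds `thresh`.
    h, w = len(flow), len(flow[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u, v = flow[y][x]
            for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nu, nv = flow[ny][nx]
                    if ((u - nu) ** 2 + (v - nv) ** 2) ** 0.5 > thresh:
                        mask[y][x] = True
    return mask
```

The model's point is precisely that such a local test is fragile on real flow estimates, which is why it combines discontinuity detection with occlusion detection and feedback.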

16.

Background

How do people sustain a visual representation of the environment? Currently, many researchers argue that a single visual working memory system sustains non-spatial object information such as colors and shapes. However, previous studies tested visual working memory for two-dimensional objects only. In consequence, the nature of visual working memory for three-dimensional (3D) object representation remains unknown.

Methodology/Principal Findings

Here, I show that when sustaining information about 3D objects, visual working memory clearly divides into two separate, specialized memory systems, rather than one system, as was previously thought. One memory system gradually accumulates sensory information, forming an increasingly precise view-dependent representation of the scene over the course of several seconds. A second memory system sustains view-invariant representations of 3D objects. The view-dependent memory system has a storage capacity of 3–4 representations and the view-invariant memory system has a storage capacity of 1–2 representations. These systems can operate independently from one another and do not compete for working memory storage resources.

Conclusions/Significance

These results provide evidence that visual working memory sustains object information in two separate, specialized memory systems. One memory system sustains view-dependent representations of the scene, akin to the view-specific representations that guide place recognition during navigation in humans, rodents and insects. The second memory system sustains view-invariant representations of 3D objects, akin to the object-based representations that underlie object cognition.

17.

Background

When people are asked to adjust the color of familiar objects such as fruits until they appear achromatic, the subjective gray points of the objects are shifted away from the physical gray points in a direction opposite to the memory color (memory color effect). It is still unclear whether the discrepancy between memorized and actual colors of objects is dependent on the familiarity of the objects. Here, we conducted two experiments in order to examine the relationship between the degree of a subject’s familiarity with objects and the degree of the memory color effect by using logographs of food and beverage companies.

Methods and Findings

In Experiment 1, we measured the memory color effects of logos that varied in familiarity (high, middle, or low). The results demonstrate that the memory color effect occurs only in the high-familiarity condition, not in the middle- and low-familiarity conditions. Furthermore, there is a positive correlation between the memory color effect and the actual number of domestic stores of the brand. In Experiment 2, we used a semantic priming task to assess the semantic association between logos and food/beverage names, in order to elucidate whether the memory color effect of logos relates to consumer brand cognition. We found that the semantic associations between logos and food/beverage names were stronger for high-familiarity brands than for low-familiarity brands only when the logos were colored correctly, but not when they were inappropriately colored or achromatic.

Conclusion

The current results provide behavioral evidence of the relationship between the familiarity of objects and the memory color effect and suggest that the memory color effect increases with the familiarity of objects, albeit not constantly.

18.

Background

Contemporary theories of motor control propose that motor planning involves predicting the consequences of actions. These predictions include the associated costs as well as the rewarding nature of a movement's outcome. The estimation of these costs and rewards would include the valence, that is, the pleasantness or unpleasantness, of the stimulus with which one is about to interact. The aim of this study was to test whether motor preparation encompasses valence.

Methodology/Principal Findings

The readiness potential, an electrophysiological marker of motor preparation, was recorded before the grasping of pleasant, neutral and unpleasant stimuli. Items used were balanced in weight and placed inside transparent cylinders to prompt a similar grip among trials. Compared with neutral stimuli, the grasping of pleasant stimuli was preceded by a readiness potential of lower amplitude, whereas that of unpleasant stimuli was associated with a readiness potential of higher amplitude.

Conclusions/Significance

We show for the first time that the sensorimotor cortex activity preceding the grasping of a stimulus is affected by its valence. The smaller readiness potential amplitudes found for pleasant stimuli could reflect the recruitment of pre-set motor repertoires, whereas the higher amplitudes found for unpleasant stimuli would emerge from a discrepancy between the required action and the stimuli's aversiveness. Our results indicate that the prediction of action outcomes encompasses an estimate of the valence of the stimulus with which one is about to interact.

19.

Background

Cerebral activation during planning of reaching movements occurs both in the superior parietal lobule (SPL) and premotor cortex (PM), and their activation seems to take place in parallel.

Methodology

The activation of the SPL and PM has been investigated using transcranial magnetic stimulation (TMS) during planning of reaching movements under visual guidance.

Principal Findings

A facilitatory effect was found when TMS was delivered over the parietal cortex at about half of the time from sight of the target to hand movement, independently of target location in space. Furthermore, at the same stimulation time, a similar facilitatory effect was found in PM, which is probably related to movement preparation.

Conclusions

These data contribute to the understanding of cortical dynamics in the parieto-frontal network and suggest that it is possible to interfere with the planning of reaching movements at different cortical points within a particular time window. Since similar effects may be produced at similar times on both the SPL and PM, parallel processing of visuomotor information is likely to take place in these regions.

20.

Background

The simultaneous tracking and identification of multiple moving objects encountered in everyday life requires one to correctly bind identities to objects. In the present study, we investigated the role of spatial configuration made by multiple targets when observers are asked to track multiple moving objects with distinct identities.

Methodology/Principal Findings

The overall spatial configuration made by the targets was manipulated: In the constant condition, the configuration remained as a virtual convex polygon throughout the tracking, and in the collapsed condition, one of the moving targets (critical target) crossed over an edge of the virtual polygon during tracking, destroying it. Identification performance was higher when the configuration remained intact than when it collapsed (Experiments 1a, 1b, and 2). Moreover, destroying the configuration affected the allocation of dynamic attention: the critical target captured more attention than did the other targets. However, observers were worse at identifying the critical target and were more likely to confuse it with the targets that formed the virtual crossed edge (Experiments 3–5). Experiment 6 further showed that the visual system constructs an overall configuration only by using the targets (and not the distractors); identification performance was not affected by whether the distractor violated the spatial configuration.

Conclusions/Significance

In sum, these results suggest that the visual system may integrate targets (but not distractors) into a spatial configuration during multiple identity tracking, which affects the distribution of dynamic attention and the updating of identity-location binding.
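The "virtual convex polygon" manipulation can be made precise with a standard convexity test (a generic geometric check, not the authors' stimulus code): the ordered target positions form a convex polygon exactly when all consecutive cross products share the same sign. The collapsed condition corresponds to the moment this test first fails.

```python
def is_convex(points):
    # True if the ordered 2-D positions form a convex polygon:
    # the z-components of all consecutive edge cross products
    # must share one sign (zeros, i.e. collinear runs, are ignored).
    n = len(points)
    sign = 0
    for i in range(n):
        ax, ay = points[i]
        bx, by = points[(i + 1) % n]
        cx, cy = points[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True
```

Applied frame by frame to the tracked target positions, this test distinguishes the constant condition (always convex) from the collapsed condition (convexity lost when the critical target crosses an edge).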
