Related Articles
20 related articles retrieved.
1.
Bystanders in a real-world social setting can influence people’s beliefs and behavior. This study examines whether this effect can be recreated in a virtual environment by exposing people to virtual bystanders in a classroom setting. Participants (n = 26) first witnessed virtual students answering questions from an English teacher, after which they were asked to answer questions from the teacher themselves as part of a simulated training for spoken English. During the experiment, the attitudes of the other virtual students in the classroom were manipulated; they could whisper either positive or negative remarks to each other while a virtual student or a participant was talking. The results show that the attitude the virtual bystanders expressed towards the participants affected their self-efficacy and their avoidance behavior. Furthermore, witnessing bystanders comment negatively on the performance of other students raised the participants’ heart rate when it was their turn to speak. Two-way interaction effects were also found on self-reported anxiety and self-efficacy. After witnessing the bystanders’ positive attitude towards peer students, participants’ self-efficacy when answering questions received a boost when the bystanders were also positive towards them, and a blow when the bystanders reversed their attitude by being negative towards them. However, inconsistency between the bystanders’ attitudes towards the virtual peers and towards the participants was not found to produce a larger change in participants’ beliefs than consistency did. Finally, the results also reveal that virtual flattery or destructive criticism affected the participants’ beliefs not only about the virtual bystanders but also about the neutral teacher. Together these findings show that virtual bystanders in a classroom can affect people’s beliefs, anxiety, and behavior.

2.
Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while the size of the drawing remained constant (visual incongruity), or the size of the examples remained constant while subjects were instructed to change the size of their drawing (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a ‘resized’ virtual template and actual picture information requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest for motor incongruity, and right pre-dorsal premotor activation occurred specifically in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.

3.
To react efficiently to potentially threatening stimuli, we have to be able to localize these stimuli in space. In daily life we are constantly moving, so our limbs can end up positioned on the opposite side of space. Therefore, a somatotopic frame of reference is insufficient to localize nociceptive stimuli. Here we investigated whether nociceptive stimuli are mapped into a spatiotopic frame of reference, and more specifically a peripersonal frame of reference, which takes into account the position of the body limbs in external space, as well as the occurrence of external objects presented near the body. Two temporal order judgment (TOJ) experiments were conducted, during which participants had to decide which of two nociceptive stimuli, one applied to each hand, had been presented first while their hands were either uncrossed or crossed over the body midline. The occurrence of the nociceptive stimuli was cued by uninformative visual cues. We found that the visual cues prioritized the perception of nociceptive stimuli applied to the hand lying on the cued side of space, irrespective of posture. Moreover, the influence of the cues was smaller when they were presented far in front of participants’ hands as compared to when they were presented in close proximity. Finally, participants’ temporal sensitivity was reduced by changing posture. These findings are compatible with the existence of a peripersonal frame of reference for the localization of nociceptive stimuli. This allows for the construction of a stable representation of our body and the space closely surrounding our body, enabling a quick and efficient reaction to potential physical threats.
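
The cueing effect in a TOJ design like this is typically quantified by fitting a psychometric function and reading off the shift in the point of subjective simultaneity (PSS). Below is a minimal Python sketch of that generic analysis; the cumulative-Gaussian form, the simulated 15 ms shift, and all parameter values are illustrative assumptions, not the authors' data or model.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def toj_curve(soa_ms, pss_ms, sigma_ms):
    """Cumulative-Gaussian psychometric function for a temporal order judgment
    (TOJ): probability of reporting 'cued-side stimulus first' as a function of
    stimulus onset asynchrony (SOA). pss_ms is the point of subjective
    simultaneity; sigma_ms sets the slope (the JND is proportional to it)."""
    return norm.cdf(soa_ms, loc=pss_ms, scale=sigma_ms)

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    soas = np.array([-90, -60, -30, -10, 10, 30, 60, 90], dtype=float)
    # Simulated responses: the visual cue shifts the PSS by about 15 ms.
    p_true = toj_curve(soas, pss_ms=-15.0, sigma_ms=35.0)
    prop = rng.binomial(40, p_true) / 40.0          # 40 trials per SOA
    (pss, sigma), _ = curve_fit(toj_curve, soas, prop, p0=(0.0, 30.0))
    print(f"fitted PSS = {pss:.1f} ms, slope parameter sigma = {sigma:.1f} ms")
```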

4.
Brain regions in the intraparietal and the premotor cortices selectively process visual and multisensory events near the hands (peri-hand space). Visual information from the hand itself modulates this processing, potentially because it is used to estimate the location of one’s own body and the surrounding space. In humans specific occipitotemporal areas process visual information of specific body parts such as hands. Here we used an fMRI block-design to investigate if anterior intraparietal and ventral premotor ‘peri-hand areas’ exhibit selective responses to viewing images of hands and viewing specific hand orientations. Furthermore, we investigated if the occipitotemporal ‘hand area’ is sensitive to viewed hand orientation. Our findings demonstrate increased BOLD responses in the left anterior intraparietal area when participants viewed hands and feet as compared to faces and objects. Anterior intraparietal and also occipitotemporal areas in the left hemisphere exhibited response preferences for viewing right hands with orientations commonly viewed for one’s own hand as compared to uncommon own hand orientations. Our results indicate that both anterior intraparietal and occipitotemporal areas encode visual limb-specific shape and orientation information.

5.
Successful navigation requires the ability to compute one’s location and heading from incoming multisensory information. Previous work has shown that this multisensory input comes in two forms: body-based idiothetic cues, from one’s own rotations and translations, and visual allothetic cues, from the environment (usually visual landmarks). However, exactly how these two streams of information are integrated is unclear, with some models suggesting the body-based idiothetic and visual allothetic cues are combined, while others suggest they compete. In this paper we investigated the integration of body-based idiothetic and visual allothetic cues in the computation of heading using virtual reality. In our experiment, participants performed a series of body turns of up to 360 degrees in the dark with only a brief flash (300 ms) of visual feedback en route. Because the environment was virtual, we had full control over the visual feedback and were able to vary the offset between this feedback and the true heading angle. By measuring the effect of the feedback offset on the angle participants turned, we were able to determine the extent to which they incorporated visual feedback as a function of the offset error. By further modeling this behavior we were able to quantify the computations people used. While there were considerable individual differences in performance on our task, with some participants mostly ignoring the visual feedback and others relying on it almost entirely, our modeling results suggest that almost all participants used the same strategy in which idiothetic and allothetic cues are combined when the mismatch between them is small, but compete when the mismatch is large. These findings suggest that participants update their estimate of heading using a hybrid strategy that mixes the combination and competition of cues.
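
A minimal sketch of the kind of hybrid combination/competition rule described above, assuming a reliability-weighted average whose visual weight is discounted as the cue mismatch grows. The weighting function, noise parameters, and mismatch scale are illustrative assumptions, not the authors' fitted model.

```python
import numpy as np

def hybrid_heading_estimate(idiothetic_deg, allothetic_deg,
                            sigma_idio=8.0, sigma_allo=4.0, mismatch_scale=20.0):
    """Combine body-based and visual heading cues: inverse-variance weighting
    when the cues agree, with the visual cue progressively discounted as the
    mismatch grows (a soft cue-competition rule). Parameters are illustrative."""
    mismatch = allothetic_deg - idiothetic_deg
    # Reliability-based weight for the visual cue under full combination.
    w_allo = (1.0 / sigma_allo**2) / (1.0 / sigma_allo**2 + 1.0 / sigma_idio**2)
    # Down-weight the visual cue when the mismatch is implausibly large.
    plausibility = np.exp(-0.5 * (mismatch / mismatch_scale)**2)
    return idiothetic_deg + w_allo * plausibility * mismatch

if __name__ == "__main__":
    for offset in (5.0, 20.0, 60.0):
        est = hybrid_heading_estimate(idiothetic_deg=180.0,
                                      allothetic_deg=180.0 + offset)
        print(f"feedback offset {offset:5.1f} deg -> estimate shifts "
              f"{est - 180.0:5.1f} deg toward the visual cue")
```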

6.
Social psychology is fundamentally the study of individuals in groups, yet there remain basic unanswered questions about group formation, structure, and change. We argue that the problem is methodological. Until recently, there was no way to track who was interacting with whom with anything approximating valid resolution and scale. In the current study we describe a new method that applies recent advances in image-based tracking to study incipient group formation and evolution with experimental precision and control. In this method, which we term “in vivo behavioral tracking,” we track individuals’ movements with a high definition video camera mounted atop a large field laboratory. We report results of an initial study that quantifies the composition, structure, and size of the incipient groups. We also apply in vivo spatial tracking to study participants’ tendency to cooperate as a function of their embeddedness in those crowds. We find that participants form groups of seven on average, are more likely to approach others of similar attractiveness and (to a lesser extent) gender, and that participants’ gender and attractiveness are both associated with their proximity to the spatial center of groups (such that women and attractive individuals are more likely than men and unattractive individuals to end up in the center of their groups). Furthermore, participants’ proximity to others early in the study predicted the effort they exerted in a subsequent cooperative task, suggesting that submergence in a crowd may predict social loafing. We conclude that in vivo behavioral tracking is a uniquely powerful new tool for answering longstanding, fundamental questions about group dynamics.
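
Once individuals' positions have been extracted from the overhead video, incipient groups can be read off the pairwise-distance graph. The sketch below illustrates one generic way to do this (transitive grouping of people within a distance threshold via union-find); the 1.5 m threshold and the clustering rule are illustrative assumptions, not the authors' published criteria.

```python
import numpy as np

def group_labels(positions, radius=1.5):
    """Label individuals into incipient groups: two people belong to the same
    group if they are within `radius` metres of each other (transitively).
    Simple union-find over the pairwise-distance graph; threshold illustrative."""
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= radius:
                union(i, j)
    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal([0, 0], 0.5, (6, 2)),   # one tight cluster
                     rng.normal([8, 8], 0.5, (5, 2))])  # another cluster
    print(group_labels(pts))
```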

7.
Body image disturbance (BID), considered a key feature in eating disorders, is a pervasive issue among young women. Accurate assessment of BID is critical, but the field is currently limited to self-report assessment methods. In the present study, we build upon existing research, and explore the utility of virtual reality (VR) to elicit and detect changes in BID across various immersive virtual environments. College-aged women with elevated weight and shape concerns (n = 38) and a non-weight and shape concerned control group (n = 40) were randomly exposed to four distinct virtual environments with high or low levels of body salience and social presence (i.e., presence of virtual others). Participants interacted with avatars of thin, normal weight, and overweight body size (BMI of approximately 18, 22, and 27 respectively) in virtual social settings (i.e., beach, party). We measured state-level body satisfaction (state BD) immediately after exposure to each environment. In addition, we measured participants’ minimum interpersonal distance, visual attention, and approach preference toward avatars of each size. Women with higher baseline BID reported significantly higher state BD in all settings compared to controls. Both groups reported significantly higher state BD in a beach with avatars as compared to other environments. In addition, women with elevated BID approached closer to normal weight avatars and looked longer at thin avatars compared to women in the control group. Our findings indicate that VR may serve as a novel tool for measuring state-level BID, with applications for measuring treatment outcomes. Implications for future research and clinical interventions are discussed.

8.
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats.
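
The geometric relation underlying sonar size constancy can be made explicit: for an idealized flat reflector, the aperture grows as the object gets wider or closer, so recovering absolute width requires combining aperture with perceived distance. The sketch below illustrates this relation; the flat-reflector geometry and the example numbers are simplifying assumptions, not the virtual-object parameters used in the study.

```python
import math

def sonar_aperture_deg(width_m, distance_m):
    """Angular extent from which echoes return from an object of a given
    width at a given distance (idealized flat-reflector geometry)."""
    return math.degrees(2.0 * math.atan(width_m / (2.0 * distance_m)))

def width_from_aperture(aperture_deg, distance_m):
    """Size constancy would amount to inverting the relation above:
    combining aperture and distance to recover absolute width."""
    return 2.0 * distance_m * math.tan(math.radians(aperture_deg) / 2.0)

if __name__ == "__main__":
    # The same 20 cm object subtends very different apertures at 0.5 m and 2 m,
    # yet aperture and distance together recover the same width.
    for d in (0.5, 1.0, 2.0):
        a = sonar_aperture_deg(0.20, d)
        print(f"d = {d:.1f} m: aperture = {a:5.2f} deg, "
              f"recovered width = {width_from_aperture(a, d) * 100:.1f} cm")
```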

9.
10.
We know much about mechanisms determining the perceived size and weight of lifted objects, but little about how these properties of size and weight affect the body representation (e.g. grasp aperture of the hand). Without vision, subjects (n = 16) estimated spacing between fingers and thumb (perceived grasp aperture) while lifting canisters of the same width (6.6 cm) but varied weights (300, 600, 900, and 1200 g). Lifts were performed by movement of either the wrist, elbow or shoulder to examine whether lifting with different muscle groups affects the judgement of grasp aperture. Results for perceived grasp aperture were compared with changes in perceived weight of objects of different sizes (5.2, 6.6, and 10 cm) but the same weight (600 g). When canisters of the same width but different weights were lifted, perceived grasp aperture decreased 4.8% [2.2–7.4] (mean [95% CI]; P < 0.001) from the lightest to the heaviest canister, no matter how they were lifted. For objects of the same weight but different widths, perceived weight decreased 42.3% [38.2–46.4] from narrowest to widest (P < 0.001), as expected from the size-weight illusion. Thus, despite a highly distorted perception of the weight of objects based on their size, we conclude that proprioceptive afferents maintain a reasonably stable perception of the aperture of the grasping hand over a wide range of object weights. Given the small magnitude of this ‘weight-grasp aperture’ illusion, we propose the brain has access to a relatively stable ‘perceptual ruler’ to aid the manipulation of different objects.

11.
12.
It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.

13.
Organelles serve as biochemical reactors in the cell, and often display characteristic scaling trends with cell size, suggesting mechanisms that coordinate their sizes. In this study, we measure the vacuole-cell size scaling trends in budding yeast using optical microscopy and a novel, to our knowledge, image analysis algorithm. Vacuole volume and surface area both show characteristic scaling trends with respect to cell size that are consistent among different strains. Rapamycin treatment was found to increase vacuole-cell size scaling trends for both volume and surface area. Unexpectedly, these increases did not depend on macroautophagy, as similar increases in vacuole size were observed in the autophagy deficient mutants atg1Δ and atg5Δ. Rather, rapamycin appears to act on vacuole size by inhibiting retrograde membrane trafficking, as the atg18Δ mutant, which is defective in retrograde trafficking, shows similar vacuole size scaling to rapamycin-treated cells and is itself insensitive to rapamycin treatment. Disruption of anterograde membrane trafficking in the apl5Δ mutant leads to complementary changes in vacuole size scaling. These quantitative results lead to a simple model for vacuole size scaling based on proportionality between cell growth rates and vacuole growth rates.
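
A proportional-growth model of the kind alluded to in the last sentence predicts a linear relation between organelle and cell volume: if dV_vac/dt = k·dV_cell/dt, then V_vac is a linear function of V_cell with slope k. The sketch below simulates and fits such a relation; the volumes, noise level, and slope are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

# Generic proportional-growth sketch (not the authors' fitted model): if the
# vacuole grows at a rate proportional to the cell's growth rate, then
# V_vac = V_vac0 + k * (V_cell - V_cell0), i.e. a straight line whose slope k
# is the scaling coefficient.

rng = np.random.default_rng(1)
cell_vol = rng.uniform(20.0, 120.0, 200)             # fl, illustrative range
k_true, v0_true = 0.15, 2.0
vac_vol = v0_true + k_true * cell_vol + rng.normal(0.0, 1.0, cell_vol.size)

k_fit, v0_fit = np.polyfit(cell_vol, vac_vol, 1)     # slope = scaling coefficient
print(f"fitted scaling slope k = {k_fit:.3f} (simulated truth {k_true})")
```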

14.
ZHANG Qingyin (张庆印), FAN Jun (樊军). Acta Ecologica Sinica (生态学报), 2013, 33(24): 7739-7747
Supported by GIS and RS techniques, and based on a 2010 WV-1 high-resolution remote sensing image of the Liudaogou (六道沟) small watershed in the agro-pastoral transition zone, we compiled a landscape-type map of the watershed, selected classic landscape indices, and examined, at both the class and whole-landscape levels, how the landscape pattern indices of this small watershed change with grain size, as well as the correlations among the indices. The results show that within the grain-size range of 1–50 m the landscape pattern indices of the Liudaogou watershed exhibit a clear "critical grain size" phenomenon: overall, at 0.5 m image resolution the critical grain size is 10 m, and the grain-size range suitable for calculation is 5–10 m. Therefore, when high-resolution imagery is used to predict, compare, or evaluate small-watershed landscapes, the effect of grain size must be taken into account and appropriate grain-size conversion performed. The landscape morphology of this agro-pastoral small watershed shows fractal characteristics; the fractal dimensions of different patch types respond differently to changes in grain size, and the fractal dimension decreases nonlinearly as grain size increases, indicating that landscape-type boundaries become simpler. The correlation analysis quantifies how strongly the selected landscape indices are affected by grain-size changes and provides a reference for subsequent studies of landscape pattern changes in agro-pastoral small watersheds driven by the "Grain for Green" (returning farmland to forest and grassland) program.
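
The grain-size dependence described above can be illustrated with a toy raster analysis: aggregate a categorical land-cover map to progressively coarser grains and recompute a landscape metric at each grain. The sketch below uses a synthetic binary map, majority-rule aggregation, and edge density as the metric; the map, the 0.5 m base resolution, and the chosen grains are illustrative assumptions, not the Liudaogou data or the indices used in the paper.

```python
import numpy as np

def edge_density(raster, cell_size):
    """Edge density: total length of class boundaries per unit map area."""
    horiz = np.count_nonzero(raster[:, 1:] != raster[:, :-1])
    vert = np.count_nonzero(raster[1:, :] != raster[:-1, :])
    area = raster.size * cell_size ** 2
    return (horiz + vert) * cell_size / area

def coarsen(raster, factor):
    """Majority-rule aggregation of a binary raster to a coarser grain."""
    h = (raster.shape[0] // factor) * factor
    w = (raster.shape[1] // factor) * factor
    blocks = raster[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return (blocks.mean(axis=(1, 3)) >= 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic land cover: 5 m patches on a 0.5 m grid, plus fine speckle.
    coarse_patches = rng.random((40, 40)) > 0.5
    cover = np.kron(coarse_patches, np.ones((10, 10), dtype=int))
    speckle = rng.random(cover.shape) < 0.05
    cover = np.where(speckle, 1 - cover, cover)
    for factor, grain_m in [(1, 0.5), (2, 1.0), (5, 2.5), (10, 5.0), (20, 10.0)]:
        ed = edge_density(coarsen(cover, factor), grain_m)
        print(f"grain {grain_m:4.1f} m: edge density = {ed:.3f} per m")
```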

15.
To examine whether the recent price patterns and transaction costs of Bitcoin represent a general characteristic of decentralized virtual currencies, we analyze virtual currencies in online games that have been voluntarily managed by individuals since the 1990s. We find that matured game currencies have price stability similar to that of small-size equities or gold, and their transaction costs are sometimes lower than those of real currencies. Assuming that virtual currencies with a longer history can provide an estimate for Bitcoin’s prospects, we project that Bitcoin will be less influenced by speculative trades and become a low-cost alternative to real currencies.
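
Price stability comparisons of this kind are commonly based on the volatility of log returns. The sketch below shows a generic annualized-volatility calculation on simulated price series; the estimator, the sampling frequency, and the simulated series are illustrative assumptions rather than the measures reported in the paper.

```python
import numpy as np

def annualized_volatility(prices, periods_per_year=365):
    """Price-stability proxy: standard deviation of log returns, annualized.
    A generic measure, not the specific estimator used in the paper."""
    log_ret = np.diff(np.log(np.asarray(prices, dtype=float)))
    return log_ret.std(ddof=1) * np.sqrt(periods_per_year)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    stable = 100 * np.exp(np.cumsum(rng.normal(0, 0.005, 365)))      # calm series
    speculative = 100 * np.exp(np.cumsum(rng.normal(0, 0.05, 365)))  # volatile series
    print(f"stable series volatility:      {annualized_volatility(stable):.2f}")
    print(f"speculative series volatility: {annualized_volatility(speculative):.2f}")
```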

16.
Anticipatory force planning during grasping is based on visual cues about the object’s physical properties and sensorimotor memories of previous actions with grasped objects. Vision can be used to estimate object mass based on the object size to identify and recall sensorimotor memories of previously manipulated objects. It is not known whether subjects can use density cues to identify the object’s center of mass (CM) and create compensatory moments in an anticipatory fashion during initial object lifts to prevent tilt. We asked subjects (n = 8) to estimate CM location of visually symmetric objects of uniform densities (plastic or brass, symmetric CM) and non-uniform densities (mixture of plastic and brass, asymmetric CM). We then asked whether subjects can use density cues to scale fingertip forces when lifting the visually symmetric objects of uniform and non-uniform densities. Subjects were able to accurately estimate an object’s center of mass based on visual density cues. When the mass distribution was uniform, subjects could scale their fingertip forces in an anticipatory fashion based on the estimation. However, despite their ability to explicitly estimate CM location when object density was non-uniform, subjects were unable to scale their fingertip forces to create a compensatory moment and prevent tilt on initial lifts. Hefting object parts in the hand before the experiment did not affect this ability. This suggests a dichotomy between the ability to accurately identify the object’s CM location for objects with non-uniform density cues and the ability to utilize this information to correctly scale their fingertip forces. These results are discussed in the context of possible neural mechanisms underlying sensorimotor integration linking visual cues and anticipatory control of grasping.
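
The compensatory moment the subjects failed to produce can be written down from statics: to lift an object whose CM is offset horizontally from the grasp axis without tilt, the digits must generate a moment equal to the object's weight times the CM offset, for example by sharing the vertical load forces asymmetrically. The sketch below works through that torque balance for an idealized two-digit grip; the mass, grip width, and offsets are illustrative assumptions, not the objects used in the experiment.

```python
def digit_load_forces(mass_kg, cm_offset_m, grip_width_m, g=9.81):
    """Vertical load-force sharing between two digits needed to lift an object
    without tilt when its CM is offset horizontally from the midpoint of the
    grasp. Pure statics; digits assumed at +/- grip_width/2 from the midpoint."""
    weight = mass_kg * g
    f_near = weight * (0.5 + cm_offset_m / grip_width_m)   # digit on the CM side
    f_far = weight - f_near
    compensatory_moment = weight * cm_offset_m              # = (f_near - f_far) * w/2
    return f_near, f_far, compensatory_moment

if __name__ == "__main__":
    for d in (0.0, 0.01, 0.02):       # CM offsets of 0, 1 and 2 cm (illustrative)
        near, far, m_c = digit_load_forces(0.4, d, 0.06)
        print(f"offset {d * 100:3.0f} cm: near digit {near:.2f} N, "
              f"far digit {far:.2f} N, compensatory moment {m_c * 1000:.0f} N·mm")
```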

17.
Effective vision for action and effective management of concurrent spatial relations underlie skillful manipulation of objects, including hand tools, in humans. Children’s performance in object insertion tasks (fitting tasks) provides one index of the striking changes in the development of vision for action in early life. Fitting tasks also tap children’s ability to work with more than one feature of an object concurrently. We examine young children’s performance on fitting tasks in two and three dimensions and compare their performance with the previously reported performance of adult individuals of two species of nonhuman primates on similar tasks. Two-, three-, and four-year-old children routinely aligned a bar-shaped stick and a cross-shaped stick but had difficulty aligning a tomahawk-shaped stick to a matching cut-out. Two-year-olds were especially challenged by the tomahawk. Three- and four-year-olds occasionally held the stick several inches above the surface, comparing the stick to the surface visually, while trying to align it. The findings suggest asynchronous development in the ability to use vision to achieve alignment and to work with two and three spatial features concurrently. Using vision to align objects precisely to other objects and managing more than one spatial relation between an object and a surface are already more elaborated in two-year-old humans than in other primates. The human advantage in using hand tools derives in part from this fundamental difference in the relation between vision and action between humans and other primates.

18.
In dynamic environments, it is crucial to accurately consider the timing of information. For instance, during saccades the eyes rotate so fast that even small temporal errors in relating retinal stimulation by flashed stimuli to extra-retinal information about the eyes’ orientations will give rise to substantial errors in where the stimuli are judged to be. If spatial localization involves judging the eyes’ orientations at the estimated time of the flash, we should be able to manipulate the pattern of mislocalization by altering the estimated time of the flash. We reasoned that if we presented a relevant flash within a short rapid sequence of irrelevant flashes, participants’ estimates of when the relevant flash was presented might be shifted towards the centre of the sequence. In a first experiment, we presented five bars at different positions around the time of a saccade. Four of the bars were black. Either the second or the fourth bar in the sequence was red. The task was to localize the red bar. We found that when the red bar was presented second in the sequence, it was judged to be further in the direction of the saccade than when it was presented fourth in the sequence. Could this be because the red bar was processed faster when more black bars preceded it? In a second experiment, a red bar was either presented alone or followed by two black bars. When two black bars followed it, it was judged to be further in the direction of the saccade. We conclude that the spatial localization of flashed stimuli involves judging the eye orientation at the estimated time of the flash.
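
The core argument, that an error in estimating when the flash occurred translates into a spatial error proportional to how far the eye rotates during that interval, can be made concrete with a toy saccade profile. In the sketch below, the cosine-shaped 20 degree, 60 ms saccade and the timing errors are illustrative assumptions, not the authors' measured eye movements.

```python
import numpy as np

def eye_orientation(t_ms, amplitude_deg=20.0, duration_ms=60.0):
    """Illustrative saccade profile: smooth (cosine-ramp) rotation from 0 to
    amplitude_deg over duration_ms, starting at t = 0."""
    t = np.clip(t_ms / duration_ms, 0.0, 1.0)
    return amplitude_deg * 0.5 * (1.0 - np.cos(np.pi * t))

def judged_position(retinal_pos_deg, flash_time_ms, time_estimate_error_ms):
    """If localization adds the eye's orientation at the *estimated* flash time
    to the retinal position, a timing error during the saccade converts
    directly into a spatial error."""
    return retinal_pos_deg + eye_orientation(flash_time_ms + time_estimate_error_ms)

if __name__ == "__main__":
    true_pos = 0.0 + eye_orientation(30.0)            # flash at mid-saccade
    for dt in (-20.0, 0.0, 20.0):                      # ms; illustrative errors
        err = judged_position(0.0, 30.0, dt) - true_pos
        print(f"flash-time estimate off by {dt:+5.1f} ms -> "
              f"localization error {err:+5.1f} deg")
```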

19.
One major question in molecular biology is whether the spatial distribution of observed molecules is random or organized in clusters. Indeed, this analysis gives information about molecules’ interactions and physical interplay with their environment. The standard tool for analyzing molecules’ distribution statistically is Ripley’s K function, which tests spatial randomness through the computation of its critical quantiles. However, quantiles’ computation is very cumbersome, hindering its use. Here, we present an analytical expression of these quantiles, leading to a fast and robust statistical test, and we derive the characteristic clusters’ size from the maxima of Ripley’s K function. Subsequently, we analyze the spatial organization of endocytic spots at the cell membrane and we report that clathrin spots are randomly distributed while clathrin-independent spots are organized in clusters with a characteristic radius, which suggests distinct physical mechanisms and cellular functions for each pathway.
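
A minimal sketch of a Ripley's K analysis in the spirit described above: estimate K(r) for a point pattern, compare it with the value expected under complete spatial randomness, and read a characteristic cluster scale from the maximum of a variance-stabilized form of K. The naive estimator (no edge correction), the simulated clustered pattern, and the use of the H-function maximum are simplifying assumptions, not the analytical quantiles derived in the paper.

```python
import numpy as np

def ripley_k(points, radii, area):
    """Naive Ripley's K estimate (no edge correction): the mean number of
    further points within distance r of a typical point, divided by the
    overall point density."""
    points = np.asarray(points, dtype=float)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    n = len(points)
    return np.array([(d <= r).sum() for r in radii]) * area / (n * (n - 1))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Simulated clustered pattern in a 10 x 10 window: parent centres plus
    # Gaussian offsets (sigma = 0.15).
    centres = rng.uniform(0, 10, (20, 2))
    pts = np.concatenate([c + rng.normal(0, 0.15, (10, 2)) for c in centres])
    radii = np.linspace(0.05, 2.0, 40)
    k = ripley_k(pts, radii, area=100.0)
    h = np.sqrt(k / np.pi) - radii        # variance-stabilized form; under
                                          # complete spatial randomness H ~ 0
    r_star = radii[np.argmax(h)]          # rough indicator of cluster scale
    print(f"H(r) peaks at r = {r_star:.2f}; H > 0 indicates clustering")
```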

20.
Active exploration of large-scale environments leads to better learning of spatial layout than does passive observation [1] [2] [3]. But active exploration might also help us to remember the appearance of individual objects in a scene. In fact, when we encounter new objects, we often manipulate them so that they can be seen from a variety of perspectives. We present here the first evidence that active control of the visual input in this way facilitates later recognition of objects. Observers who actively rotated novel, three-dimensional objects on a computer screen later showed more efficient visual recognition than observers who passively viewed the exact same sequence of images of these virtual objects. During active exploration, the observers focused mainly on the 'side' or 'front' views of the objects (see also [4] [5] [6]). The results demonstrate that how an object is represented for later recognition is influenced by whether or not one controls the presentation of visual input during learning.
