Similar Literature
20 similar documents found.
1.
Despite the well-established involvement of both sensory (“bottom-up”) and cognitive (“top-down”) processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported “far-transfer” to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 × 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with the highest span in the MG relative to the other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.

2.
Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult “tutors”, and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

3.
Rapid integration of biologically relevant information is crucial for the survival of an organism. Most prominently, humans should be biased to attend and respond to looming stimuli that signal approaching danger (e.g. a predator) and hence require rapid action. This psychophysics study used binocular rivalry to investigate the perceptual advantage of looming (relative to receding) visual signals (i.e. the looming bias) and how this bias can be influenced by concurrent auditory looming/receding stimuli and the statistical structure of the auditory and visual signals. Subjects were dichoptically presented with looming/receding visual stimuli that were paired with looming or receding sounds. The visual signals conformed to two different statistical structures: (1) a “simple” random-dot kinematogram showing a starfield and (2) a “naturalistic” visual Shepard stimulus. Likewise, the looming/receding sound was (1) a simple amplitude- and frequency-modulated (AM-FM) tone or (2) a complex Shepard tone. Our results show that the perceptual looming bias (i.e. the increase in dominance times for looming versus receding percepts) is amplified by looming sounds, yet reduced and even converted into a receding bias by receding sounds. Moreover, the influence of looming/receding sounds on the visual looming bias depends on the statistical structure of both the visual and auditory signals. It is enhanced when the audiovisual signals are Shepard stimuli. In conclusion, visual perception prioritizes processing of biologically significant looming stimuli, especially when they are paired with looming auditory signals. Critically, these audiovisual interactions are amplified for statistically complex signals that are more naturalistic and known to engage neural processing at multiple levels of the cortical hierarchy.
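The looming bias described above is defined from dominance durations under binocular rivalry. As a minimal sketch of how such a bias index could be computed (the function, variable names, and data are illustrative assumptions, not the authors' analysis code):

```python
import numpy as np

def looming_bias(dom_looming, dom_receding):
    """Looming bias from binocular-rivalry dominance durations (seconds).

    Positive values mean the looming percept dominated for longer overall;
    negative values indicate a receding bias.
    """
    t_loom = np.sum(dom_looming)   # total dominance time of the looming percept
    t_rec = np.sum(dom_receding)   # total dominance time of the receding percept
    return (t_loom - t_rec) / (t_loom + t_rec)

# Hypothetical dominance phases (s) from one observer in one condition
print(f"bias = {looming_bias([2.1, 3.4, 2.8], [1.0, 1.6, 1.2]):+.2f}")  # > 0: looming dominates
```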

4.
It is well known that the planum temporale (PT) area in the posterior temporal lobe carries out spectro-temporal analysis of auditory stimuli, which is crucial for speech, for example. There are suggestions that the PT is also involved in auditory attention, specifically in the discrimination and selection of stimuli from the left and right ear. However, direct evidence has so far been lacking. To examine the role of the PT in auditory attention we asked fourteen participants to complete the Bergen Dichotic Listening Test. In this test two different consonant-vowel syllables (e.g., “ba” and “da”) are presented simultaneously, one to each ear, and participants are asked to verbally report the syllable they heard best or most clearly. Thus, attentional selection of a syllable is stimulus-driven. Each participant completed the test three times: after their left and right PT (located with anatomical brain scans) had been stimulated with repetitive transcranial magnetic stimulation (rTMS), which transiently interferes with normal brain functioning in the stimulated sites, and after sham stimulation, where participants were led to believe they had been stimulated but no rTMS was applied (control). After sham stimulation the typical right ear advantage emerged, that is, participants reported relatively more right than left ear syllables, reflecting a left-hemispheric dominance for language. rTMS over the right but not left PT significantly reduced the right ear advantage. This was the result of participants reporting more left and fewer right ear syllables after right PT stimulation, suggesting there was a leftward shift in stimulus selection. Taken together, our findings point to a new function of the PT in addition to auditory perception: the right PT in particular is involved in stimulus selection and (stimulus-driven) auditory attention.
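The right ear advantage in dichotic listening is conventionally summarized with a laterality index over reported syllables; the sketch below uses the common (R - L)/(R + L) formulation, which is an assumed scoring convention rather than this paper's exact analysis:

```python
def laterality_index(right_reports, left_reports):
    """Percent laterality index for dichotic listening.

    Positive values indicate a right ear advantage (REA),
    negative values a left ear advantage.
    """
    return 100.0 * (right_reports - left_reports) / (right_reports + left_reports)

# Hypothetical syllable-report counts per stimulation condition
print(laterality_index(right_reports=22, left_reports=14))  # sham: clear REA
print(laterality_index(right_reports=17, left_reports=18))  # right-PT rTMS: REA reduced
```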

5.
Atypical face processing plays a key role in social interaction difficulties encountered by individuals with autism. In the current fMRI study, the Thatcher illusion was used to investigate several aspects of face processing in 20 young adults with high-functioning autism spectrum disorder (ASD) and 20 matched neurotypical controls. “Thatcherized” stimuli were modified at either the eyes or the mouth and participants discriminated between pairs of faces while cued to attend to either of these features in upright and inverted orientation. Behavioral data confirmed sensitivity to the illusion and intact configural processing in ASD. Directing attention towards the eyes vs. the mouth in upright faces in ASD led to (1) improved discrimination accuracy; (2) increased activation in areas involved in social and emotional processing; (3) increased activation in subcortical face-processing areas. Our findings show that when explicitly cued to attend to the eyes, activation of cortical areas involved in face processing, including its social and emotional aspects, can be enhanced in autism. This suggests that impairments in face processing in autism may be caused by a deficit in social attention, and that giving specific cues to attend to the eye-region when performing behavioral therapies aimed at improving social skills may result in a better outcome.

6.
Background

Patients with schizophrenia are deficient in multiple aspects of social cognition, including biological motion perception. In the present study we investigated the ability to read social information from point-light stimuli in schizophrenia.

Conclusions/Significance

These findings are consistent with theories of “overmentalizing” (excessive attribution of intentionality) in schizophrenia, and suggest that processing social information from biological motion does rely on social cognition abilities.

7.
In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, “delta-brushes,” in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic “clicks” near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, a composite stimulus (“click” plus voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, frequency-power analysis and stimulus-locked averaging. Before 34 PMW, both voice and “click” stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW the topography of responses in quiet sleep differed for “click” and voice stimuli: responses to “clicks” became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW auditory evoked delta-brushes progressively disappeared and were replaced by a low amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex during fetal stages and provide a potential test of functional cortical maturation during fetal development.
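Stimulus-locked averaging, one of the analyses named above, attenuates ongoing background EEG so that the evoked response stands out. A minimal single-channel sketch under assumed data shapes (the helper function and its parameters are illustrative):

```python
import numpy as np

def stimulus_locked_average(eeg, onsets, fs, pre=0.5, post=2.0):
    """Average EEG epochs time-locked to stimulus onsets.

    eeg    : 1-D array, one channel
    onsets : stimulus onsets as sample indices
    fs     : sampling rate in Hz
    pre    : seconds of pre-stimulus baseline to keep
    post   : seconds of post-stimulus activity to keep
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[i - n_pre:i + n_post] for i in onsets
              if i - n_pre >= 0 and i + n_post <= len(eeg)]
    return np.mean(epochs, axis=0)  # evoked response, n_pre + n_post samples long

# Hypothetical usage: 60 s of 256 Hz EEG, stimuli at three sample indices
fs = 256
eeg = np.random.randn(60 * fs)  # stand-in for a real recording
evoked = stimulus_locked_average(eeg, onsets=[1280, 5120, 9216], fs=fs)
```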

8.
In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan.

9.
How does the brain integrate multiple sources of information to support normal sensorimotor and cognitive functions? To investigate this question we present an overall brain architecture (called “the dual intertwined rings architecture”) that relates the functional specialization of cortical networks to their spatial distribution over the cerebral cortex (or “corticotopy”). Recent results suggest that the resting state networks (RSNs) are organized into two large families: 1) a sensorimotor family that includes visual, somatic, and auditory areas and 2) a large association family that comprises parietal, temporal, and frontal regions and also includes the default mode network. We used two large databases of resting state fMRI data, from which we extracted 32 robust RSNs. We estimated: (1) the RSN functional roles, by projecting the results on task-based networks (TBNs) as referenced in large databases of fMRI activation studies; and (2) the relationship of the RSNs to the Brodmann areas. In both classifications, the 32 RSNs are organized into a remarkable architecture of two intertwined rings per hemisphere, and thus four rings linked by homotopic connections. The first ring forms a continuous ensemble and includes visual, somatic, and auditory cortices, with interspersed bimodal cortices (auditory-visual, visual-somatic and auditory-somatic; abbreviated as the VSA ring). The second ring integrates distant parietal, temporal and frontal regions (the PTF ring) through a network of association fiber tracts which closes the ring anatomically and ensures functional continuity within the ring. The PTF ring relates association cortices specialized in attention, language and working memory to the networks involved in motivation and biological regulation and rhythms. This “dual intertwined architecture” suggests a dual integrative process: the VSA ring performs fast real-time multimodal integration of sensorimotor information, whereas the PTF ring performs multi-temporal integration (i.e., relates past, present, and future representations at different temporal scales).

10.
To investigate the role of experience in humans’ perception of emotion using canine visual signals, we asked adults with various levels of dog experience to interpret the emotions of dogs displayed in videos. The video stimuli had been pre-categorized by an expert panel of dog behavior professionals as showing examples of happy or fearful dog behavior. In a sample of 2,163 participants, the level of dog experience strongly predicted identification of fearful, but not of happy, emotional examples. The probability of selecting the “fearful” category to describe fearful examples increased with experience and ranged from .30 among those who had never lived with a dog to greater than .70 among dog professionals. In contrast, the probability of selecting the “happy” category to describe happy emotional examples varied little by experience, ranging from .90 to .93. In addition, the number of physical features of the dog that participants reported using for emotional interpretations increased with experience, and in particular, more-experienced respondents were more likely to attend to the ears. Lastly, more-experienced respondents provided lower difficulty and higher accuracy self-ratings than less-experienced respondents when interpreting both happy and fearful emotional examples. The human perception of emotion in other humans has previously been shown to be sensitive to individual differences in social experience, and the results of the current study extend the notion of experience-dependent processes from the intraspecific to the interspecific domain.

11.

Background

The sound-induced flash illusion is an auditory-visual illusion – when a single flash is presented along with two or more beeps, observers report seeing two or more flashes. Previous research has shown that the illusion gradually disappears as the temporal delay between auditory and visual stimuli increases, suggesting that the illusion is consistent with existing temporal rules of neural activation to multisensory stimuli in the superior colliculus. However, little is known about the effect of spatial incongruence, and whether the illusion follows the corresponding spatial rule. If the illusion occurs less strongly when auditory and visual stimuli are separated, then integrative processes supporting the illusion must be strongly dependent on spatial congruence. In this case, the illusion would be consistent with both the spatial and temporal rules describing response properties of multisensory neurons in the superior colliculus.

Methodology/Principal Findings

The main aim of this study was to investigate the importance of spatial congruence in the flash-beep illusion. Selected combinations of one to four short flashes and zero to four short 3.5 kHz tones were presented. Observers were asked to count the number of flashes they saw. After replication of the basic illusion using centrally-presented stimuli, the auditory and visual components of the illusion stimuli were presented either both 10 degrees to the left or right of fixation (spatially congruent) or on opposite (spatially incongruent) sides, for a total separation of 20 degrees.

Conclusions/Significance

The sound-induced flash fission illusion was successfully replicated. However, when the sources of the auditory and visual stimuli were spatially separated, perception of the illusion was unaffected, suggesting that the “spatial rule” does not extend to describing behavioural responses in this illusion. We also found no evidence for an associated “fusion” illusion reportedly occurring when multiple flashes are accompanied by a single beep.
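In a design like this, fission strength can be expressed as the mean number of reported flashes minus the number physically presented, computed separately for spatially congruent and incongruent trials. A minimal sketch with made-up trial data (not the study's actual results):

```python
import numpy as np

def fission_strength(reported_flashes, n_flashes_presented):
    """Mean reported minus physically presented flashes.

    Positive values indicate fission (illusory extra flashes);
    negative values would indicate fusion.
    """
    return np.mean(reported_flashes) - n_flashes_presented

# Hypothetical per-trial counts for one flash paired with two beeps
congruent = [2, 2, 1, 2, 2, 1, 2, 2]     # sound and flash on the same side
incongruent = [2, 1, 2, 2, 2, 2, 1, 2]   # separated by 20 degrees
print(fission_strength(congruent, 1))     # clear fission effect
print(fission_strength(incongruent, 1))   # similar size: separation has no effect
```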

12.
Findings on song perception and song production have increasingly suggested that common but partially distinct neural networks exist for processing lyrics and melody. However, the neural substrates of song recognition remain to be investigated. The purpose of this study was to use positron emission tomography (PET) to examine the neural substrates involved in accessing the “song lexicon”, a representational system that might provide links between the musical and phonological lexicons. We exposed participants to auditory stimuli consisting of familiar and unfamiliar songs presented in three ways: sung lyrics (song), sung lyrics on a single pitch (lyrics), and the syllable “la” sung on the original pitches (melody). The auditory stimuli were designed to have equivalent familiarity to participants, and they were recorded at exactly the same tempo. Eleven right-handed nonmusicians participated in four conditions: three familiarity decision tasks using song, lyrics, and melody, and a sound type decision task (control) that was designed to engage perceptual and prelexical processing but not lexical processing. The contrasts (familiarity decision tasks versus control) showed no common areas of activation between lyrics and melody. This result indicates that essentially separate neural networks exist in semantic memory for the verbal and melodic processing of familiar songs. Verbal lexical processing recruited the left fusiform gyrus and the left inferior occipital gyrus, whereas melodic lexical processing engaged the right middle temporal sulcus and the bilateral temporo-occipital cortices. Moreover, we found that song specifically activated the left posterior inferior temporal cortex, which may serve as an interface between verbal and musical representations in order to facilitate song recognition.

13.
Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem, as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). “Neural Pitch Salience” (NPS) measured from FFRs—essentially a time-domain equivalent of the classic pattern recognition models of pitch—has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as a place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully controlled synthetic sounds, when probing auditory perception.
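Time-domain pattern recognition models of pitch typically score a candidate pitch by the height of the normalized autocorrelation peak at the corresponding lag, so an NPS-style measure can be sketched that way. The code below is an illustrative approximation under that assumption, not the authors' exact pipeline:

```python
import numpy as np

def neural_pitch_salience(ffr, fs, fmin=80.0, fmax=400.0):
    """Pitch salience of an FFR via normalized autocorrelation.

    Returns (best_f0_hz, salience): salience is the height of the largest
    autocorrelation peak within the candidate pitch range, normalized so
    that a perfectly periodic signal scores close to 1.
    """
    x = ffr - np.mean(ffr)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags >= 0
    ac /= ac[0]                                        # normalize by lag-0 energy
    lo, hi = int(fs / fmax), int(fs / fmin)            # lag range of f0 candidates
    k = lo + np.argmax(ac[lo:hi])
    return fs / k, ac[k]

# Hypothetical usage: a 100 Hz-periodic stand-in "FFR" sampled at 10 kHz
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
ffr = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
print(neural_pitch_salience(ffr, fs))  # approximately (100.0, near 1)
```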

14.
Previous research has examined our ability to attend selectively to particular features of perceptual objects, as well as our ability to switch from attending to one type of feature to another. This is usually done in the context of anticipatory attentional-set control, comparing the neural mechanisms involved as participants prepare to attend to the same stimulus feature as on the previous trial (“task-stay” trials) with those required as participants prepare to attend to a different stimulus feature from that previously attended (“task-switch” trials). We wanted to establish how participants maintain or switch attentional set retrospectively, as they attend to features of objects held in visual short-term memory (VSTM). We found that switching, relative to maintaining, attentional set retrospectively was associated with a performance cost that could be reduced over time. This control process was mirrored by a large parietal and frontal amplitude difference in the event-related brain potentials (ERPs) and significant differences in global field power (GFP) between switch and stay trials. However, when taking into account the switch/stay GFP differences, thereby controlling for this difference in amplitude, we could not distinguish these trial types topographically. By contrast, we found clear topographic differences between preparing an anticipatory feature-based attentional set and applying it retrospectively within VSTM. These complementary topographical and amplitude analyses suggested that anticipatory and retrospective set control recruited qualitatively different configurations of underlying neural generators. In contrast, switch/stay differences were largely quantitative, differing primarily in amplitude rather than topography.
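Global field power is a standard reference-free summary of ERP strength: at each time point it is the standard deviation of the voltage across all electrodes. A minimal sketch with illustrative array shapes (the data here are random placeholders):

```python
import numpy as np

def global_field_power(erp):
    """GFP time course from an ERP array of shape (n_electrodes, n_times).

    GFP(t) is the spatial standard deviation across electrodes at time t:
    large when the field is strong, irrespective of its topography.
    """
    return erp.std(axis=0)

# Hypothetical switch vs. stay comparison (64 channels, 500 time samples)
erp_switch = np.random.randn(64, 500)
erp_stay = np.random.randn(64, 500)
gfp_difference = global_field_power(erp_switch) - global_field_power(erp_stay)
```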

15.
Intrusive memories are a hallmark symptom of posttraumatic stress disorder (PTSD). They reflect excessive and uncontrolled retrieval of the traumatic memory. Acute elevations of cortisol are known to impair the retrieval of already stored memory information. Thus, continuous cortisol administration might help in reducing intrusive memories in PTSD. Strong perceptual priming for neutral stimuli associated with a “traumatic” context has been shown to be one important learning mechanism that leads to intrusive memories. However, the memory-modulating effects of cortisol have only been shown for explicit declarative memory processes. Thus, in our double-blind, placebo-controlled study we aimed to investigate whether cortisol influences perceptual priming of neutral stimuli that appeared in a “traumatic” context. Two groups of healthy volunteers (N = 160) watched either neutral or “traumatic” picture stories on a computer screen. Neutral objects were presented in between the pictures. Memory for these neutral objects was tested after 24 hours with a perceptual priming task and an explicit memory task. Prior to memory testing, half of the participants in each group received 25 mg of cortisol; the other half received placebo. In the placebo group, participants in the “traumatic” stories condition showed more perceptual priming for the neutral objects than participants in the neutral stories condition, indicating a strong perceptual priming effect for neutral stimuli presented in a “traumatic” context. In the cortisol group this effect was not present: participants in the neutral stories condition and participants in the “traumatic” stories condition showed comparable priming effects for the neutral objects. Our findings show that cortisol inhibits perceptual priming for neutral stimuli that appeared in a “traumatic” context. These findings indicate that cortisol influences PTSD-relevant memory processes and thus further support the idea that administration of cortisol might be an effective treatment strategy for reducing intrusive reexperiencing.

16.
Decoding human speech requires both perception and integration of brief, successive auditory stimuli that enter the central nervous system, as well as the allocation of attention to language-relevant signals. This study assesses the role of attention in processing rapid transient stimuli in adults and children. Cortical responses (EEG/ERPs), specifically mismatch negativity (MMN) responses, to paired tones (standard 100–100 Hz; deviant 100–300 Hz) separated by a 300, 70, or 10 ms silent gap (ISI) were recorded under Ignore and Attend conditions in 21 adults and 23 children (6–11 years old). In adults, an attention-related enhancement was found for all rate conditions and laterality effects (L > R) were observed. In children, two auditory discrimination-related peaks were identified from the difference wave (deviant minus standard): an early peak (eMMN) at about 100–300 ms indexing sensory processing, and a later peak (LDN) at about 400–600 ms, thought to reflect reorientation to the deviant stimuli or “second-look” processing. Results revealed differing patterns of activation and attention modulation for the eMMN in children as compared to the MMN in adults: the eMMN had a more frontal topography than in adults, and attention played a significantly greater role in children’s rate processing. The pattern of findings for the LDN was consistent with hypothesized mechanisms related to further processing of complex stimuli. The differences between the eMMN and LDN observed here support the premise that separate cognitive processes and mechanisms underlie these ERP peaks. These findings are the first to show that the eMMN and LDN differ under different temporal and attentional conditions, and that a more complete understanding of children’s responses to rapid successive auditory stimulation requires an examination of both peaks.
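Both peaks are read off the deviant-minus-standard difference wave, each in its own latency window. A minimal sketch that takes the window boundaries from the abstract and assumes everything else (single electrode, placeholder data):

```python
import numpy as np

def difference_wave_peak(erp_deviant, erp_standard, times, window):
    """Peak of the deviant-minus-standard difference wave in a latency window.

    erp_*  : 1-D arrays (one electrode), same length as times (seconds)
    window : (t_start, t_end) in seconds
    """
    diff = erp_deviant - erp_standard
    mask = (times >= window[0]) & (times <= window[1])
    i = np.argmax(np.abs(diff[mask]))  # largest deflection in the window
    return diff[mask][i], times[mask][i]

# Hypothetical usage with the windows named in the abstract
times = np.linspace(-0.1, 0.8, 901)
deviant, standard = np.random.randn(901), np.random.randn(901)
emmn_peak = difference_wave_peak(deviant, standard, times, (0.1, 0.3))  # eMMN
ldn_peak = difference_wave_peak(deviant, standard, times, (0.4, 0.6))   # LDN
```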

17.
Cognitive theories in visual attention and perception, categorization, and memory often critically rely on concepts of similarity among objects, and empirically require measures of “sameness” among their stimuli. For instance, a researcher may require similarity estimates among multiple exemplars of a target category in visual search, or targets and lures in recognition memory. Quantifying similarity, however, is challenging when everyday items are the desired stimulus set, particularly when researchers require several different pictures from the same category. In this article, we document a new multidimensional scaling database with similarity ratings for 240 categories, each containing color photographs of 16–17 exemplar objects. We collected similarity ratings using the spatial arrangement method. Reports include the multidimensional scaling solutions for each category, up to five dimensions, stress and fit measures, coordinate locations for each stimulus, and two new classifications. For each picture, we categorized the item's prototypicality, indexed by its proximity to other items in the space. We also classified pairs of images along a continuum of similarity, by assessing the overall arrangement of each MDS space. These similarity ratings will be useful to any researcher who wishes to control the similarity of experimental stimuli according to an objective quantification of “sameness.”
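Given a pairwise dissimilarity matrix from the spatial arrangement method, MDS solutions and their stress values can be computed with standard tooling; the sketch below uses scikit-learn's MDS on a made-up matrix, so the data and shapes are illustrative rather than taken from the database:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical symmetric dissimilarity matrix for 16 exemplars of one category
rng = np.random.default_rng(0)
d = rng.random((16, 16))
dissim = (d + d.T) / 2
np.fill_diagonal(dissim, 0.0)

# Fit solutions in one to five dimensions and report the stress of each
for n_dims in range(1, 6):
    mds = MDS(n_components=n_dims, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(dissim)    # (16, n_dims) coordinate locations
    print(n_dims, round(mds.stress_, 3))  # raw stress; lower means better fit
```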

18.
Social cues modulate the performance of communicative behaviors in a range of species, including humans, and such changes can make the communication signal more salient. In songbirds, males use song to attract females, and song organization can differ depending on the audience to which a male sings. For example, male zebra finches (Taeniopygia guttata) change their songs in subtle ways when singing to a female (directed song) compared with when they sing in isolation (undirected song), and some of these changes depend on altered neural activity from a specialized forebrain-basal ganglia circuit, the anterior forebrain pathway (AFP). In particular, variable activity in the AFP during undirected song is thought to actively enable syllable variability, whereas the lower and less-variable AFP firing during directed singing is associated with more stereotyped song. Consequently, directed song has been suggested to reflect a “performance” state, and undirected song a form of vocal motor “exploration.” However, this hypothesis predicts that directed–undirected song differences, despite their subtlety, should matter to female zebra finches, a prediction that had not been investigated. We tested female preferences for this natural variation in song in a behavioral approach assay, and we found that both mated and socially naive females could discriminate between directed and undirected song—and strongly preferred directed song. These preferences, which appeared to reflect attention especially to aspects of song variability controlled by the AFP, were enhanced by experience, as they were strongest for mated females responding to their mate's directed songs. We then measured neural activity using expression of the immediate early gene product ZENK, and found that social context and song familiarity differentially modulated the number of ZENK-expressing cells in telencephalic auditory areas. Specifically, the number of ZENK-expressing cells in the caudomedial mesopallium (CMM) was most affected by whether a song was directed or undirected, whereas the caudomedial nidopallium (NCM) was most affected by whether a song was familiar or unfamiliar. Together these data demonstrate that females detect and prefer the features of directed song, and they suggest that high-level auditory areas including the CMM are involved in this social perception.

19.
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones (= conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds, irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.

20.
Chronic pain, including chronic non-specific low back pain (CNSLBP), is often associated with body perception disturbances, but these have generally been assessed under static conditions. The objective of this study was to use a “virtual mirror” that scaled visual movement feedback to assess body perception during active movement in military personnel with CNSLBP (n = 15) as compared to healthy military control subjects (n = 15). Subjects performed a trunk flexion task while sitting and standing in front of a large screen displaying a full-body virtual mirror-image (avatar) in real time. Avatar movements were scaled to appear greater than, identical to, or smaller than the subjects’ actual movements. A total of 126 trials with 11 different scaling factors were pseudo-randomized across 6 blocks. After each trial, subjects had to decide whether the avatar’s movements were “greater” or “smaller” than their own movements. Based on this two-alternative forced choice paradigm, a psychophysical curve was fitted to the data for each subject, and several metrics were derived from this curve. In addition, task adherence (kinematics) and virtual reality immersion were assessed. Groups displayed a similar ability to discriminate between different levels of movement scaling. Still, subjects with CNSLBP showed abnormal performance and tended to overestimate their own movements (a right-shifted psychophysical curve). Subjects showed adequate task adherence, and on average virtual reality immersion was reported to be very good. In conclusion, these results extend previous work in patients with CNSLBP and point to an important relationship between body perception, movement and pain. As such, the assessment of body perception during active movement may offer new avenues for understanding and managing body perception disturbances and abnormal movement patterns in patients with pain.
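The psychophysical curve for a two-alternative forced choice task is commonly fitted with a logistic function whose midpoint, the point of subjective equality (PSE), indexes perceptual bias: a PSE above a scaling factor of 1 means the avatar must move more than the subject before being judged “greater”, consistent with overestimating one's own movement. A minimal sketch assuming that logistic form and made-up data:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(scale, pse, slope):
    """P(response 'avatar greater') as a function of the visual scaling factor."""
    return 1.0 / (1.0 + np.exp(-(scale - pse) / slope))

# Hypothetical data: 11 scaling factors, proportion of "greater" responses
scales = np.linspace(0.5, 1.5, 11)
p_greater = np.array([0.0, 0.05, 0.1, 0.15, 0.25, 0.4, 0.6, 0.8, 0.9, 0.95, 1.0])

(pse, slope), _ = curve_fit(logistic, scales, p_greater, p0=[1.0, 0.1])
print(f"PSE = {pse:.2f}")  # PSE > 1 would indicate overestimation of own movement
```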
