Similar Documents
20 similar documents found (search time: 78 ms).
1.

Background

Age-related macular degeneration (AMD) is a leading cause of legal blindness in the elderly in the industrialized world. While the immune system in the retina is likely to be important in AMD pathogenesis, the cell biology underlying the disease is incompletely understood. Clinical and basic science studies have implicated alterations in the retinal pigment epithelium (RPE) layer as a locus of early change. Also, retinal microglia, the resident immune cells of the retina, have been observed to translocate from their normal position in the inner retina to accumulate in the subretinal space close to the RPE layer in AMD eyes and in animal models of AMD.

Methodology/Principal Findings

In this study, we examined the effects of retinal microglia on RPE cells using 1) an in vitro model where activated retinal microglia are co-cultured with primary RPE cells, and 2) an in vivo mouse model where retinal microglia are transplanted into the subretinal space. We found that retinal microglia induced in RPE cells 1) changes in structure and distribution, 2) increased expression and secretion of pro-inflammatory, chemotactic, and pro-angiogenic molecules, and 3) an increased extent of choroidal neovascularization in the subretinal space in vivo.

Conclusions/Significance

These findings share similarities with important pathological features found in AMD and suggest the relevance of microglia-RPE interactions in AMD pathogenesis. We speculate that the migration of retinal microglia into the subretinal space in early stages of the disease induces significant changes in RPE cells that perpetuate further microglial accumulation, increase inflammation in the outer retina, and foster an environment conducive to the formation of neovascular changes responsible for much of the vision loss in advanced AMD.

2.
Medial temporal lobe structures including the hippocampus are implicated by separate investigations in both episodic memory and spatial function. We show that a single recurrent attractor network can store both the discrete memories that characterize episodic memory and the continuous representations that characterize physical space. Combining both types of representation in a single network is in fact necessary if both objects and their locations in space must be stored. We thus show that episodic memory and spatial theories of medial temporal lobe function can be combined in a unified model.

3.
An assumption inherent in many models of visual space is that the spatial coordinates of retinal cells implicitly give rise to the perceptual code for position. The results of the experiments reported here, in which it is shown that retinally non-veridical locations of contour elements are used by the visual system for contour-element binding, lend support to a different view. The visual system does not implicitly code position with reference to the labelled locations of retinal cells, but dynamically extracts spatial position from the aggregate result of local computations. These computations may include local spatial relationships between retinal cells, but are not confined to them; other computations, including position derived from local velocity cues, are combined to code the position of objects in the visual world.

4.
The spatial distortion hypothesis is one of several theories that explain certain aspects of neglect in patients with right parietal lesions. To determine whether a distorted representation of space can account for the performance of neglect patients in different visuospatial tasks, we asked 26 neglect patients (1) to bisect horizontal lines and (2) to compare the width of two horizontally aligned bars. A simple mathematical model compatible with the idea of a stationary distortion of represented space in egocentric coordinates explained the results of the line-bisection task. A second model with basically the same structure, compatible with the idea of a distorted egocentric representation based on a dynamic remapping of space, approximated the size-comparison data. These results support the view that the abnormalities observed in the line-bisection and size-comparison tasks are due to a distorted internal representation of the external world. Certain findings suggest that this distortion could be based on a dynamic mapping of space determined by the distribution of visuospatial attention. Received: 14 June 1999 / Accepted in revised form: 30 May 2001
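
As an illustration of the kind of stationary-distortion account described above, here is a minimal numerical sketch (not taken from the cited study): represented space is compressed on one side by an assumed factor, and the predicted bisection point is the physical location whose representation lies midway between the represented endpoints. The compression function, its parameter k, and the 20-deg line are illustrative assumptions.

```python
import numpy as np

def represented(x, k=1.6):
    """Toy anisotropic representation of egocentric space: positions left of
    the midline (x < 0) are compressed by a factor k.  The functional form
    and the value of k are illustrative assumptions, not fitted values."""
    x = np.asarray(x, dtype=float)
    return np.where(x < 0, x / k, x)

def predicted_bisection(left, right, k=1.6):
    """Physical point whose *represented* position lies midway between the
    represented positions of the two line endpoints."""
    target = 0.5 * (represented(left, k) + represented(right, k))
    xs = np.linspace(left, right, 10001)          # dense grid along the line
    return xs[np.argmin(np.abs(represented(xs, k) - target))]

# A 20-deg line centred on the body midline: the model predicts a rightward
# bisection error, as reported for neglect patients.
print(predicted_bisection(-10.0, 10.0))           # > 0, i.e. shifted rightward
```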

5.
Multiple sensory-motor maps located in the brainstem and the cortex are involved in spatial orientation. Guiding movements of the eyes, head, neck and arms, they provide an approximately linear relation between target distance and motor response. This involves especially the superior colliculus in the brainstem and the parietal cortex, where the natural frame of reference follows from the retinal representation of the environment. A model of navigation is presented that is based on the modulation of activity in those sensory-motor maps. The mechanism chosen was gain-field modulation, a process of multimodal integration that has been demonstrated in the parietal cortex and superior colliculus, and it was implemented as attraction to visual cues (colour). Depending on the metric of the sensory-motor map, the relative attraction to these cues, implemented as gain-field modulation, and their positions define a fixed-point attractor on the plane for locomotive behaviour. The implementation used Kohonen networks in a variant of reinforcement learning, which are well suited to generating such topographically organized sensory-motor maps with roughly linear visuo-motor response characteristics. It was then investigated how such an implicit coding of target positions by gain-field parameters might be represented in the hippocampal formation, and under what conditions a direction-invariant space representation can arise from such retinotopic representations of multiple cues. Information about orientation in the plane, as could be provided by head direction cells, appeared to be necessary for unambiguous space representation in our model, in agreement with physiological experiments. With this information, Gauss-shaped “place cells” could be generated; however, the representation of the spatial environment was repetitive and clustered, and single cells were always tuned to the gain-field parameters as well.
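
A small sketch, under strong simplifications, of one idea in this abstract: relative attraction values to visual cues define a fixed-point attractor on the plane for locomotion. The linear weighted-attraction rule, the cue positions, and the gain values below are assumptions chosen for illustration; they stand in for, and do not reproduce, the paper's gain-field/Kohonen implementation.

```python
import numpy as np

# Cue positions on the plane and their relative attraction ("gain") values.
# This linear weighted-attraction rule is only a stand-in illustration for
# the paper's gain-field / Kohonen-map implementation.
cues  = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
gains = np.array([1.0, 2.0, 1.0])

def step(pos, dt=0.1):
    # Velocity is the gain-weighted sum of attractions toward each cue.
    return pos + dt * np.sum(gains[:, None] * (cues - pos), axis=0)

pos = np.array([6.0, 6.0])                 # arbitrary starting position
for _ in range(200):
    pos = step(pos)

fixed_point = gains @ cues / gains.sum()   # analytic attractor of this rule
print(pos, fixed_point)                    # the trajectory converges to it
```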

6.
The mapping of retinal space onto the striate cortex of some mammals can be approximated by a log-polar function. It has been proposed that this mapping is of functional importance for scale- and rotation-invariant pattern recognition in the visual system. An exact log-polar transform converts centered scaling and rotation into translations. A subsequent translation-invariant transform, such as the absolute value of the Fourier transform, thus generates overall size- and rotation-invariance. In our model, the translation-invariance is realized via the R-transform. This transform can be executed by simple neural networks, and it does not require the complex computations of the Fourier transform, used in Mellin-transform size-invariance models. The logarithmic space distortion and differentiation in the first processing stage of the model is realized via Mexican hat filters whose diameter increases linearly with eccentricity, similar to the characteristics of the receptive fields of retinal ganglion cells. Except for some special cases, the model can explain object recognition independent of size, orientation and position. Some general problems of Mellin-type size-invariance models, which also apply to our model, are discussed.
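
A minimal sketch of the log-polar idea summarized above, assuming a synthetic test pattern and using the Fourier magnitude (the Mellin-type variant also mentioned in the abstract) rather than the paper's R-transform for the translation-invariant stage, simply because it is easy to reproduce with numpy. Centred scaling and rotation of the pattern become (approximately circular) shifts of the log-polar map, so the Fourier magnitude is roughly unchanged.

```python
import numpy as np

def pattern(x, y):
    """Fixed 2-D test pattern (a sum of Gaussian blobs); purely illustrative."""
    blobs = [(0.4, 0.1, 0.15), (-0.3, 0.3, 0.10), (0.0, -0.4, 0.20)]
    return sum(np.exp(-((x - bx) ** 2 + (y - by) ** 2) / (2 * s ** 2))
               for bx, by, s in blobs)

def logpolar_samples(scale=1.0, angle=0.0, n_r=64, n_t=64):
    """Sample the (scaled, rotated) pattern on a centred log-polar grid.
    Centred scaling and rotation of the pattern become shifts of this array."""
    log_r = np.linspace(-3.0, 0.0, n_r)                  # log of radius
    theta = np.linspace(0.0, 2 * np.pi, n_t, endpoint=False)
    R, T = np.meshgrid(np.exp(log_r), theta, indexing='ij')
    x, y = R * np.cos(T), R * np.sin(T)
    # apply the inverse similarity transform to the sample coordinates
    c, s = np.cos(-angle), np.sin(-angle)
    xs, ys = (c * x - s * y) / scale, (s * x + c * y) / scale
    return pattern(xs, ys)

a = np.abs(np.fft.fft2(logpolar_samples()))
b = np.abs(np.fft.fft2(logpolar_samples(scale=1.3, angle=0.7)))
# The Fourier magnitudes of the two log-polar maps are nearly identical, i.e.
# the descriptor is (approximately) invariant to centred scaling and rotation.
print(np.corrcoef(a.ravel(), b.ravel())[0, 1])           # close to 1
```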

7.
Animals are able to update their knowledge about their current position solely by integrating the speed and the direction of their movement, which is known as path integration. Recent discoveries suggest that grid cells in the medial entorhinal cortex might perform some of the essential underlying computations of path integration. However, a major concern about path integration is that, because the measurement of speed and direction is inaccurate, the representation of position becomes increasingly unreliable. In this paper, we study how allothetic inputs can be used to continually correct the accumulating error in the path integrator system. We set up a model of a mobile agent equipped with the entorhinal representation of idiothetic (grid cell) and allothetic (visual cell) information and simulated its place learning in a virtual environment. Due to competitive learning, a robust hippocampal place code emerges rapidly in the model. At the same time, the hippocampo-entorhinal feedback connections are modified via Hebbian learning in order to allow hippocampal place cells to influence the attractor dynamics in the entorhinal cortex. We show that continuous feedback from the integrated hippocampal place representation is able to stabilize the grid cell code. This research was supported by the EU Framework 6 ICEA project (IST-4-027819-IP).
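
A toy sketch of the general problem and remedy described above, not of the paper's grid-cell/hippocampal model: noisy velocity integration accumulates error without bound, while occasional noisy allothetic fixes, blended in with a small gain, keep the error bounded. All noise levels, gains, and the trajectory are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 5000, 0.02
true_pos  = np.zeros(2)
pi_only   = np.zeros(2)   # pure path integration (idiothetic input only)
corrected = np.zeros(2)   # path integration plus allothetic correction

err_pi, err_corr = [], []
for t in range(T):
    v = np.array([np.cos(0.01 * t), np.sin(0.013 * t)])   # arbitrary trajectory
    true_pos = true_pos + v * dt

    noisy_v = v + rng.normal(scale=0.3, size=2)   # imperfect speed/direction
    pi_only   = pi_only   + noisy_v * dt
    corrected = corrected + noisy_v * dt

    # Occasional allothetic (visual) fix, itself noisy; the estimate is nudged
    # toward it -- a crude stand-in for the hippocampal feedback in the paper.
    if t % 50 == 0:
        visual_fix = true_pos + rng.normal(scale=0.1, size=2)
        corrected += 0.2 * (visual_fix - corrected)

    err_pi.append(np.linalg.norm(pi_only - true_pos))
    err_corr.append(np.linalg.norm(corrected - true_pos))

print(np.mean(err_pi[-1000:]))    # drift error, typically keeps growing
print(np.mean(err_corr[-1000:]))  # stays bounded thanks to the corrections
```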

8.
Place cells in the hippocampus of higher mammals are critical for spatial navigation. Recent modeling clarifies how this may be achieved by the way grid cells in the medial entorhinal cortex (MEC) project to place cells. Grid cells exhibit hexagonal grid firing patterns across space at multiple spatial scales along the MEC dorsoventral axis. Signals from grid cells of multiple scales combine adaptively to activate place cells that represent much larger spaces than grid cells. But how do grid cells learn to fire at multiple positions that form a hexagonal grid, and with spatial scales that increase along the dorsoventral axis? In vitro recordings of medial entorhinal layer II stellate cells have revealed subthreshold membrane potential oscillations (MPOs) whose temporal periods, and time constants of excitatory postsynaptic potentials (EPSPs), both increase along this axis. Slower (faster) subthreshold MPOs and slower (faster) EPSPs correlate with larger (smaller) grid spacings and field widths. A self-organizing map neural model explains how the anatomical gradient of grid spatial scales can be learned by cells that respond more slowly along the gradient to their inputs from stripe cells of multiple scales, which perform linear velocity path integration. The model cells also exhibit MPO frequencies that covary with their response rates. The gradient in intrinsic rhythmicity is thus not compelling evidence for oscillatory interference as a mechanism of grid cell firing. A response rate gradient combined with input stripe cells that have normalized receptive fields can reproduce all known spatial and temporal properties of grid cells along the MEC dorsoventral axis. This spatial gradient mechanism is homologous to a gradient mechanism for temporal learning in the lateral entorhinal cortex and its hippocampal projections. Spatial and temporal representations may hereby arise from homologous mechanisms, thereby embodying a mechanistic “neural relativity” that may clarify how episodic memories are learned.

9.
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological component N400, which is sensitive to the ease of semantic integration of a word with its previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, information coming from both channels is integrated, recruiting brain areas such as the left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are specific to gestures or are shared with actions, other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.

10.
The cortical regions involved in the different stages of speech production are relatively well established, but their spatio-temporal dynamics remain poorly understood. In particular, the available studies have characterized neural events with respect to the onset of the stimulus triggering a verbal response. The core aspect of language production, however, is not perception but action. In this context, the most relevant question may not be how long after a stimulus brain events happen, but rather how long before the production act they occur. We investigated speech production-related brain activity time-locked to vocal onset, in addition to the common stimulus-locked approach. We report the detailed temporal interplay between medial and left frontal activities occurring shortly before vocal onset, which we interpret as reflections of word selection and word production processes, respectively. This medial-lateral organization is in line with that described in non-linguistic action control, suggesting that similar processes are at play in word production and non-linguistic action production. This novel view of the brain dynamics underlying word production provides a useful background for future investigations of the spatio-temporal brain dynamics that lead to the production of verbal responses.
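
A minimal sketch of the two time-locking schemes contrasted above, applied to synthetic single-channel data: averaging trials aligned to stimulus onset smears a component whose latency is tied to vocal onset, whereas averaging aligned to vocal onset recovers it. The sampling rate, trial counts, and the injected "production" component are assumptions, not properties of the cited data.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                                   # sampling rate in Hz (assumed)
n_trials, trial_len = 60, 4 * fs           # 4-s trials

# Synthetic single-channel data: a "production" component that always peaks
# 200 ms before vocal onset, with naming latencies that vary across trials.
stim_onset  = fs                                            # stimulus at 1.0 s
vocal_onset = stim_onset + rng.integers(int(0.5 * fs), int(1.2 * fs), n_trials)
t = np.arange(trial_len) / fs
trials = rng.normal(scale=1.0, size=(n_trials, trial_len))
for i, v in enumerate(vocal_onset):
    centre = (v - int(0.2 * fs)) / fs                       # 200 ms before voicing
    trials[i] += 3.0 * np.exp(-((t - centre) ** 2) / (2 * 0.05 ** 2))

def locked_average(trials, events, pre, post):
    """Average epochs of [-pre, +post) samples around per-trial event indices."""
    return np.mean([tr[e - pre:e + post] for tr, e in zip(trials, events)], axis=0)

stim_locked  = locked_average(trials, np.full(n_trials, stim_onset), 0, 2 * fs)
vocal_locked = locked_average(trials, vocal_onset, fs, fs)

# Latency jitter smears the component in the stimulus-locked average, but it
# stands out sharply in the response-locked average, ~200 ms before onset.
print(stim_locked.max(), vocal_locked.max())
```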

11.
This work presents unified analyses of spatial and temporal visual information processing in a feed-forward network of neurons that obey membrane, or shunting, equations. The feed-forward shunting network possesses properties that make it well suited for processing of static, spatial information. However, it is shown here that the same properties of the shunting network that lead to good spatial processing imply poor temporal processing characteristics. This article presents an extension of the feed-forward shunting network model that solves this problem by means of preprocessing layers. The anatomical interpretation of the resulting model is structurally analogous to recently discovered data on a retinal circuit connecting cones to retinal ganglion cells through pairs of push-pull bipolar cells. Mathematical analysis of the lumped model leads to the hypothesis that X and Y retinal ganglion cells may consist of a single mechanism acting in different parameter ranges. This hypothesis is confirmed in the companion article, wherein the model, in conjunction with a nonlinear temporal adaptation mechanism, is used to reproduce experimental data from both X and Y cells by simple changes in morphological and physiological parameters.
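
For readers unfamiliar with shunting (membrane) equations, here is a minimal numerical sketch of a single feed-forward shunting unit with constant inputs; the parameter values are arbitrary assumptions. The closed-form equilibrium exhibits the divisive normalization that underlies the good static spatial properties discussed above.

```python
import numpy as np

def shunting_response(E, I, A=1.0, B=1.0, D=0.5, dt=0.001, steps=20000):
    """Integrate dx/dt = -A*x + (B - x)*E - (x + D)*I for constant inputs.
    E and I are excitatory/inhibitory inputs; A, B, D are the decay rate and
    the upper/lower saturation bounds.  Parameter values are illustrative."""
    x = 0.0
    for _ in range(steps):
        x += dt * (-A * x + (B - x) * E - (x + D) * I)
    return x

E, I = 8.0, 3.0
x_numeric = shunting_response(E, I)
x_steady  = (1.0 * E - 0.5 * I) / (1.0 + E + I)   # closed-form equilibrium
print(x_numeric, x_steady)                        # the two agree

# Doubling both inputs barely changes the response: the shunting form divides
# by the total input, which favours static spatial processing but, as the
# article argues, limits temporal processing without extra preprocessing stages.
print(shunting_response(2 * E, 2 * I))
```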

12.
All known photoreceptor cells adapt to constant light stimuli, fading the retinal image when exposed to an immobile visual scene. Counter-strategies are therefore necessary to prevent blindness, and in mammals this is accomplished by fixational eye movements. Cubomedusae occupy a key position for understanding the evolution of complex visual systems, and their eyes are presumably subject to the same adaptive problems as the vertebrate eye, yet they lack motor control of their visual system. The morphology of the visual system of cubomedusae ensures a constant orientation of the eyes and a clear division of the visual field, but thereby also a constant retinal image when exposed to stationary visual scenes. Here we show that the bell contractions used for swimming in the medusae refresh the retinal image in the upper lens eye of Tripedalia cystophora. This strongly suggests that strategies comparable to fixational eye movements evolved at the earliest metazoan stage to compensate for this intrinsic property of the photoreceptors. Since the timing and amplitude of the rhopalial movements concur with the spatial and temporal resolution of the eye, this circumvents the need for post-processing in the central nervous system to remove image blur.

13.
The topographic projection pattern formed by the retinal ganglion cell axons in the tectum of the lower vertebrate appears to require positional cues that guide the optic nerve fibers to their appropriate targets. One approach to understanding these positional cues or "positional information" has been to investigate changes in the pattern of the retinotectal projection after surgical manipulation of the embryonic eyebud. Analysis of these apparent changes in the patterns of positional information in the eye, termed "pattern regulation," may provide clues to both the nature of positional information and the mechanisms by which it is assigned to cells in the eyebud. Here we examine pattern regulation in the Xenopus visual system following the replacement of the temporal half of a right eyebud with the temporal half of a left eyebud. This manipulation requires that the left half-eyebud be inverted along its dorsoventral axis. Electrophysiological maps of these compound eyes in postmetamorphic frogs reveal regulated maps; the cells in the temporal half of the NrTl eye project to the tectum with a dorsoventral polarity appropriate for their position in the host eye and not appropriate for the original positions of the grafted cells in the donor eyebud. Paradoxically, the regulated patterns are not apparent in the projections of the original grafted eyebud cells during early larval development. Using fiber-tracing and electrophysiological mapping techniques, we now show that the regulated patterns appear gradually in the projections made by peripheral retinal cells added during mid-larval development. Because the regulation occurs relatively late in development and probably only in the peripheral retinal cells, simple models of epimorphic or morphallactic regulation do not appear to fit this system. Thus, new or more complex models must be invoked to explain the phenomenon of pattern regulation in the developing visual system of Xenopus.

14.
Memory for events and their spatial context: models and experiments
The computational role of the hippocampus in memory has been characterized as: (i) an index to disparate neocortical storage sites; (ii) a time-limited store supporting neocortical long-term memory; and (iii) a content-addressable associative memory. These ideas are reviewed and related to several general aspects of episodic memory, including the differences between episodic, recognition and semantic memory, and whether hippocampal lesions differentially affect recent or remote memories. Some outstanding questions remain, such as: what characterizes episodic retrieval as opposed to other forms of read-out from memory; what triggers the storage of an event memory; and what are the neural mechanisms involved? To address these questions a neural-level model of the medial temporal and parietal roles in retrieval of the spatial context of an event is presented. This model combines the idea that retrieval of the rich context of real-life events is a central characteristic of episodic memory, and the idea that medial temporal allocentric representations are used in long-term storage while parietal egocentric representations are used to imagine, manipulate and re-experience the products of retrieval. The model is consistent with the known neural representation of spatial information in the brain, and provides an explanation for the involvement of Papez's circuit in both the representation of heading direction and in the recollection of episodic information. Two experiments relating to the model are briefly described. A functional neuroimaging study of memory for the spatial context of life-like events in virtual reality provides support for the model's functional localization. A neuropsychological experiment suggests that the hippocampus does store an allocentric representation of spatial locations.

15.
In the mammalian visual system, retinal axons undergo temporal and spatial rearrangements as they project bilaterally to targets in the brain. Retinal axons cross the neuraxis to form the optic chiasm on the hypothalamus in a position defined by overlapping domains of regulatory gene expression. However, the downstream molecules that direct these processes remain largely unknown. Here we use a novel in vitro paradigm to study possible roles of the Eph family of receptor tyrosine kinases in chiasm formation. In vivo, Eph receptors and their ligands distribute in complex patterns in the retina and hypothalamus. In vitro, retinal axons are inhibited by reaggregates of isolated hypothalamic, but not dorsal diencephalic or cerebellar cells. Furthermore, temporal retinal neurites are more inhibited than nasal neurites by hypothalamic cells. Addition of soluble EphA5-Fc to block Eph "A" subclass interactions decreases both the inhibition and the differential response of retinal neurites by hypothalamic reaggregates. These data show that isolated hypothalamic cells elicit specific, position-dependent inhibitory responses from retinal neurites in culture. Moreover, these responses are mediated, in part, by Eph interactions. Together with the in vivo distributions, these data suggest possible roles for Eph family members in directing retinal axon growth and/or reorganization during optic chiasm formation.

16.
An important task of the brain is to represent the outside world. It is unclear how the brain may do this, however, as it can only rely on neural responses and has no independent access to external stimuli in order to “decode” what those responses mean. We investigate what can be learned about a space of stimuli using only the action potentials (spikes) of cells with stereotyped—but unknown—receptive fields. Using hippocampal place cells as a model system, we show that one can (1) extract global features of the environment and (2) construct an accurate representation of space, up to an overall scale factor, that can be used to track the animal's position. Unlike previous approaches to reconstructing position from place cell activity, this information is derived without knowing place fields or any other functions relating neural responses to position. We find that simply knowing which groups of cells fire together reveals a surprising amount of structure in the underlying stimulus space; this may enable the brain to construct its own internal representations.
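
A toy sketch of the core observation, with simulated data rather than recordings: build a co-firing matrix from spike trains alone (no place fields, no positions) and check, using the hidden simulation parameters only for validation, that cells that co-fire more often have nearer place-field centres. The 1-D track, tuning widths, and firing probabilities are assumptions; the cited paper's topological analysis is considerably richer than this.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, sigma = 30, 0.05
centres = np.sort(rng.uniform(0, 1, n_cells))   # hidden place-field centres

# Simulate a random walk on a 1-D track and Bernoulli spiking per time bin.
pos, positions = 0.5, []
for _ in range(20000):
    pos = float(np.clip(pos + rng.normal(scale=0.01), 0, 1))
    positions.append(pos)
positions = np.array(positions)
rates  = np.exp(-(positions[:, None] - centres[None, :]) ** 2 / (2 * sigma ** 2))
spikes = rng.random(rates.shape) < 0.3 * rates      # shape: (time bins, cells)

# "Decoder" side: use ONLY which cells fire together within the same bin.
cofire = (spikes.T.astype(float) @ spikes.astype(float)) / len(positions)

# Validation only: pairs that co-fire more should have nearer (hidden) field
# centres.  The true centres are never used to build `cofire` itself.
i, j = np.triu_indices(n_cells, k=1)
true_dist = np.abs(centres[i] - centres[j])
print(np.corrcoef(cofire[i, j], true_dist)[0, 1])   # clearly negative
```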

17.
We studied the linear and nonlinear temporal response properties of simple cells in cat visual cortex by presenting at single positions in the receptive field an optimally oriented bar stimulus whose luminance was modulated in a random, binary fashion. By cross-correlating a cell's response with the input, it was possible to obtain the zeroth-, first-, and second-order Wiener kernels at each receptive-field (RF) location. Simple cells showed pronounced nonlinear temporal properties as revealed by the presence of prominent second-order kernels. A more conventional type of response histogram was also calculated by time-locking a histogram on the occurrence of the desired stimulus in the random sequence. A comparison of the time course of this time-locked response with that of the kernel prediction indicated that nonlinear temporal effects of order higher than two are unimportant. The temporal properties of simple cells were well represented by a cascade model composed of a linear filter followed by a static nonlinearity. These modelling results suggested that for simple cells, the nonlinearity occurs late and probably is a soft threshold associated with the spike-generating mechanism of the cortical cell itself. This result is surprising in view of the known threshold nonlinearities in preceding lateral geniculate and retinal neurons. It suggests that geniculocortical connectivity cancels the earlier nonlinearities to create a highly linear representation inside cortical simple cells. This work comprises a portion of a PhD thesis submitted by the first author. This study was supported in part by NIH Grants EY04630 and EY06679 to R.C.E., and EY01319 (Core Grant) to the Center for Visual Science.
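
A small sketch, on synthetic data, of the cross-correlation analysis and the linear–nonlinear (LN) cascade interpretation described above: a model cell is built as a linear temporal filter followed by a soft threshold, stimulated with binary white noise, and the first-order kernel is estimated by cross-correlating the spike counts with the stimulus. The filter shape, threshold, and spike-rate scaling are assumptions, not values from the cited recordings.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 100, 200000                        # 10-ms bins; values are assumptions
stim = rng.choice([-1.0, 1.0], size=n)     # binary random luminance modulation

# "Ground-truth" LN model cell: linear temporal filter -> static soft threshold.
taus = np.arange(30) / fs
h_true = np.sin(2 * np.pi * 4 * taus) * np.exp(-taus / 0.05)   # biphasic filter
drive  = np.convolve(stim, h_true)[:n]
rate   = np.maximum(drive + 0.2, 0.0)      # rectification = the late nonlinearity
spikes = rng.poisson(rate * 0.5)           # spike counts per bin

# First-order kernel estimate: cross-correlate the response with the stimulus.
# For white-noise input this recovers the linear stage up to a scale factor.
lags  = np.arange(30)
h_est = np.array([np.mean(spikes[k:] * stim[:n - k]) for k in lags])

h_est /= np.linalg.norm(h_est)             # compare shapes, not amplitudes
print(np.corrcoef(h_est, h_true / np.linalg.norm(h_true))[0, 1])   # close to 1
```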

18.
The aim of the present study was to test the functional relevance of the spatial concepts UP and DOWN for words that use these concepts either literally (space) or metaphorically (time, valence). Functional relevance would imply a symmetrical relationship between the spatial concepts and words related to them: processing such words should activate the related spatial concepts, and activating the concepts should in turn ease the retrieval of a related word. To test the latter, participants' body position was manipulated to either an upright or a head-down tilted orientation in order to activate the related spatial concept. Afterwards, in a within-subject design, participants produced previously memorized words from the concepts space, time and valence in time with a metronome. All words were related either to the spatial concept UP or to DOWN. The results, including Bayesian analyses, show (1) a significant interaction between body position and words using the concepts UP and DOWN literally, (2) a marginally significant interaction between body position and temporal words and (3) no effect of body position for valence words. However, post-hoc analyses suggest no difference between experiments. The authors therefore concluded that integrating sensorimotor experiences is indeed of functional relevance for all three concepts of space, time and valence, although the strength of this functional relevance depends on how closely words are linked to mental concepts representing vertical space.

19.
Saccadic eye movements and fixations are the behavioral means by which we visually sample text during reading. Human oculomotor control is governed by a complex neurophysiological system involving the brain stem, superior colliculus, and several cortical areas. A very widely held belief among researchers investigating primate vision is that the oculomotor system serves to orient the visual axes of both eyes to fixate the same target point in space. It is argued that such precise positioning of the eyes is necessary to place images on corresponding retinal locations, such that on each fixation a single, nondiplopic, visual representation is perceived. Vision works actively through a continual sampling process involving saccades and fixations. Here we report that during normal reading, the eyes do not always fixate the same letter within a word. We also demonstrate that saccadic targeting is yoked and based on a unified cyclopean percept of a whole word since it is unaffected if different word parts are delivered exclusively to each eye via a dichoptic presentation technique. These two findings together suggest that the visual signal from each eye is fused at a very early stage in the visual pathway, even when the fixation disparity is greater than one character (0.29 deg), and that saccade metrics for each eye are computed on the basis of that fused signal.

20.
Axonal growth cones originating from explants of embryonic chick retina were simultaneously exposed to two different cell monolayers and their preference for particular monolayers as a substrate for growth was determined. These experiments show that: (1) nasal retinal axons can distinguish between retinal and tectal cells; (2) temporal retinal axons can distinguish between tectal cells that originated from different positions within the tectum along the antero-posterior axis; (3) axons originating from nasal parts of the retina have different recognizing capabilities from temporal axons; (4) the property of the tectal cells, which is attractive for temporal axons, has a graded distribution along the antero-posterior axis of the tectum; and (5) this gradient also exists in non-innervated tecta.
