Similar Articles
20 similar articles found.
1.
ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of two-dimensional images throughout the specimen. Current software applications reconstruct the three-dimensional (3D) image and render it as a two-dimensional projection onto a computer screen, where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade VR systems to fully immerse the user in the 3D cellular image. In this virtual environment, the user can (1) adjust image viewing parameters without leaving the virtual space, (2) reach out and grab the image to quickly rotate and scale it to focus on key features, and (3) interact with other users in a shared virtual space, enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourselves. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits.
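The data structure underlying this workflow is the confocal z-stack. Below is a minimal Python sketch that builds a synthetic stack, computes the 2D maximum-intensity projection that a conventional screen viewer reduces it to, and keeps the full 3D volume that a VR viewer would render; the synthetic data and the tifffile loading hint are illustrative assumptions, not part of ConfocalVR itself.

```python
# Minimal sketch: a confocal z-stack as a 3D volume versus its flat 2D projection.
# The synthetic blob stands in for real data; with real files one might use
# tifffile.imread("stack.tif") (file name and package are assumptions).
import numpy as np

z, y, x = 32, 256, 256
zz, yy, xx = np.meshgrid(np.arange(z), np.arange(y), np.arange(x), indexing="ij")
# Synthetic fluorescent blob in the middle of the specimen volume.
stack = np.exp(-(((zz - 16) / 8) ** 2 + ((yy - 128) / 40) ** 2 + ((xx - 128) / 40) ** 2))

# Conventional viewers collapse the volume, e.g. with a maximum-intensity projection.
mip = stack.max(axis=0)                      # shape (y, x): what a 2D screen shows

# A VR viewer keeps the full (z, y, x) volume; here it is only normalized to [0, 1].
volume = (stack - stack.min()) / (stack.max() - stack.min() + 1e-9)
print("volume:", volume.shape, "projection:", mip.shape)
```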

2.
The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning.
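To make the proposed mechanism concrete, here is a toy two-unit rate model (an illustrative reduction, not the authors' published equations) in which a top-down gain increase and top-down-recruited recurrent inhibition sharpen a small brightness difference without any change to the feedforward input.

```python
# Toy rate model: two V1 units receive fixed feedforward drive; task-specific
# top-down signals set only a gain factor g and the amount of recruited
# recurrent inhibition w_inh, while the feedforward input itself is unchanged.
import numpy as np

def steady_state(ff_input, g=1.0, w_inh=0.0, steps=200, dt=0.1):
    """Relax a 2-unit rate network to steady state."""
    r = np.zeros(2)
    for _ in range(steps):
        inhibition = w_inh * r[::-1]               # mutual recurrent inhibition
        drive = g * ff_input - inhibition          # top-down scales the gain
        r += dt * (-r + np.maximum(drive, 0.0))    # rectified-linear dynamics
    return r

ff = np.array([1.0, 1.1])                          # two nearly equal line segments
print("baseline           :", steady_state(ff))
print("top-down modulation:", steady_state(ff, g=2.0, w_inh=0.5))
# With higher gain plus recruited inhibition, the small brightness difference
# is amplified, illustrating improved discrimination without any rewiring.
```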

3.
We report a versatile approach for covalent surface-assembly of proteins onto selected electrode patterns of pre-fabricated devices. Our approach is based on electro-assembly of the aminopolysaccharide chitosan scaffold as a stable thin film onto patterned conductive surfaces of the device, followed by covalent assembly of the target protein onto the scaffold surface upon enzymatic activation of the protein's "pro-tag." For our demonstration, the model target protein is green fluorescent protein (GFP) genetically fused with a pentatyrosine pro-tag at its C-terminus, which assembles both onto two-dimensional chips and within fully packaged microfluidic devices in situ and under flow. Our surface-assembly approach enables spatial selectivity and orientational control under mild experimental conditions. We believe that our integrated approach, which harnesses genetic manipulation, in situ enzymatic activation, and electro-assembly, is advantageous for a wide variety of bioMEMS and biosensing applications that require facile "biofunctionalization" of microfabricated devices.

4.
Visual attention appears to modulate cortical neurodynamics and synchronization through various cholinergic mechanisms. To study these mechanisms, we have developed a neural network model of visual cortical area V4 based on psychophysical, anatomical, and physiological data. With this model, we aim to link selective visual information processing to neural circuits within V4, to bottom-up sensory and top-down attention input pathways, and to cholinergic modulation from the prefrontal lobe. We investigate the cellular and network mechanisms underlying recent analytical results from visual attention experiments. Our model reproduces the experimental finding that attention to a stimulus increases gamma-frequency synchronization in the superficial layers. Computer simulations and spike-triggered average (STA) power analysis further demonstrate the distinct effects of the different cholinergic mechanisms of attentional modulation.
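The STA power analysis mentioned above can be sketched in a few lines: average the field potential around each spike and measure the gamma-band power of that spike-triggered average. The data below are synthetic placeholders and the parameters are assumptions, not the authors' settings.

```python
# Sketch of spike-triggered average (STA) power analysis on synthetic data.
import numpy as np
from scipy.signal import welch

fs = 1000                                         # sampling rate in Hz (assumed)
rng = np.random.default_rng(3)
lfp = rng.standard_normal(60 * fs)                # one minute of a stand-in field potential
spikes = np.sort(rng.integers(fs, len(lfp) - fs, size=500))   # spike sample indices

window = np.arange(-100, 101)                     # +/-100 ms around each spike
sta = np.mean([lfp[t + window] for t in spikes], axis=0)

# Gamma-band power of the STA is one way to quantify spike-field synchronization.
freqs, power = welch(sta, fs=fs, nperseg=len(sta))
gamma_power = power[(freqs >= 30) & (freqs <= 80)].sum()
print("gamma-band STA power:", gamma_power)
```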

5.
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom-up interactions, for example, in other sensory modalities.
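The inference machinery named here, loopy belief propagation, can be illustrated on a much smaller scale. The sketch below runs sum-product message passing on a three-variable pairwise model containing a loop; the potentials and the synchronous update schedule are arbitrary choices for illustration, not the HMAX-scale network described in the paper.

```python
# Minimal sketch of loopy (sum-product) belief propagation on a tiny pairwise
# model with a cycle; illustrative only, far smaller than the paper's network.
import numpy as np

# Three binary variables arranged in a loop: 0-1, 1-2, 2-0.
edges = [(0, 1), (1, 2), (2, 0)]
unary = [np.array([0.7, 0.3]),           # local evidence for each variable
         np.array([0.4, 0.6]),
         np.array([0.5, 0.5])]
pair = np.array([[2.0, 1.0],             # shared pairwise potential favoring agreement
                 [1.0, 2.0]])

# messages[(i, j)] = message from variable i to variable j
messages = {(i, j): np.ones(2) for i, j in edges + [(j, i) for i, j in edges]}

for _ in range(50):                       # iterate a fixed number of sweeps
    new = {}
    for (i, j) in messages:
        # product of the unary term and all incoming messages except the one from j
        incoming = unary[i].copy()
        for (k, l) in messages:
            if l == i and k != j:
                incoming *= messages[(k, l)]
        msg = pair.T @ incoming           # marginalize over variable i's states
        new[(i, j)] = msg / msg.sum()     # normalize for numerical stability
    messages = new

beliefs = []
for i in range(3):
    b = unary[i].copy()
    for (k, l) in messages:
        if l == i:
            b *= messages[(k, l)]
    beliefs.append(b / b.sum())
print(beliefs)                            # approximate marginals despite the loop
```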

6.
Fully immersive, stereoscopic virtual environments (VEs) are a powerful multimedia tool for laboratory-based simulations of distinct scenarios, including stressful situations resembling reality. Thus far, cortisol secretion as a neuroendocrine parameter of stress has not been evaluated within a virtual reality (VR)-based paradigm. In this study, 94 healthy volunteers were subjected to a provocative VR paradigm and a cognitive stress task; provocative in this context means that the VE was deliberately designed to provoke physiological reactions (cortisol secretion) in its users. We tested (a) whether a fully dynamic VE, as opposed to a static VE, can be regarded as a stressor, and (b) whether such a fully dynamic VE can modify the response to a cognitive stressor additionally presented within the VE. Furthermore, possible gender-related effects on cortisol responses were assessed. A significant cortisol increase was observed only after the combined application of the fully dynamic VE and the cognitive stressor, not after application of the dynamic VE or the cognitive stressor alone. Cortisol reactivity was greater in men than in women. We conclude that a fully dynamic VE does not affect cortisol secretion per se, but it increases cortisol responses in a dual-task paradigm that includes performance of a stressful mental task. This provides a basis for the application of VR-based technologies in neuroscience research, including the assessment of human hypothalamus-pituitary-adrenal (HPA) axis regulation.

7.
Visualization of scientific data is crucial not only for scientific discovery but also to communicate science and medicine to both experts and a general audience. Until recently, we have been limited to visualizing the three-dimensional (3D) world of biology in two dimensions. Renderings of 3D cells are still traditionally displayed using two-dimensional (2D) media, such as on a computer screen or paper. However, the advent of consumer-grade virtual reality (VR) headsets such as Oculus Rift and HTC Vive means it is now possible to visualize and interact with scientific data in a 3D virtual world. In addition, new microscopic methods provide an unprecedented opportunity to obtain new 3D data sets. In this perspective article, we highlight how we have used cutting-edge imaging techniques to build a 3D virtual model of a cell from serial block-face scanning electron microscope (SBEM) imaging data. This model allows scientists, students and members of the public to explore and interact with a “real” cell. Early testing of this immersive environment indicates a significant improvement in students’ understanding of cellular processes and points to a new future of learning and public engagement. In addition, we speculate that VR can become a new tool for researchers studying cellular architecture and processes by populating VR models with molecular data.

8.
9.
Blinks profoundly interrupt visual input but are rarely noticed, perhaps because of blink suppression, a visual-sensitivity loss that begins immediately prior to blink onset. Blink suppression is thought to result from an extra-retinal signal that is associated with the blink motor command and may act to attenuate the sensory consequences of the motor action. However, the neural mechanisms underlying this phenomenon remain unclear. They are challenging to study because any brain-activity changes resulting from an extra-retinal signal associated with the blink motor command are potentially masked by profound neural-activity changes caused by the retinal-illumination reduction that results from occlusion of the pupil by the eyelid. Here, we distinguished direct top-down effects of blink-associated motor signals on cortical activity from purely mechanical or optical effects of blinking on visual input by combining pupil-independent retinal stimulation with functional MRI (fMRI) in humans. Even though retinal illumination was kept constant during blinks, we found that blinking nevertheless suppressed activity in visual cortex and in areas of parietal and prefrontal cortex previously associated with awareness of environmental change. Our findings demonstrate active top-down modulation of visual processing during blinking, suggesting a possible mechanism by which blinks go unnoticed.

10.
A challenging goal for cognitive neuroscience is to determine how mental representations are mapped onto patterns of neural activity. To address this problem, functional magnetic resonance imaging (fMRI) researchers have developed a large number of encoding and decoding methods. However, previous studies typically used rather limited stimulus representations, such as semantic labels and wavelet Gabor filters, and largely focused on voxel-based brain patterns. Here, we present a new fMRI encoding model that addresses this limitation by predicting the human brain's responses to free viewing of video clips. In this model, we represent the stimuli with a variety of representative visual features from the computer vision community, which describe the global color distribution, local shape, spatial information, and motion contained in the videos, and we use functional connectivity to model the brain activity patterns evoked by these video clips. Our experimental results demonstrate that brain network responses during free viewing of videos can be robustly and accurately predicted across subjects using visual features. Our study suggests the feasibility of addressing cognitive neuroscience questions through computational image/video analysis and introduces the concept of using brain encoding as a test-bed for evaluating visual feature extraction.
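As a much-simplified analogue of such an encoding model, the sketch below fits a ridge regression from per-volume stimulus features to a synthetic response time course and scores it by held-out correlation; the feature set, array shapes, and regularization are assumptions rather than the authors' pipeline.

```python
# Minimal encoding-model sketch: predict a brain response time course from
# frame-level visual features with ridge regression. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_volumes, n_features = 300, 64                          # fMRI volumes x stimulus features
X = rng.standard_normal((n_volumes, n_features))         # e.g. per-TR color/shape/motion features
true_w = rng.standard_normal(n_features)
y = X @ true_w + 0.5 * rng.standard_normal(n_volumes)    # stand-in for a network-level response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = Ridge(alpha=10.0).fit(X_tr, y_tr)

# Prediction accuracy is usually reported as the correlation between
# predicted and measured responses on held-out data.
r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
print(f"held-out prediction correlation: {r:.2f}")
```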

11.
Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low contrast target appeared alone, or with a low (i.e., non-salient) or high contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that 1) in the 1-sec pre-stimulus interval, frontal alpha (8-12 Hz) activity was higher on trials where the salient distractor captured attention and the first saccade (bottom-up win); and 2) there was a transient pre-saccadic increase in posterior-parietal alpha (7-8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture.
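A minimal sketch of the kind of single-trial measure described here: alpha-band power in a 1-second pre-stimulus window, estimated with a band-pass filter and Hilbert envelope on synthetic epochs. The sampling rate, filter order, and channel choice are assumptions, not the authors' exact analysis.

```python
# Single-trial alpha-band (8-12 Hz) power in a 1-second pre-stimulus window.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                               # sampling rate in Hz (assumed)
n_trials, n_samples = 40, fs                           # 1-second pre-stimulus epochs
rng = np.random.default_rng(1)
epochs = rng.standard_normal((n_trials, n_samples))    # stand-in for frontal EEG epochs

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, epochs, axis=-1)
alpha_power = np.mean(np.abs(hilbert(filtered, axis=-1)) ** 2, axis=-1)

# Trials could then be split by outcome (target vs. distractor saccade) and
# the two alpha-power distributions compared statistically.
print("mean single-trial alpha power:", alpha_power.mean())
```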

12.
A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains mainly unknown. Based on empirical findings we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. In all, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context that is represented in terms of contour code patterns. The model serves as a framework to link physiological with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outline, texture surround and density effects, and the interpolation of illusory contours. Received: 28 October 1998 / Accepted in revised form: 19 March 1999
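The gain-control idea can be reduced to a few lines: feedforward responses are multiplicatively enhanced where they agree with a top-down contour template and then normalized. The sketch below is an illustrative reduction with made-up numbers, not the published model equations.

```python
# Toy version of modulatory feedback: feedforward contrast responses are
# enhanced where they match a top-down contour template, then normalized.
import numpy as np

def recurrent_contour(ff, template, gain=2.0, iterations=10):
    """ff, template: 1-D arrays of orientation-selective responses along a contour."""
    activity = ff.copy()
    for _ in range(iterations):
        match = template * activity                    # top-down consistency signal
        activity = ff * (1.0 + gain * match)           # feedback enhances consistent inputs
        activity /= activity.sum() + 1e-9              # divisive normalization
    return activity

ff = np.array([0.2, 0.25, 0.2, 0.22, 0.21])            # noisy, fragmented measurements
template = np.array([0.0, 1.0, 1.0, 1.0, 0.0])         # contour expected in the middle
print(recurrent_contour(ff, template))
# Responses consistent with the contour template are selectively enhanced,
# while inconsistent responses lose weight under the normalization.
```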

13.
Summary: PGAGENE is a web-based, gene-specific genomic data search engine that allows users to search over 5.9 million pieces of collective genetic and genomic data from the NHLBI-supported Programs for Genomic Applications (PGA). These data include microarray measurements, SNPs, and mutations, and records may be found using gene symbols, partial gene or product names, Affymetrix probe IDs, GenBank accession numbers, UniGene IDs, dbSNP IDs, and other identifiers. The PGAGENE indexing agent periodically maps all publicly available gene-specific PGA data onto LocusLink using dynamically generated cross-referencing tables.

14.
Top-down Information Processing in Eye Movements during Visual Image Recognition
During visual image recognition, the eyes do not scan the whole image uniformly; instead, they shift the fixation point through a series of rapid saccades and selectively gather the key information in the image during fixation pauses. We recorded and analyzed eye-movement trajectories elicited by different image stimuli and found that (1) for simple geometric figures, fixation pauses concentrate on the geometric features of the image, that is, on the singular points that differ from their surroundings; (2) for complex image stimuli, fixation positions are determined by the subject's existing conceptual model and interests; and (3) when single Chinese characters are recognized, the eye-movement pattern likewise depends on the subject's knowledge of that character (i.e., the conceptual model). These results suggest that visual image recognition is accomplished mainly through top-down information processing: the central nervous system controls the eye movements, directs the fixation point onto the singular points of the figure that it has selected, and extracts the information it considers critical during fixation pauses, thereby achieving recognition. This processing mode does not depend solely on the incoming image information, nor does it require processing every pixel of the target image; only a small number of key locations in the image need to be detected and processed in detail, which improves both the capacity and the efficiency of image information processing.
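As an illustration of the "singular points" on which fixations were found to cluster for simple geometric figures, the sketch below scores image locations with a Harris corner response on a synthetic square; the detector and its parameters are illustrative stand-ins, not part of the original study.

```python
# Illustrative sketch (not from the paper): score image locations by a Harris
# corner response, a simple stand-in for geometric "singular points".
import numpy as np
from scipy import ndimage

def harris_response(image, sigma=1.0, k=0.04):
    """Harris corner measure; high values mark corner-like singular points."""
    ix = ndimage.sobel(image.astype(float), axis=1)
    iy = ndimage.sobel(image.astype(float), axis=0)
    ixx = ndimage.gaussian_filter(ix * ix, sigma)
    iyy = ndimage.gaussian_filter(iy * iy, sigma)
    ixy = ndimage.gaussian_filter(ix * iy, sigma)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# A white square on a black background: high responses cluster at its four corners.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
response = harris_response(img)
peak = np.unravel_index(np.argmax(response), response.shape)
print("highest corner response near:", peak)
```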


16.
We present a comprehensive mass spectrometric approach that integrates intact protein molecular mass measurement ("top-down") and proteolytic fragment identification ("bottom-up") to characterize the 70S ribosome from Rhodopseudomonas palustris. Forty-two intact protein identifications were obtained by the top-down approach and 53 out of the 54 orthologs to Escherichia coli ribosomal proteins were identified from bottom-up analysis. This integrated approach simplified the assignment of post-translational modifications by increasing the confidence of identifications, distinguishing between isoforms, and identifying the amino acid positions at which particular post-translational modifications occurred. Our combined mass spectrometry data also allowed us to check and validate the gene annotations for three ribosomal proteins predicted to possess extended C-termini. In particular, we identified a highly repetitive C-terminal "alanine tail" on L25. This type of low complexity sequence, common to eukaryotic proteins, has previously not been reported in prokaryotic proteins. To our knowledge, this is the most comprehensive protein complex analysis to date that integrates two MS techniques.
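The "top-down" half of such a workflow rests on comparing measured intact masses with masses predicted from sequence. The sketch below shows that matching step using standard average residue masses; the tolerance and example sequence are assumptions, and a real pipeline would use a dedicated library such as pyteomics.

```python
# Minimal sketch: predicted average mass of an intact protein and a tolerance-based
# match against a measured intact mass. Residue masses are standard average values.
AVG_RESIDUE_MASS = {                      # average masses of amino-acid residues (Da)
    "G": 57.05, "A": 71.08, "S": 87.08, "P": 97.12, "V": 99.13,
    "T": 101.10, "C": 103.14, "L": 113.16, "I": 113.16, "N": 114.10,
    "D": 115.09, "Q": 128.13, "K": 128.17, "E": 129.12, "M": 131.19,
    "H": 137.14, "F": 147.18, "R": 156.19, "Y": 163.18, "W": 186.21,
}
WATER = 18.02                              # add back one water for the intact chain

def average_mass(sequence: str) -> float:
    return sum(AVG_RESIDUE_MASS[aa] for aa in sequence) + WATER

def matches(measured: float, sequence: str, tol_da: float = 2.0) -> bool:
    """True if the measured intact mass agrees with the sequence prediction."""
    return abs(measured - average_mass(sequence)) <= tol_da

seq = "MKAILVLLAAAGAATA"                   # hypothetical sequence fragment
print(round(average_mass(seq), 1), matches(average_mass(seq) + 0.8, seq))
```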

17.
Head direction (HD) cell responses are thought to be derived from a combination of internal (or idiothetic) and external (or allothetic) sources of information. Recent work from the Jeffery laboratory shows that the relative influence of visual versus vestibular inputs upon the HD cell response depends on the disparity between these sources. In this paper, we present simulation results from a model designed to explain these observations. The model accurately replicates the Knight et al. data. We suggest that cue conflict resolution is critically dependent on plastic remapping of visual information onto the HD cell layer. This remap results in a shift in preferred directions of a subset of HD cells, which is then inherited by the rest of the cells during path integration. Thus, we demonstrate how, over a period of several minutes, a visual landmark may gain cue control. Furthermore, simulation results show that weaker visual landmarks fail to gain cue control as readily. We therefore suggest a second longer term plasticity in visual projections onto HD cell areas, through which landmarks with an inconsistent relationship to idiothetic information are made less salient, significantly hindering their ability to gain cue control. Our results provide a mechanism for reliability-weighted cue averaging that may pertain to other neural systems in addition to the HD system.
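A toy version of the proposed reliability weighting is sketched below: a path-integrated heading estimate is corrected by a visual landmark whose influence decays whenever it conflicts with the idiothetic estimate, so a stable landmark keeps cue control while an unreliable one loses it. The update rules and parameters are illustrative assumptions, not the published network model.

```python
# Toy reliability-weighted cue combination for a head-direction estimate.
import numpy as np

def wrap(a):
    """Wrap an angle to the interval [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def simulate(cue_noise_std, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    true_hd = estimate = 0.0
    weight = 0.5                                   # landmark salience / cue-control strength
    for _ in range(steps):
        omega = rng.normal(0.0, 0.05)              # idiothetic angular-velocity signal
        true_hd, estimate = wrap(true_hd + omega), wrap(estimate + omega)  # path integration
        cue = wrap(true_hd + rng.normal(0.0, cue_noise_std))   # visual bearing of the landmark
        error = wrap(cue - estimate)
        estimate = wrap(estimate + weight * error)             # visual correction
        # persistent conflict lowers the landmark's salience; reliability restores it slowly
        weight = max(0.0, weight - 0.005 * abs(error) + 0.002 * (0.5 - weight))
    return weight

print("stable landmark, final weight    :", round(simulate(0.05), 2))
print("unreliable landmark, final weight:", round(simulate(1.0), 2))
```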

18.
1. Much of the current understanding of ecological systems is based on theory that does not explicitly take into account individual variation within natural populations. However, individuals may show substantial variation in resource use. This variation in turn may be translated into topological properties of networks that depict interactions among individuals and the food resources they consume (individual-resource networks). 2. Different models derived from optimal diet theory (ODT) predict highly distinct patterns of trophic interactions at the individual level that should translate into distinct network topologies. As a consequence, individual-resource networks can be useful tools in revealing the incidence of different patterns of resource use by individuals and suggesting their mechanistic basis. 3. In the present study, using data from several dietary studies, we assembled individual-resource networks of 10 vertebrate species, previously reported to show interindividual diet variation, and used a network-based approach to investigate their structure. 4. We found significant nestedness, but no modularity, in all empirical networks, indicating that (i) these populations are composed of both opportunistic and selective individuals and (ii) the diets of the latter are ordered as predictable subsets of the diets of the more opportunistic individuals. 5. Nested patterns are a common feature of species networks, and our results extend their generality to trophic interactions at the individual level. This pattern is consistent with a recently proposed ODT model, in which individuals show similar rank preferences but differ in their acceptance rate for alternative resources. Our findings therefore suggest a common mechanism underlying interindividual variation in resource use in disparate taxa.
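The structural property at the center of this analysis, nestedness, is commonly quantified with the NODF metric; a compact sketch of NODF on a binary individual-by-resource matrix follows. A full analysis would also compare observed values against null models, which this sketch omits.

```python
# Compact NODF nestedness sketch for a binary individual-by-resource matrix.
import numpy as np
from itertools import combinations

def nodf(matrix):
    """NODF nestedness score (0-100) of a binary matrix."""
    m = np.asarray(matrix, dtype=bool)

    def axis_scores(mat):
        scores = []
        totals = mat.sum(axis=1)
        for i, j in combinations(range(mat.shape[0]), 2):
            hi, lo = (i, j) if totals[i] >= totals[j] else (j, i)
            if totals[hi] == totals[lo] or totals[lo] == 0:
                scores.append(0.0)        # decreasing fill is required for paired nestedness
            else:
                overlap = np.logical_and(mat[hi], mat[lo]).sum()
                scores.append(100.0 * overlap / totals[lo])
        return scores

    pair_scores = axis_scores(m) + axis_scores(m.T)   # row pairs + column pairs
    return float(np.mean(pair_scores))

nested = np.array([[1, 1, 1, 1],
                   [1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [1, 0, 0, 0]])
print(round(nodf(nested), 1))             # 100.0 for this perfectly nested matrix
```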

19.
Hamker FH. Bio Systems, 2006, 86(1-3): 91-99
Vision is a crucial sensor. It provides a very rich collection of information about our environment. The difficulty in vision arises because this information is not obvious in the image; it has to be constructed. Whereas earlier approaches have favored a bottom-up strategy, which maps the image onto an internal representation of the world, more recent approaches search for alternatives and develop frameworks that make use of top-down connections. In these approaches, vision is inherently a constructive process that makes use of a priori information. Following this line of research, a model of primate object perception is presented and used to simulate an object-detection task in natural scenes. The model predicts that early responses in extrastriate visual areas are modulated by the visual goal.

20.
Bio Systems, 2007, 87(1-3): 91-99
Vision is a crucial sensor. It provides a very rich collection of information about our environment. The difficulty in vision arises because this information is not obvious in the image; it has to be constructed. Whereas earlier approaches have favored a bottom-up strategy, which maps the image onto an internal representation of the world, more recent approaches search for alternatives and develop frameworks that make use of top-down connections. In these approaches, vision is inherently a constructive process that makes use of a priori information. Following this line of research, a model of primate object perception is presented and used to simulate an object-detection task in natural scenes. The model predicts that early responses in extrastriate visual areas are modulated by the visual goal.
