Similar articles
20 similar articles retrieved (search time: 15 ms)
1.
2.
Image registration has been used to support pixel-level analysis of pedobarographic image data sets. Some registration methods have sacrificed speed for robustness, but a recent approach based on external contours offered both high computational speed and high accuracy. However, since contours can be influenced by local perturbations, we sought more global methods. We therefore propose two new registration methods based on the Fourier transform, cross-correlation and phase correlation, both of which offer high computational speed. Both methods achieved high accuracy on the similarity measures considered, evaluated using control geometric transformations. Their speed, combined with this accuracy and robustness, makes them suitable for near-real-time applications. Furthermore, both methods were robust to moderate levels of noise and consequently do not require a noise-removal procedure, unlike the contour-based method.
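The phase-correlation idea this abstract relies on can be sketched in a few lines of NumPy (a generic illustration, not the authors' implementation): the normalized cross-power spectrum of two images has an inverse FFT that peaks at the translation between them.

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the cyclic (dy, dx) translation mapping `ref` onto `mov`
    via the normalized cross-power spectrum (phase correlation)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(mov)
    R = np.conj(F1) * F2
    R /= np.abs(R) + 1e-12           # keep phase only; epsilon avoids 0/0
    corr = np.fft.ifft2(R).real      # sharp peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # interpret peaks past the midpoint as negative shifts
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (3, 5), axis=(0, 1))   # simulate a translated image
print(phase_correlation(ref, mov))        # (3, 5)
```

Because only the phase is kept, the correlation peak is sharp even when image contrast differs, which is part of why the method tolerates moderate noise.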

3.
After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area from the perspective of the evolution of nanobiotechnology applied to the life sciences. Knowledge obtained at the nanoscale raises new questions and drives the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnology has spun off collaborative networks and web platforms for sharing and discussing the knowledge generated in the field. The implementation of new database schemes suitable for storing, processing, and integrating the physical, chemical, and biological properties of nanoparticles will be a key element in delivering on the promise of this convergent field. In this work, we review applications of nanobiotechnology to the life sciences that generate new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.

4.
N. G. Rambidi, Bio Systems, 1992, 27(4): 219–222
A new class of computing and information processing devices may result from the major principles of information processing at the molecular level. Non-discrete biomolecular computers based on these principles seem capable of solving problems of high computational complexity. One possible way to implement such devices is based on biochemical non-linear dynamical systems. Means and ways to realize biomolecular computers are discussed.

5.
A majority of cortical areas are connected via feedforward and feedback fiber projections. In feedforward pathways we mainly observe stages of feature detection and integration. The computational role of the descending pathways at different stages of processing remains largely unknown. Based on empirical findings, we suggest that the top-down feedback pathways subserve a context-dependent gain control mechanism. We propose a new computational model for recurrent contour processing in which normalized activities of orientation-selective contrast cells are fed forward to the next processing stage. There, the arrangement of input activation is matched against local patterns of contour shape. The resulting activities are subsequently fed back to the previous stage to locally enhance those initial measurements that are consistent with the top-down generated responses. Overall, we suggest a computational theory for recurrent processing in the visual cortex in which the significance of local measurements is evaluated on the basis of a broader visual context, represented in terms of contour code patterns. The model serves as a framework to link physiological data with perceptual data gathered in psychophysical experiments. It handles a variety of perceptual phenomena, such as the local grouping of fragmented shape outlines, texture surround and density effects, and the interpolation of illusory contours. Received: 28 October 1998 / Accepted in revised form: 19 March 1999
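The central feedback principle here (top-down signals enhance consistent feedforward activity but cannot create activity on their own) can be sketched as a simple multiplicative gain modulation; the function name and gain value below are illustrative, not taken from the model.

```python
def modulate(feedforward, feedback, gain=2.0):
    """Top-down gain control sketch: feedback multiplies, never adds,
    so units with zero feedforward drive stay silent."""
    return [ff * (1.0 + gain * fb) for ff, fb in zip(feedforward, feedback)]

# Unit 0 has no bottom-up evidence: feedback cannot activate it.
# Unit 1 is consistent with the top-down prediction: it is enhanced.
# Unit 2 receives no feedback: it keeps its initial measurement.
print(modulate([0.0, 0.5, 0.5], [1.0, 1.0, 0.0]))  # [0.0, 1.5, 0.5]
```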

6.
Modelling the dynamics of biosystems
The need for a more formal treatment of biological information processing, using stochastic and mobile process algebras, is addressed. Biology can benefit from this approach, gaining a better understanding of the behavioural properties of cells, and computer science can benefit as well, obtaining new computational models inspired by nature.

7.
In cases where ultra-flat cryo-preparations of well-ordered two-dimensional (2D) crystals are available, electron crystallography is a powerful method for determining high-resolution structures of membrane and soluble proteins. However, crystal unbending and Fourier-filtering methods in electron crystallography three-dimensional (3D) image processing are generally limited in their performance for 2D crystals that are badly ordered or non-flat. Here we present a single particle image processing approach, implemented as an extension of the 2D crystallographic pipeline realized in the 2dx software package, for the determination of high-resolution 3D structures of membrane proteins. The algorithm presented addresses the low signal-to-noise ratio (SNR) of 2D crystal images by exploiting neighborhood correlation between adjacent proteins in the 2D crystal. Compared with conventional single particle processing for randomly oriented particles, the computational costs are greatly reduced because the crystal restricts the search space, which in turn allows a much finer sampling of that space. To further cut the considerable computational costs, our software features a hybrid parallelization scheme for multi-CPU clusters and computers with high-end graphics processing units (GPUs). We successfully apply the new refinement method to the structure of the potassium channel MloK1. The calculated 3D reconstruction shows more structural detail and contains less noise than the map obtained by conventional Fourier-filtering based processing of the same 2D crystal images.

8.
Novel approaches to bio-imaging and automated computational image processing allow the design of truly quantitative studies in developmental biology. Cell behavior, cell fate decisions, cell interactions during tissue morphogenesis, and gene expression dynamics can be analyzed in vivo for entire complex organisms and throughout embryonic development. We review state-of-the-art technology for live imaging, focusing on fluorescence light microscopy techniques for system-level investigations of animal development, and discuss computational approaches to image segmentation, cell tracking, automated data annotation, and biophysical modeling. We argue that the substantial increase in data complexity and size requires sophisticated new strategies for data analysis to exploit the enormous potential of these new resources.

9.
Various computational techniques have been used in pharmacological research to classify chemical compounds based on their physicochemical properties and putative biological activity. The recent publication by Schmuker and Schneider describes a new approach for the processing and classification of chemical data. Their study was motivated by nature's solution for detection and discrimination of chemical data, which is manifested in the olfactory systems of vertebrates and invertebrates.

10.
Computational immunology: The coming of age
The explosive growth in biotechnology combined with major advances in information technology has the potential to radically transform immunology in the postgenomics era. Not only do we now have ready access to vast quantities of existing data, but new data with relevance to immunology are being accumulated at an exponential rate. Resources for computational immunology include biological databases and methods for data extraction, comparison, analysis and interpretation. Publicly accessible biological databases of relevance to immunologists number in the hundreds and are growing daily. The ability to efficiently extract and analyse information from these databases is vital for efficient immunology research. Most importantly, a new generation of computational immunology tools enables modelling of peptide transport by the transporter associated with antigen processing (TAP), modelling of antibody binding sites, identification of allergenic motifs and modelling of T-cell receptor serial triggering.
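Motif-identification tools of the kind mentioned (allergenic motifs, binding-site models) are commonly built on position weight matrices. The sketch below scores 3-mer peptides against a toy matrix whose values are invented purely for illustration; it is not any published predictor.

```python
# Hypothetical position weight matrix for a 3-residue motif:
# each position maps amino acids to log-odds-style scores.
PWM = [
    {"A": 1.2, "G": 0.1},
    {"L": 0.9, "V": 0.7},
    {"K": 1.5, "R": 1.1},
]

def pwm_score(peptide):
    """Sum the per-position scores; residues absent from a column
    get a fixed penalty."""
    return sum(PWM[i].get(aa, -2.0) for i, aa in enumerate(peptide))

print(pwm_score("ALK") > pwm_score("GVR"))  # True
```

Real tools learn the matrix entries from curated binding data and calibrate a score threshold; the additive scoring step itself is exactly this simple.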

11.
This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on aligning a set of already-aligned sequences with a new sequence, applied as a recursive procedure. The alignment is executed in reasonable computation time on platforms ranging from a workstation to a supercomputer, judged by both alignment results and the computational speed gained through parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphics programs on a workstation and parallel processing on a supercomputer are discussed. Received on April 26, 1988; accepted on July 7, 1988
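The core step described, aligning a new sequence against a set of already-aligned sequences, can be sketched as profile alignment with dynamic programming. The scoring scheme below (simple sum over column residues, fixed gap penalty) is a generic illustration, not the paper's algorithm.

```python
def align_to_profile(aligned, seq, gap=-2):
    """Global DP score of `seq` against an existing alignment
    (`aligned` is a list of equal-length gapped strings)."""
    cols = list(zip(*aligned))  # one tuple of characters per column

    def col_score(col, ch):
        # +1 per matching residue, -1 per mismatch, 0 for gaps
        return sum(1 if c == ch else (-1 if c != '-' else 0) for c in col)

    n, m = len(cols), len(seq)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = S[i - 1][0] + gap
    for j in range(1, m + 1):
        S[0][j] = S[0][j - 1] + gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            S[i][j] = max(S[i - 1][j - 1] + col_score(cols[i - 1], seq[j - 1]),
                          S[i - 1][j] + gap,    # gap in the new sequence
                          S[i][j - 1] + gap)    # gap column in the profile
    return S[n][m]

print(align_to_profile(["ACGT", "AC-T"], "ACGT"))  # 7
```

Applying this step recursively (add one sequence, fold it into the profile, repeat) gives the progressive scheme the abstract outlines; a full implementation would also trace back through `S` to emit the gapped sequences.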

12.
In this article, we present a neurologically motivated computational architecture for visual information processing. The architecture focuses on multiple strategies: hierarchical processing, parallel and concurrent processing, and modularity. It is modular and expandable in both hardware and software, so that it can also cope with multisensory integration, making it an ideal tool for validating and applying computational neuroscience models in real time under real-world conditions. We apply our architecture in real time to validate a long-standing biologically inspired visual object recognition model, HMAX. In this context, the overall aim is to supply a humanoid robot with the ability to perceive and understand its environment, with a focus on the active aspect of real-time spatiotemporal visual processing. We show that our approach is capable of simulating information processing in the visual cortex in real time and that our entropy-adaptive modification of HMAX achieves higher efficiency and classification performance than the standard model (up to approximately +6%).

13.
The conformational dynamics of enzymes is a computational resource that fuses milieu signals in a nonlinear fashion. Response surface methodology can be used to elicit computational functionality from enzyme dynamics. We constructed a tabletop prototype to implement enzymatic signal processing in a device context and employed it in conjunction with malate dehydrogenase to perform the linearly inseparable exclusive-or operation. This shows that proteins can execute signal processing operations that are more complex than those performed by individual threshold elements. We view the experiments reported, though restricted to the two-variable case, as a stepping stone to computational networks that utilize the precise reproducibility of proteins, and the concomitant reproducibility of their nonlinear dynamics, to implement complex pattern transformations.
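The claim that a nonlinear response surface can compute the linearly inseparable exclusive-or, while a single threshold element cannot, is easy to verify numerically. This is a toy illustration of the logic, not a model of the enzymatic device.

```python
def threshold_unit(x1, x2, w1, w2, b):
    # a single linear threshold element
    return int(w1 * x1 + w2 * x2 + b > 0)

def nonlinear_unit(x1, x2):
    # a quadratic "response surface": high only when exactly one input is high
    return int(x1 + x2 - 2 * x1 * x2 > 0.5)

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# No (w1, w2, b) over a coarse integer grid reproduces XOR
# (XOR is not linearly separable, so no weights at all would work)...
linear_ok = any(
    all(threshold_unit(x1, x2, w1, w2, b) == y
        for (x1, x2), y in xor_table.items())
    for w1 in range(-3, 4) for w2 in range(-3, 4) for b in range(-3, 4)
)
# ...but the nonlinear surface does.
nonlinear_ok = all(nonlinear_unit(x1, x2) == y
                   for (x1, x2), y in xor_table.items())
print(linear_ok, nonlinear_ok)  # False True
```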

14.
15.
The cerebral cortex is a remarkably homogeneous structure suggesting a rather generic computational machinery. Indeed, under a variety of conditions, functions attributed to specialized areas can be supported by other regions. However, a host of studies have laid out an ever more detailed map of functional cortical areas. This leaves us with the puzzle of whether different cortical areas are intrinsically specialized, or whether they differ mostly by their position in the processing hierarchy and their inputs but apply the same computational principles. Here we show that the computational principle of optimal stability of sensory representations combined with local memory gives rise to a hierarchy of processing stages resembling the ventral visual pathway when it is exposed to continuous natural stimuli. Early processing stages show receptive fields similar to those observed in the primary visual cortex. Subsequent stages are selective for increasingly complex configurations of local features, as observed in higher visual areas. The last stage of the model displays place fields as observed in entorhinal cortex and hippocampus. The results suggest that functionally heterogeneous cortical areas can be generated by only a few computational principles and highlight the importance of the variability of the input signals in forming functional specialization.
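The "optimal stability" objective can be illustrated with a minimal slowness measure (a sketch of the general principle, not the model's exact objective): a representation is preferred when its value changes slowly over time.

```python
def slowness(signal):
    """Mean squared temporal difference: lower values mean a slower,
    more temporally stable representation."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

fast = [0, 1, 0, 1, 0, 1]            # rapidly varying feature
slow = [0, 0.2, 0.4, 0.6, 0.8, 1.0]  # smoothly varying feature
print(slowness(fast) > slowness(slow))  # True
```

Optimizing such an objective layer by layer, each stage extracting the most stable features of its input, is what produces the increasingly abstract selectivities the abstract describes.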

16.
The latest developments in mobile computing technology have enabled intensive applications on modern smartphones. However, such applications are still constrained by limitations in the processing power, storage capacity and battery lifetime of Smart Mobile Devices (SMDs). Mobile Cloud Computing (MCC) therefore leverages the application processing services of computational clouds to mitigate resource limitations in SMDs. A number of computational offloading frameworks have been proposed for MCC, wherein the intensive components of an application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. This paper therefore presents a lightweight framework that minimizes additional resource utilization in computational offloading for MCC. The framework employs the centralized monitoring, high availability and on-demand access services of computational clouds. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing a prototype application in a real MCC environment, and its lightweight nature is validated by comparing computational offloading under the proposed framework against the latest existing frameworks. Analysis shows that with the proposed framework the size of data transmission is reduced by 91%, energy consumption cost is reduced by 81% and application turnaround time is decreased by 83.5% compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resource utilization and offers a lightweight solution for computational offloading in MCC.
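The basic trade-off behind any offloading decision can be sketched with a simple cost criterion (a textbook simplification, not this paper's framework): offload only when data transfer plus remote execution beats executing on the device.

```python
def should_offload(local_time_s, cloud_time_s, data_mb, bandwidth_mbps):
    """Offload when transfer time + cloud execution time is less than
    local execution time on the mobile device."""
    transfer_s = data_mb * 8 / bandwidth_mbps  # MB -> Mb, then divide by Mb/s
    return transfer_s + cloud_time_s < local_time_s

# 5 MB over a 20 Mbps link takes 2 s; 2 s + 1 s cloud < 10 s local -> offload
print(should_offload(10.0, 1.0, 5, 20))  # True
# but not when the task only takes 2 s locally
print(should_offload(2.0, 1.0, 5, 20))   # False
```

A fuller model would add energy terms and cloud pricing on top of time, which is the kind of cost the paper's 81% energy and 83.5% turnaround figures quantify.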

17.
18.
Optical coherence tomography (OCT) involves very large data volumes and heavy computation during imaging, and traditional computing platforms based on the central processing unit (CPU) struggle to meet the demands of real-time OCT imaging. The graphics processing unit (GPU) offers powerful parallel processing and numerical computing capability for general-purpose computation and can break through this real-time imaging bottleneck. This article briefly introduces the GPU and reviews its applications and research progress in real-time OCT imaging and functional OCT imaging.
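In Fourier-domain OCT, each depth profile (A-scan) is recovered by an FFT of the spectral interferogram; this per-line FFT over thousands of lines per frame is the step GPU implementations parallelize. A CPU-side NumPy sketch on synthetic data (illustrative only; real pipelines add wavenumber resampling and dispersion compensation):

```python
import numpy as np

n_lines, n_samples = 512, 1024
spectra = np.random.rand(n_lines, n_samples)      # synthetic interferograms
spectra -= spectra.mean(axis=1, keepdims=True)    # suppress the DC term
# FFT each spectral line; keep magnitude of the positive-frequency half
ascans = np.abs(np.fft.fft(spectra, axis=1))[:, : n_samples // 2]
print(ascans.shape)  # (512, 512)
```

The computation is embarrassingly parallel across lines, which is why moving it to a GPU FFT library yields the large real-time speedups the article surveys.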

19.
Flow cytometry for high-throughput, high-content screening
Flow cytometry is a mature platform for quantitative multi-parameter measurement of cell fluorescence. Recent innovations allow up to 30-fold faster serial processing of bulk cell samples. Homogeneous discrimination of free and cell-bound fluorescent probe eliminates wash steps to streamline sample processing. Compound screening throughput may be further enhanced by multiplexing of assays on color-coded bead or cell suspension arrays and by integrating computational techniques to create smaller, focused compound libraries. Novel bead-based assay systems allow studies of real-time interactions between solubilized receptors, ligands and molecular signaling components that recapitulate and extend measurements in intact cells. These new developments, and its broad usage, position flow cytometry as an attractive analysis platform for high-throughput, high-content biological testing and drug discovery.

20.
The layout of sensory brain areas is thought to underpin perception, yet the principles shaping these architectures and their role in information processing are still poorly understood. We investigate mathematically and computationally the representation of orientation and spatial frequency in cat primary visual cortex. We prove that two natural principles, local exhaustivity and parsimony of representation, constrain the orientation and spatial frequency maps to display a very specific pinwheel-dipole singularity. This is particularly interesting since recent experimental evidence shows dipolar structures of the spatial frequency map co-localized with pinwheels in the cat. These structures have important consequences for information processing capabilities. In particular, we show, using a computational model of visual information processing, that this architecture allows a trade-off in the local detection of orientation and spatial frequency, but that this property occurs only for spatial frequency selectivity sharper than reported in the literature. We validated this sharpening on high-resolution optical imaging data. These results shed new light on the principles at play in the emergence of the functional architecture of cortical maps, as well as on their potential role in processing information.
