Similar Documents
20 similar documents found; search took 125 ms.
1.
In this paper, a novel multiscale hierarchical model based on finite element analysis and neural network computation was developed to link the mesoscopic and macroscopic scales in simulating the bone remodeling process. The finite element calculation is performed at the macroscopic level, and trained neural networks are employed as numerical devices substituting for the finite element computation needed for the mesoscale prediction. Based on a set of mesoscale simulations of representative volume elements of bone taken from different bone sites, a neural network is trained to approximate the responses at the meso level and transfer them to the macro level.

2.
In biological systems, the task of computing a gait trajectory is shared between the biomechanical and nervous systems. We take the perspective that both of these seemingly different computations are examples of physical computation. Here we describe the progress that has been made toward building a minimal biped system that illustrates this idea. We embed a significant portion of the computation in physical devices, such as capacitors and transistors, to underline the potential power of emphasizing the understanding of physical computation. We describe results in the exploitation of physical computation by (1) using a passive knee to assist in dynamics computation, (2) using an oscillator to drive a monoped mechanism based on the passive knee, (3) using sensory entrainment to coordinate the mechanics with the neural oscillator, (4) coupling two such systems together mechanically at the hip and computationally via the resulting two oscillators to create a biped mechanism, and (5) demonstrating the resulting gait generation in the biped mechanism. Received: 31 October 2001 / Accepted in revised form: 17 September 2002. Correspondence to: M.A. Lewis

3.
The aim of this paper is to develop a multiscale hierarchical hybrid model based on finite element analysis and neural network computation to link the mesoscopic scale (trabecular network level) and the macroscopic scale (whole-bone level) to simulate the process of bone remodelling. As whole-bone simulation, including the 3D reconstruction of trabecular-level bone, is time consuming, finite element calculation is only performed at the macroscopic level, whilst trained neural networks are employed as numerical substitutes for the finite element code needed for the mesoscale prediction. The bone mechanical properties are updated at the macroscopic scale depending on the morphological and mechanical adaptation at the mesoscopic scale computed by the trained neural network. The digital image-based modelling technique using μ-CT and voxel finite element analysis is used to capture volume elements representative of 2 mm³ at the mesoscale level of the femoral head. The input data for the artificial neural network are a set of bone material parameters, boundary conditions and the applied stress. The output data are the updated bone properties and some trabecular bone factors. The current approach is the first model, to our knowledge, that incorporates both finite element analysis and neural network computation to rapidly simulate multilevel bone adaptation.
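The surrogate idea can be sketched in a few lines. Everything below is hypothetical: the stand-in "network" is a fixed linear map, not the trained network from the paper; the sketch only illustrates how a cheap learned mapping replaces the mesoscale finite element call inside the macroscale remodelling loop.

```python
import numpy as np

# Hypothetical stand-in for a trained neural network mapping mesoscale
# inputs (bone material parameters, boundary conditions, applied stress)
# to updated bone properties. A real surrogate would be a trained
# multilayer perceptron; a fixed small linear map keeps the sketch runnable.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3)) * 0.01

def mesoscale_surrogate(properties):
    """Replace the expensive voxel-FE computation of a 2 mm^3 RVE."""
    return properties + W @ properties  # small adaptation step

def remodel(properties, n_steps=10):
    """Macroscale remodelling loop: each step queries the cheap surrogate
    instead of running a mesoscale finite element analysis."""
    for _ in range(n_steps):
        properties = mesoscale_surrogate(properties)
    return properties

E0 = np.array([1.0, 1.0, 1.0])  # initial (normalised) bone properties
E = remodel(E0)
```

The payoff is purely computational: the inner loop costs a matrix-vector product instead of a full mesoscale FE solve per step.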

4.
DNA computing using single-molecule hybridization detection
DNA computing aims at using nucleic acids for computing. Since micromolar DNA solutions can act as billions of parallel nanoprocessors, DNA computers can in theory solve optimization problems that require vast search spaces. However, the actual parallelism currently being achieved is at least a hundred million-fold lower than the number of DNA molecules used. This is due to the quantity of DNA molecules of one species that is required to produce a detectable output to the computations. In order to miniaturize the computation and considerably reduce the amount of DNA needed, we have combined DNA computing with single-molecule detection. Reliable hybridization detection was achieved at the level of single DNA molecules with fluorescence cross-correlation spectroscopy. To illustrate the use of this approach, we implemented a DNA-based computation and solved a 4-variable 4-clause instance of the computationally hard Satisfiability (SAT) problem.
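For scale, the in silico equivalent of such a computation is a brute-force search over all 2⁴ assignments, which the DNA computer explores in parallel. The clause set below is a hypothetical instance, not the one solved in the paper.

```python
from itertools import product

# A hypothetical 4-variable, 4-clause CNF instance; literals +i / -i mean
# variable i appears plain / negated.
clauses = [(1, -2, 3), (-1, 2), (2, -3, 4), (-2, -4)]

def satisfied(assignment, clause):
    """A clause is satisfied if any of its literals evaluates to True."""
    return any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)

# Enumerate all 2**4 truth assignments and keep the satisfying ones.
solutions = [a for a in product([False, True], repeat=4)
             if all(satisfied(a, c) for c in clauses)]
```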

5.
The brain is a large-scale complex network often referred to as the “connectome”. Exploring the dynamic behavior of the connectome is a challenging issue as both excellent time and space resolution are required. In this context, Magneto/Electroencephalography (M/EEG) are effective neuroimaging techniques allowing for analysis of the dynamics of functional brain networks at scalp level and/or at reconstructed sources. However, a tool that can cover all the processing steps of identifying brain networks from M/EEG data is still missing. In this paper, we report a novel software package, called EEGNET, running under MATLAB (MathWorks, Inc.), and allowing for analysis and visualization of functional brain networks from M/EEG recordings. EEGNET is developed to analyze networks either at the level of scalp electrodes or at the level of reconstructed cortical sources. It includes i) basic steps in preprocessing M/EEG signals, ii) the solution of the inverse problem to localize/reconstruct the cortical sources, iii) the computation of functional connectivity among signals collected at surface electrodes and/or time courses of reconstructed sources, and iv) the computation of network measures based on graph theory analysis. EEGNET is the only tool that combines M/EEG functional connectivity analysis and the computation of network measures derived from graph theory. The first version of EEGNET is easy to use, flexible and user friendly. EEGNET is an open source tool and can be freely downloaded from this webpage: https://sites.google.com/site/eegnetworks/.
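The last two pipeline stages (iii and iv) can be sketched as follows. This is a generic connectivity-plus-graph-measure computation on synthetic signals, not EEGNET's actual code or API.

```python
import numpy as np

# Synthetic "electrode" signals sharing a common 10 Hz rhythm plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
common = np.sin(2 * np.pi * 10 * t)
signals = np.stack([common + 0.1 * rng.normal(size=t.size)
                    for _ in range(4)])          # shape (4 channels, 500)

# Stage iii: functional connectivity as pairwise Pearson correlation.
conn = np.corrcoef(signals)

# Stage iv: a graph-theoretical measure. Threshold the connectivity
# matrix into an adjacency matrix (no self-loops) and compute node degree.
adj = (np.abs(conn) > 0.5) & ~np.eye(4, dtype=bool)
degree = adj.sum(axis=1)
```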

6.
How does the connectivity of a neuronal circuit, together with the individual properties of the cell types that take part in it, result in a given computation? We examine this question in the context of retinal circuits. We suggest that the retina can be viewed as a parallel assemblage of many small computational devices, highly stereotypical and task-specific circuits afferent to a given ganglion cell type, and we discuss some rules that govern computation in these devices. Multi-device processing in retina poses conceptual problems when it is contrasted with cortical processing. We lay out open questions both on processing in retinal circuits and on implications for cortical processing of retinal inputs.

7.
25,26,27,28-Tetramethoxycalix[4]arene selectively captures cations by changing its conformation. In this study, a hybrid approach of ab initio molecular orbital calculation and statistical mechanics for molecular liquids was utilised to understand the capture mechanism in the electrolyte solution phase at the molecular level. The association free energy and solvation structure were evaluated on the basis of statistical mechanics for molecular liquids. The selectivity is correctly reproduced by the computation; namely, the cone conformer captures Na+ while K+ is recognised by the partial-cone conformer.

8.
MOTIVATION: Protein interactions provide an important context for the understanding of function. Experimental approaches have been complemented with computational ones, such as PSIMAP, which computes domain-domain interactions for all multi-domain and multi-chain proteins in the Protein Data Bank (PDB). PSIMAP has been used to determine that superfamilies occurring in many species have many interaction partners, to show examples of convergent evolution through shared interaction partners, and to uncover complexes in the interaction map. To determine an interaction, the original PSIMAP algorithm checks all residue pairs of any domain pair defined by classification systems such as SCOP. The computation takes several days for the PDB. The computation of PSIMAP has two shortcomings: first, the original PSIMAP algorithm considers only interactions of residue pairs rather than atom pairs, losing information for detailed analysis of contact patterns. At the atomic level the original algorithm would take months. Second, with the superlinear growth of PDB, PSIMAP is not sustainable. RESULTS: We address these two shortcomings by developing a family of new algorithms for the computation of domain-domain interactions based on the idea of bounding shapes, which are used to prune the search space. The best of the algorithms improves on the old PSIMAP algorithm by a factor of 60 on the PDB. Additionally, the algorithms allow a distributed computation, which we carry out on a farm of 80 Linux PCs. Overall, the new algorithms reduce the computation at atomic level from months to 20 min. The combination of pruning and distribution makes the new algorithm scalable and sustainable even with the superlinear growth in PDB.
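The bounding-shape idea can be illustrated with bounding spheres, one possible bounding shape. The cutoff and coordinates below are hypothetical, and this is a sketch of the pruning principle, not the PSIMAP implementation.

```python
import numpy as np

CUTOFF = 5.0  # hypothetical atomic contact distance, in angstroms

def bounding_sphere(coords):
    """Smallest sphere centred at the centroid that encloses all atoms."""
    centre = coords.mean(axis=0)
    radius = np.linalg.norm(coords - centre, axis=1).max()
    return centre, radius

def domains_interact(a, b):
    ca, ra = bounding_sphere(a)
    cb, rb = bounding_sphere(b)
    # Prune: if the gap between the spheres exceeds the cutoff, no atom
    # pair can possibly be in contact, so skip the quadratic check.
    if np.linalg.norm(ca - cb) - ra - rb > CUTOFF:
        return False
    # Otherwise fall back to the all-pairs atomic distance test.
    diff = a[:, None, :] - b[None, :, :]
    return bool((np.linalg.norm(diff, axis=-1) <= CUTOFF).any())

near = np.zeros((10, 3))
far = np.zeros((10, 3)) + 100.0
```

For distant domain pairs, which dominate in practice, the test costs two centroid computations instead of an all-pairs scan.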

9.
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic and continuum. We choose the microscopic level as Kramers' bead–rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker–Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modelled with a finite volume Godunov-projection algorithm. We demonstrate the computation of viscoelastic stress divergence using this multiscale approach.

10.
Summary: Life cycles of California populations of the grasshopper, Melanoplus sanguinipes, varied along an altitudinal gradient. Temperature records indicate a longer season at low altitude on the coast, based on computation of degree days available for development, even though summer air temperatures are cooler than at high altitude; this is a result of warm soil temperatures. At high and low altitudes there was a high proportion of diapause eggs oviposited, while intermediate proportions of diapause eggs occurred at mid altitudes. The low altitude, and especially sea level, populations diapaused at all stages of embryonic development, while at high altitudes most diapause occurred in the late stages just before hatch. Diapause was more intense at high altitudes. One result of diapause differences was delayed hatching in the sea level population. Nymphal development and development of adults to age at first reproduction were both accelerated at high altitude relative to sea level. At lower temperatures (27 °C) there was a tendency for short days to accelerate development of sea level nymphs, but not high altitude nymphs. In both sea level and high altitude grasshoppers, short days accelerated maturation of adults to onset of oviposition at warm temperature (33 °C); there was little reproduction at 27 °C. Population differences for all traits studied appear to be largely genetic with some maternal effects possible. We interpret diapause variation at low and mid altitudes to be responses to environmental uncertainty and variations in development rates to be adaptations to prevailing season lengths.
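The degree-day computation referred to above can be sketched as a standard thresholded accumulation; the developmental threshold and temperature series here are hypothetical, not the study's data.

```python
THRESHOLD = 12.0  # hypothetical developmental zero, in deg C

def degree_days(daily_means, threshold=THRESHOLD):
    """Accumulate development only on days above the threshold."""
    return sum(max(0.0, t - threshold) for t in daily_means)

# A mild coastal site with a long season (warm soil) can accumulate more
# degree days than a site with hotter summers but a shorter season:
coastal = [16.0] * 200   # mild but long season
montane = [20.0] * 90    # hot but short season
```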

11.
Computation and information processing are among the most fundamental notions in cognitive science. They are also among the most imprecisely discussed. Many cognitive scientists take it for granted that cognition involves computation, information processing, or both – although others disagree vehemently. Yet different cognitive scientists use ‘computation’ and ‘information processing’ to mean different things, sometimes without realizing that they do. In addition, computation and information processing are surrounded by several myths; first and foremost, that they are the same thing. In this paper, we address this unsatisfactory state of affairs by presenting a general and theory-neutral account of computation and information processing. We also apply our framework by analyzing the relations between computation and information processing on one hand and classicism, connectionism, and computational neuroscience on the other. We defend the relevance to cognitive science of both computation, at least in a generic sense, and information processing, in three important senses of the term. Our account advances several foundational debates in cognitive science by untangling some of their conceptual knots in a theory-neutral way. By leveling the playing field, we pave the way for the future resolution of the debates’ empirical aspects.

12.
Speech perception at the interface of neurobiology and linguistics
Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.

13.
Technological computation is entering the quantum realm, focusing attention on biomolecular information processing systems such as proteins, as presaged by the work of Michael Conrad. Protein conformational dynamics and pharmacological evidence suggest that protein conformational states (fundamental information units, or 'bits', in biological systems) are governed by quantum events, and are thus perhaps akin to quantum bits ('qubits') as utilized in quantum computation. 'Real time' dynamic activities within cells are regulated by the cell cytoskeleton, particularly microtubules (MTs), which are cylindrical lattice polymers of the protein tubulin. Recent evidence shows signaling, communication and conductivity in MTs, and theoretical models have predicted both classical and quantum information processing in MTs. In this paper we show conduction pathways for electron mobility and possible quantum tunneling and superconductivity among aromatic amino acids in tubulins. The pathways within tubulin match helical patterns in the microtubule lattice structure, which lend themselves to topological quantum effects resistant to decoherence. The Penrose-Hameroff 'Orch OR' model of consciousness is reviewed as an example of the possible utility of quantum computation in MTs.

14.
A major obstacle in applying various hypothesis testing procedures to datasets in bioinformatics is the computation of ensuing p-values. In this paper, we define a generic branch-and-bound approach to efficient exact p-value computation and enumerate the required conditions for successful application. Explicit procedures are developed for the entire Cressie-Read family of statistics, which includes the widely used Pearson and likelihood ratio statistics in a one-way frequency table goodness-of-fit test. This new formulation constitutes a first practical exact improvement over the exhaustive enumeration performed by existing statistical software. The general techniques we develop to exploit the convexity of many statistics are also shown to carry over to contingency table tests, suggesting that they are readily extendible to other tests and test statistics of interest. Our empirical results demonstrate a speed-up of orders of magnitude over the exhaustive computation, significantly extending the practical range for performing exact tests. We also show that the relative speed-up gain increases as the null hypothesis becomes sparser, that computational precision increases as the speed-up increases, and that computation time is very moderately affected by the magnitude of the computed p-value. These qualities make our algorithm especially appealing in the regimes of small samples, sparse null distributions, and rare events, compared to the alternative asymptotic approximations and Monte Carlo samplers. We discuss several established bioinformatics applications, where small sample size, small expected counts in one or more categories (sparseness), and very small p-values do occur. Our computational framework could be applied in these, and similar cases, to improve performance.
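The exhaustive enumeration that the branch-and-bound method improves on can be written directly for a small Pearson goodness-of-fit test. This sketch enumerates every multinomial outcome rather than pruning, so it only scales to tiny tables; it is the baseline, not the paper's algorithm.

```python
from itertools import product
from math import factorial

def pearson(counts, expected):
    """Pearson chi-square statistic for a one-way frequency table."""
    return sum((o - e) ** 2 / e for o, e in zip(counts, expected))

def exact_pvalue(observed, probs):
    """Sum multinomial probabilities of all outcomes at least as extreme
    as the observed one (exhaustive enumeration, no pruning)."""
    n, k = sum(observed), len(observed)
    expected = [n * p for p in probs]
    t_obs = pearson(observed, expected)
    pval = 0.0
    for counts in product(range(n + 1), repeat=k):
        if sum(counts) != n:
            continue
        if pearson(counts, expected) >= t_obs - 1e-12:
            prob = float(factorial(n))
            for c, p in zip(counts, probs):
                prob *= p ** c / factorial(c)
            pval += prob
    return pval

p = exact_pvalue([6, 2, 2], [1 / 3, 1 / 3, 1 / 3])
```

Branch-and-bound replaces the inner loop's blind enumeration with bounds on the statistic over whole subtrees of outcomes, which is where the orders-of-magnitude speed-up comes from.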

15.
Rochel O, Cohen N. BioSystems 2007; 87(2-3): 260-266
Information processing in nervous systems intricately combines computation at the neuronal and network levels. Many computations may be envisioned as sequences of signal processing steps along some pathway. How can information encoded by single cells be mapped onto network population codes, and how do different modules or layers in the computation synchronize their communication and computation? These fundamental questions are particularly severe when dealing with real time streams of inputs. Here we study this problem within the context of a minimal signal perception task. In particular, we encode neuronal information by externally applying a space- and time-localized stimulus to individual neurons within a network. We show that a pulse-coupled recurrent neural network can successfully handle this task in real time, and obeys three key requirements: (i) stimulus dependence, (ii) initial-conditions independence, and (iii) accessibility by a readout mechanism. In particular, we suggest that the network's overall level of activity can be used as a temporal cue for a robust readout mechanism. Within this framework, the network can rapidly map a local stimulus onto a population code that can then be reliably read out during some narrow but well defined window of time.

16.
Halachev MR, Loman NJ, Pallen MJ. PLoS ONE 2011; 6(12): e28388
Among proteins, orthologs are defined as those that are derived by vertical descent from a single progenitor in the last common ancestor of their host organisms. Our goal is to compute a complete set of protein orthologs derived from all currently available complete bacterial and archaeal genomes. Traditional approaches typically rely on all-against-all BLAST searching which is prohibitively expensive in terms of hardware requirements or computational time (requiring an estimated 18 months or more on a typical server). Here, we present xBASE-Orth, a system for ongoing ortholog annotation, which applies a "divide and conquer" approach and adopts a pragmatic scheme that trades accuracy for speed. Starting at species level, xBASE-Orth carefully constructs and uses pan-genomes as proxies for the full collections of coding sequences at each level as it progressively climbs the taxonomic tree using the previously computed data. This leads to a significant decrease in the number of alignments that need to be performed, which translates into faster computation, making ortholog computation possible on a global scale. Using xBASE-Orth, we analyzed an NCBI collection of 1,288 bacterial and 94 archaeal complete genomes with more than 4 million coding sequences in 5 weeks and predicted more than 700 million ortholog pairs, clustered in 175,531 orthologous groups. We have also identified sets of highly conserved bacterial and archaeal orthologs and in so doing have highlighted anomalies in genome annotation and in the proposed composition of the minimal bacterial genome. In summary, our approach allows for scalable and efficient computation of the bacterial and archaeal ortholog annotations. In addition, due to its hierarchical nature, it is suitable for incorporating novel complete genomes and alternative genome annotations. The computed ortholog data and a continuously evolving set of applications based on it are integrated in the xBASE database, available at http://www.xbase.ac.uk/.

17.
On the reduction of errors in DNA computation
In this paper, we discuss techniques for reducing errors in DNA computation. We investigate several methods for achieving acceptable overall error rates for a computation using basic operations that are error prone. We analyze a single essential biotechnology, sequence-specific separation, and show that separation errors theoretically can be reduced to tolerable levels by invoking a tradeoff between time, space, and error rates at the level of algorithm design. These tradeoffs do not depend upon improvement of the underlying biotechnology which implements the separation step. We outline several specific ways in which error reduction can be done and present numerical calculations of their performance.
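The time/error tradeoff for repeated separations can be illustrated with a simple probability calculation. The retention rates below are hypothetical, and this is a generic repetition argument, not the paper's specific numerical analysis.

```python
# A sequence-specific separation retains a "good" strand with probability
# 1 - eps and wrongly retains a "bad" strand with probability delta.
# Keeping only strands that pass all m rounds drives the false-positive
# rate down exponentially, at the cost of m-fold time and some loss of
# good strands.
def after_m_rounds(eps, delta, m):
    good_retained = (1 - eps) ** m   # true positives surviving m passes
    bad_retained = delta ** m        # false positives surviving m passes
    return good_retained, bad_retained

good, bad = after_m_rounds(eps=0.05, delta=0.10, m=5)
```

With these assumed rates, five rounds shrink the false-positive rate by five orders of magnitude while most good strands survive; the lost good strands are what the space (redundant copies) side of the tradeoff pays for.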

18.
The visual system of the fly performs various computations on photoreceptor outputs. The detection and measurement of movement is based on simple nonlinear multiplication-like interactions between adjacent pairs and groups of photoreceptors. The position of a small contrasted object against a uniform background is measured, at least in part, by (formally) 1-input nonlinear flicker detectors. A fly can also detect and discriminate a figure that moves relative to a ground texture. This computation of relative movement relies on a more complex algorithm, one which detects discontinuities in the movement field. The experiments described in this paper indicate that the outputs of neighbouring movement detectors interact in a multiplication-like fashion and then in turn inhibit locally the flicker detectors. The following main characteristic properties (partly a direct consequence of the algorithm's structure) have been established experimentally: a) Coherent motion of figure and ground inhibit the position detectors whereas incoherent motion fails to produce inhibition near the edges of the moving figure (provided the textures of figure and ground are similar). b) The movement detectors underlying this particular computation are direction-insensitive at input frequencies (at the photoreceptor level) above 2.3 Hz. They become increasingly direction-sensitive for lower input frequencies. c) At higher input frequencies the fly cannot discriminate an object against a texture oscillating at the same frequency and amplitude at 0° and 180° phase, whereas 90° or 270° phase shift between figure and ground oscillations yields maximum discrimination. d) Under conditions of coherent movement, strong spatial incoherence is detected by the same mechanism. The algorithm underlying the relative movement computation is further discussed as an example of a coherence measuring process, operating on the outputs of an array of movement detectors. Possible neural correlates are also mentioned.
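The multiplication-like interaction between adjacent photoreceptors is the core of the classic Reichardt correlator, which can be sketched as follows. This is a textbook sketch on a synthetic sinusoidal stimulus, not the fly model analysed in the paper.

```python
import numpy as np

def correlator(left, right, delay=5):
    """Minimal Reichardt-type movement detector: each half-detector
    multiplies one input with a delayed copy of the other; subtracting
    the mirror-symmetric halves gives a direction-sensitive output."""
    d_left = np.roll(left, delay)    # delayed copy of the left input
    d_right = np.roll(right, delay)  # delayed copy of the right input
    return np.mean(d_left * right - left * d_right)

# A sinusoidal pattern sampled by two adjacent photoreceptors; shifting
# the right input later/earlier in time simulates opposite directions.
t = np.arange(1000)
stim = np.sin(2 * np.pi * t / 100)
rightward = correlator(stim, np.roll(stim, 5))    # pattern moves L -> R
leftward = correlator(stim, np.roll(stim, -5))    # pattern moves R -> L
```

The output is positive for one direction and negative for the other, which is the direction sensitivity the abstract's low-frequency regime refers to.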

19.
If aesthetics is a human universal, it should have a neurobiological basis. Although use of all the senses is, as Aristotle noted, pleasurable, the distance senses are primarily involved in aesthetics. The aesthetic response emerges from the central processing of sensory input. This occurs very rapidly, beneath the level of consciousness, and only the feeling of pleasure emerges into the conscious mind. This is exemplified by landscape appreciation, where it is suggested that a computation built into the nervous system during Paleolithic hunter-gathering is at work. Another inbuilt computation leading to an aesthetic response is the part-whole relationship. This, it is argued, may be traced to the predator-prey "arms races" of evolutionary history. Mate selection also may be responsible for part of our response to landscape and visual art. Aesthetics lies at the core of human mentality, and its study is consequently of importance not only to philosophers and art critics but also to neurobiologists.

20.
Sequence analysis is the basis of bioinformatics, while sequence alignment is a fundamental task for sequence analysis. The widely used alignment algorithm, dynamic programming, though generating optimal alignments, takes too much time due to its high computational complexity O(N²). In order to reduce computational complexity without sacrificing too much accuracy, we have developed a new approach to align two homologous sequences. The new approach presented here, adopting our novel algorithm which combines the methods of probabilistic and combinatorial analysis, reduces the computational complexity to as low as O(N). The computation speed of our program is at least 15 times faster than traditional pairwise alignment algorithms without much loss of accuracy. We hence named the algorithm Super Pairwise Alignment (SPA). The pairwise alignment execution program based on SPA and the detailed results of the aligned sequences discussed in this article are available upon request.
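For reference, the O(N²) dynamic-programming baseline mentioned above (Needleman-Wunsch global alignment) looks like the following; the scoring scheme is a hypothetical toy choice, and SPA itself is not reproduced here.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score: fill an (n+1) x (m+1)
    table, hence the O(N^2) time and space the paper seeks to avoid."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap                 # leading gaps in b
    for j in range(1, m + 1):
        F[0][j] = j * gap                 # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]

score = nw_score("GATTACA", "GATTACA")
```

Every cell of the table is visited once, so doubling the sequence length quadruples the work; an O(N) method like SPA must avoid filling this table.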


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号