Similar Documents
20 similar documents retrieved.
1.
2.
Does each cognitive task elicit a new cognitive network in the brain each time? Recent data suggest that pre-existing repertoires of a much smaller number of canonical network components are selectively and dynamically used to compute new cognitive tasks. To this end, we propose a novel method (graph-ICA) that seeks to extract these canonical network components from a limited number of resting-state spontaneous networks. Graph-ICA decomposes a weighted mixture of source edge-sharing subnetworks with differently weighted edges by applying independent component analysis to cross-sectional brain networks represented as graphs. We evaluated the plausibility of the method in a simulation study and identified 49 intrinsic subnetworks by applying it to resting-state fMRI data. Using the derived subnetwork repertoires, we decomposed brain networks acquired during specific tasks, including motor activity, working memory exercises, and verb generation, and identified subnetworks associated with performance on these tasks. We also analyzed sex differences in the utilization of subnetworks, which proved useful for characterizing group networks. These results suggest that the method can effectively identify task-specific as well as sex-specific functional subnetworks. Moreover, graph-ICA provides more direct information on the edge weights among brain regions working together as a network, which cannot be obtained directly through voxel-level spatial ICA.
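For illustration only, the sketch below shows one way the graph-ICA idea could be realized with an off-the-shelf ICA implementation: each subject's connectivity matrix is vectorized into its upper-triangular edge weights and ICA is run across subjects, so that every independent component is an edge-weighted subnetwork. The function name, the choice of scikit-learn's FastICA, and parameters such as n_components=49 are assumptions, not the authors' code.

```python
# Hypothetical sketch of the graph-ICA idea (not the authors' implementation).
import numpy as np
from sklearn.decomposition import FastICA

def graph_ica(conn_matrices, n_components=49, random_state=0):
    """conn_matrices: (n_subjects, n_rois, n_rois) symmetric matrices.
    Requires n_subjects >= n_components."""
    n_subjects, n_rois, _ = conn_matrices.shape
    iu = np.triu_indices(n_rois, k=1)
    # rows = vectorized edge weights, columns = subjects
    edges = np.stack([m[iu] for m in conn_matrices], axis=1)
    ica = FastICA(n_components=n_components, random_state=random_state)
    sources = ica.fit_transform(edges)      # (n_edges, n_components): edge-level components
    subject_weights = ica.mixing_           # (n_subjects, n_components): per-subject loadings
    # fold each edge-level component back into a symmetric adjacency matrix
    subnets = np.zeros((n_components, n_rois, n_rois))
    for k, comp in enumerate(sources.T):
        subnets[k][iu] = comp
        subnets[k] += subnets[k].T
    return subnets, subject_weights
```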

3.
4.
A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks and on optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on the training samples. Second, another subnetwork with n inputs and n outputs is trained, taking the outputs of the first subnetwork as its inputs and the outputs of the training samples as its desired outputs. Finally, the two trained subnetworks are connected to form a trained multilayer neural network. Numerical simulation results based on both the linear least-squares back-propagation (LSB) and the traditional back-propagation (BP) algorithms demonstrate the efficiency of the proposed method.
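As an illustration of cascade, layer-wise training, the sketch below fits two small subnetworks in sequence with plain linear least squares (numpy); the tanh link between the stages and all names are assumptions for demonstration, not the paper's exact LSB algorithm.

```python
# Illustrative two-stage cascade training by layer-wise least squares.
import numpy as np

def _augment(inputs):
    return np.hstack([inputs, np.ones((inputs.shape[0], 1))])  # append bias column

def fit_linear(inputs, targets):
    weights, *_ = np.linalg.lstsq(_augment(inputs), targets, rcond=None)
    return weights

def cascade_train(X, Y):
    """X: (n_samples, m) inputs; Y: (n_samples, n) desired outputs."""
    # Step 1: subnetwork 1 (m inputs, n outputs) fitted toward Y by least squares.
    W1 = fit_linear(X, Y)
    H = np.tanh(_augment(X) @ W1)          # squashed outputs of subnetwork 1
    # Step 2: subnetwork 2 (n inputs, n outputs) fitted on (H, Y).
    W2 = fit_linear(H, Y)
    prediction = _augment(H) @ W2          # output of the connected cascade
    return W1, W2, prediction
```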

5.
6.

Background

Accurate prediction of cancer prognosis based on gene expression data is generally difficult, and identifying robust prognostic markers for cancer remains a challenging problem. Recent studies have shown that modular markers, such as pathway markers and subnetwork markers, can provide better snapshots of the underlying biological mechanisms by incorporating additional biological information, thereby leading to more accurate cancer classification.

Results

In this paper, we propose a novel method for simultaneously identifying robust synergistic subnetwork markers that can accurately predict cancer prognosis. The proposed method utilizes an efficient message-passing algorithm called affinity propagation, based on which we identify groups – or subnetworks – of discriminative and synergistic genes, whose protein products are closely located in the protein-protein interaction (PPI) network. Unlike other existing subnetwork marker identification methods, our proposed method can simultaneously identify multiple nonoverlapping subnetwork markers that can synergistically predict cancer prognosis.
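Purely as a sketch of the grouping step, the code below clusters discriminative genes into candidate subnetwork markers with scikit-learn's affinity propagation, restricting similarity to gene pairs adjacent in the PPI network. The similarity definition and scoring are simplifications assumed for illustration, not the paper's message-passing formulation.

```python
# Hypothetical grouping of discriminative genes into subnetwork markers.
import numpy as np
import networkx as nx
from sklearn.cluster import AffinityPropagation

def subnetwork_markers(expr, labels, ppi_edges, genes):
    """expr: (n_samples, n_genes); labels: 0/1 prognosis classes;
    ppi_edges: list of (gene, gene) pairs; genes: column names of expr."""
    ppi = nx.Graph(ppi_edges)
    # per-gene discriminative power: absolute difference of class means
    score = np.abs(expr[labels == 1].mean(0) - expr[labels == 0].mean(0))
    corr = np.corrcoef(expr.T)
    sim = np.full_like(corr, -1e6)          # discourage grouping non-interacting genes
    for i, gi in enumerate(genes):
        for j, gj in enumerate(genes):
            if i == j or ppi.has_edge(gi, gj):
                sim[i, j] = corr[i, j] * (score[i] + score[j])
    ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(sim)
    return {k: [genes[i] for i in np.where(ap.labels_ == k)[0]]
            for k in np.unique(ap.labels_)}
```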

Conclusions

Evaluation results based on multiple breast cancer datasets demonstrate that the proposed message-passing approach can identify robust subnetwork markers in the human PPI network, which have higher discriminative power and better reproducibility than those identified by previous methods. The identified subnetwork markers can lead to better cancer classifiers with improved overall performance and consistency across independent cancer datasets.

7.
A specific memory might be stored in a subnetwork consisting of a small population of neurons. To select the neurons involved in memory formation, neural competition might be essential. In this paper, we show that excitable neurons are competitive and organize into two assemblies in a recurrent network with spike-timing-dependent synaptic plasticity (STDP) and axonal conduction delays. Neural competition is established by the cooperation of spontaneously induced neural oscillation, axonal conduction delays, and STDP. We also suggest that the competition mechanism described here is one of the basic functions required to organize memory-storing subnetworks into fine-scale cortical networks.
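For reference, the snippet below implements the textbook pair-based STDP rule mentioned above (exponential potentiation/depression windows); the parameter values are illustrative assumptions, not those used in the paper.

```python
# Generic pair-based STDP weight update (illustrative parameters).
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """dt = t_post - t_pre in ms; positive dt (pre before post) potentiates."""
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau_plus)     # long-term potentiation
    else:
        dw = -a_minus * np.exp(dt / tau_minus)   # long-term depression
    return float(np.clip(w + dw, w_min, w_max))
```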

8.
Signalling pathways are complex biochemical networks responsible for regulating numerous cellular functions. These networks operate through serial and successive interactions among a large number of vital biomolecules and chemical compounds. A modularized study is helpful for deciphering and analysing the underlying mechanisms of such networks. Here we propose an algorithm for modularizing the calcium signalling pathway of H. sapiens. The idea that “a node whose function depends on the maximum number of other nodes tends to be the center of a subnetwork” is used to divide a large signalling network into smaller subnetworks. Whether a node is included in a subnetwork depends on its outdegree, where the outdegree of a node refers to the number of its relations lying outside the constructed subnetwork. Nodes having more than c relations lying outside the expanding subnetwork are excluded from it. Here c is a user-specified variable, fixed during the final adjustment of the created subnetworks so that a certain biological significance can be conferred on them.
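The sketch below is a plain re-implementation of the stated rule (grow a subnetwork around a high-degree seed and exclude nodes with more than c relations outside the growing subnetwork), written with networkx for illustration; it is not the authors' code, and the default value of c is arbitrary.

```python
# Illustrative outdegree-based modularization of a signalling network.
import networkx as nx

def grow_subnetwork(graph, seed, c):
    module = {seed}
    frontier = set(graph.neighbors(seed))
    while frontier:
        node = frontier.pop()
        outside = sum(1 for nb in graph.neighbors(node) if nb not in module)
        if outside <= c:                     # keep nodes mostly contained in the module
            module.add(node)
            frontier |= set(graph.neighbors(node)) - module
    return module

def modularize(graph, c=3):
    remaining, modules = graph.copy(), []
    while remaining.number_of_nodes() > 0:
        seed = max(remaining.degree, key=lambda kv: kv[1])[0]   # highest-degree node
        module = grow_subnetwork(remaining, seed, c)
        modules.append(module)
        remaining.remove_nodes_from(module)
    return modules
```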

9.
A discrete model of a biological regulatory network can be represented by a discrete function that contains all available information on interactions between network components and the rules governing the evolution of the network in a finite state space. Since the state-space size grows exponentially with the number of network components, analysis of large networks is a complex problem. In this paper, we introduce the notion of a symbolic steady state, which allows us to identify subnetworks that govern the dynamics of the original network in some region of state space. We state rules to explicitly construct attractors of the system from subnetwork attractors. Using these results, we formulate sufficient conditions for the existence of multiple attractors (respectively, a cyclic attractor) based on the existence of positive (respectively, negative) feedback circuits in the graph representing the structure of the system. In addition, we discuss approaches to finding symbolic steady states. We consider dynamics derived via both synchronous and asynchronous update rules. Lastly, we illustrate the results by analyzing a model of T helper cell differentiation.
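As a concrete, brute-force illustration of the notion used above, the following check treats a partial assignment as a symbolic steady state if every fixed component keeps its value under the update function for all completions of the free components; this is a naive enumeration for small Boolean networks, not the paper's construction.

```python
# Naive check of a candidate symbolic steady state in a small Boolean network.
from itertools import product

def is_symbolic_steady_state(update, n, fixed):
    """update: state-tuple -> state-tuple; n: number of components;
    fixed: dict {index: 0/1} giving the partially specified state."""
    free = [i for i in range(n) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        state = [0] * n
        for i, v in fixed.items():
            state[i] = v
        for i, v in zip(free, values):
            state[i] = v
        nxt = update(tuple(state))
        if any(nxt[i] != v for i, v in fixed.items()):
            return False        # some fixed component would change its value
    return True
```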

10.
Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., a cell or tissue), and it searches for a flux-consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, compared with a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms.
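To make the notion of flux consistency concrete, the snippet below tests whether a single reaction can carry non-zero flux under the steady-state constraint S v = 0, using one linear program (scipy); this is only a simplified building block of context-specific reconstruction, not the fastcore algorithm itself, and the tolerance eps is an assumption.

```python
# Simplified flux-consistency test for one reaction (forward direction only).
import numpy as np
from scipy.optimize import linprog

def can_carry_flux(S, lb, ub, reaction, eps=1e-4):
    """S: stoichiometric matrix (metabolites x reactions); lb, ub: flux bounds."""
    c = np.zeros(S.shape[1])
    c[reaction] = -1.0                            # maximize flux through the reaction
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    # reversible reactions would also need the reverse-direction test
    return bool(res.success) and -res.fun >= eps
```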

11.
It has been shown that dynamic recurrent neural networks are successful in identifying the complex mapping relationship between full-wave-rectified electromyographic (EMG) signals and limb trajectories during complex movements. These connectionist models include two types of adaptive parameters: the interconnection weights between the units and the time constants associated with each neuron-like unit; they are governed by continuous-time equations. Due to their internal structure, these models are particularly appropriate for solving dynamical tasks (with time-varying input and output signals). We show in this paper that the architecture of these recurrent networks can be refined by introducing a modular organization, dedicated to different aspects of the dynamical mapping, that includes privileged communication channels. We first divide the initial individual network into two communicating subnetworks. These two modules receive the same EMG signals as input but are involved in different identification tasks related to position and acceleration. We then show that the introduction of an artificial distance in the model (using a Gaussian modulation factor on the weights) induces a reduced modular architecture based on a self-elimination of null synaptic weights. Moreover, this self-selected reduced model based on two subnetworks performs the identification task better than the original single network while using fewer free parameters (better learning curve and better identification quality). We also show that this modular network exhibits several features that can be considered biologically plausible after the learning process: self-selection of a specific inhibitory communication path between the two subnetworks, the appearance of tonic and phasic neurons, and a coherent distribution of the values of the time constants within each subnetwork. Received: 17 September 2001 / Accepted in revised form: 15 January 2002

12.
Random network models have been a popular tool for investigating cortical network dynamics. On the scale of roughly a cubic millimeter of cortex, containing about 100,000 neurons, cortical anatomy suggests a more realistic architecture. In this locally connected random network, the connection probability decreases in a Gaussian fashion with the distance between neurons. Here we present three main results from a simulation study of the activity dynamics in such networks. First, for a broad range of parameters these dynamics exhibit a stationary state of asynchronous network activity with irregular single-neuron spiking. This state can be used as a realistic model of ongoing network activity. The parametric dependence of this state and the nature of the network dynamics in other regimes are described. Second, a synchronous excitatory stimulus to a fraction of the neurons results in a strong activity response that easily dominates the network dynamics. Third, due to that activity response, an embedding of a divergent-convergent feed-forward subnetwork (as in synfire chains) does not naturally lead to stable propagation of synchronous activity in the subnetwork; this is in contrast to our earlier findings in isolated subnetworks of that type. Possible mechanisms for stabilizing the interplay of volleys of synchronous spikes and network dynamics by specific learning rules or generalizations of the subnetworks are discussed.
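For illustration, the following helper builds such a locally connected random network: the probability of a connection between two neurons falls off as a Gaussian of their distance. The parameter values are arbitrary placeholders, not those of the simulation study.

```python
# Illustrative construction of a locally connected random network.
import numpy as np

def locally_connected_network(positions, p_max=0.1, sigma=0.2, rng=None):
    """positions: (n_neurons, 2) coordinates on a cortical patch (e.g. in mm)."""
    rng = np.random.default_rng() if rng is None else rng
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    prob = p_max * np.exp(-dist**2 / (2.0 * sigma**2))  # Gaussian connection profile
    adjacency = rng.random(prob.shape) < prob
    np.fill_diagonal(adjacency, False)                  # no self-connections
    return adjacency
```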

13.
14.
15.
Tracking moving objects, including one's own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network, a modified form of an exponential family harmonium (EFH), that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
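For reference, this is the standard Kalman filter recursion for the linear-Gaussian system described above (x_t = A x_{t-1} + w, y_t = C x_t + v); it is the textbook estimator the network is compared against, not the paper's EFH-based implementation.

```python
# One predict/update step of the standard Kalman filter.
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """x, P: prior state mean and covariance; y: new observation."""
    x_pred = A @ x                              # predict
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                    # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)       # update with the observation
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```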

16.
We address the problem of using expression data and prior biological knowledge to identify differentially expressed pathways or groups of genes. Following an idea of Ideker et al. (2002), we construct a gene interaction network and search for high-scoring subnetworks. We make several improvements in terms of scoring functions and algorithms, resulting in higher speed and accuracy and easier biological interpretation. We also assign significance levels to our results, adjusted for multiple testing. Our methods are successfully applied to three human microarray data sets, related to cancer and the immune system, retrieving several known and potential pathways. The method, denoted by the acronym GXNA (Gene eXpression Network Analysis), is implemented in software that is publicly available and can be used on virtually any microarray data set. Supplementary information: The source code and executable for the software, as well as certain supplemental materials, can be downloaded from http://stat.stanford.edu/~serban/gxna.
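A rough sketch of greedy high-scoring subnetwork search is shown below: starting from a seed gene, neighbors in the interaction network are added while they improve an aggregate score (here, the mean of per-gene differential-expression z-scores). The scoring function and stopping rule are simplifications for illustration, not GXNA's exact procedure.

```python
# Illustrative greedy search for a high-scoring subnetwork.
import numpy as np
import networkx as nx

def greedy_subnetwork(graph, z, seed, max_size=20):
    """graph: gene interaction network; z: dict gene -> differential-expression z-score."""
    subnet, score = {seed}, z[seed]
    while len(subnet) < max_size:
        candidates = {nb for g in subnet for nb in graph.neighbors(g)} - subnet
        if not candidates:
            break
        best = max(candidates, key=lambda g: z[g])
        new_score = np.mean([z[g] for g in subnet | {best}])
        if new_score <= score:
            break                                # stop when no neighbor improves the score
        subnet.add(best)
        score = new_score
    return subnet, score
```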

17.
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing SIR algorithms unavoidably suffer from a heavy computational load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed “ALM-ANAD”. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm achieves noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and the universal quality index metric.
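To make the criterion concrete, the snippet below writes a simplified penalized weighted least-squares objective for a linear CT model A x ≈ y with per-ray weights and a quadratic roughness penalty, together with its gradient; it only illustrates the kind of objective being minimized and is not the ALM-ANAD algorithm.

```python
# Simplified PWLS objective and gradient, for illustration only.
import numpy as np

def pwls_cost_and_grad(x, A, y, w, D, beta):
    """A: system matrix; y: measured sinogram; w: per-ray statistical weights;
    D: finite-difference operator; beta: regularization strength."""
    r = A @ x - y
    cost = 0.5 * np.sum(w * r**2) + 0.5 * beta * np.sum((D @ x)**2)
    grad = A.T @ (w * r) + beta * (D.T @ (D @ x))
    return cost, grad

# One illustrative gradient-descent step:
#   cost, grad = pwls_cost_and_grad(x, A, y, w, D, beta)
#   x = x - step * grad
```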

18.
X-ray based Phase-Contrast Imaging (PCI) techniques have been demonstrated to enhance the visualization of soft tissues compared with conventional imaging methods. Nevertheless, the delivered dose reported in the literature on biomedical PCI applications often equals or exceeds the limits prescribed in clinical diagnostics. The optimization of new computed tomography strategies, including the development and implementation of advanced image reconstruction procedures, is thus a key aspect. In this scenario, we implemented a dictionary learning method with a new form of convex functional. This functional contains, in addition to the usual sparsity-inducing and fidelity terms, a new term which forces similarity between overlapping patches in the superimposed regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing norm of the patch basis-function coefficients, and a coefficient multiplying the norm of the differences between patches in the overlapping regions. The solution is found by applying the iterative proximal gradient descent method with FISTA acceleration. The gradient is computed by calculating the projection of the solution and its error backprojection at each iterative step. We study the quality of the solution, as a function of the regularization parameters and the noise, on synthetic data for which the solution is known a priori. We apply the method to experimental data in the case of Differential Phase Tomography. For this case we use an original approach which consists in using vectorial patches, each patch having two components, one for each gradient component. The resulting algorithm, implemented in the European Synchrotron Radiation Facility tomography reconstruction code PyHST, has proven to be efficient and well adapted to strongly reducing the required dose and the number of projections in medical tomography.
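For orientation, the generic FISTA loop referred to above is sketched for a plain sparse-coding subproblem, minimizing 0.5*||D c - p||^2 + lam*||c||_1 over the coefficients c of a patch p in a dictionary D, via soft-thresholded proximal gradient steps with momentum; this is the standard accelerated method, not the paper's full vectorial-patch functional or its PyHST implementation.

```python
# Generic FISTA for a sparse-coding subproblem (illustrative).
import numpy as np

def soft_threshold(v, thr):
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def fista_sparse_code(D, p, lam, n_iter=200):
    """D: dictionary (patch basis functions as columns); p: patch to encode."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
    c = np.zeros(D.shape[1])
    z, t = c.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - p)
        c_new = soft_threshold(z - grad / L, lam / L)   # proximal (shrinkage) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = c_new + ((t - 1.0) / t_new) * (c_new - c)   # momentum extrapolation
        c, t = c_new, t_new
    return c
```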

19.
We study an adaptive statistical approach to analyzing brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach operates at a middle level between a global analysis and an analysis of single connections by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two anatomical brain regions or the intra-connectivity within a single anatomical brain region. An appropriate summary statistic that characterizes a meaningful feature of the subnetwork is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. Reformulating the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy, in terms of power, with one based on individual measures. We show that this strategy has great potential, in particular when the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with 22q11.2 deletion syndrome, distinguished by their IQ scores.
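As an illustrative version of this testing strategy, the code below computes a summary statistic per subnetwork (here simply the group difference in mean connection weight), derives a permutation p-value, and corrects the resulting p-values for false discoveries; the choice of statistic and of the Benjamini-Hochberg correction are assumptions, not necessarily those of the study.

```python
# Illustrative subnetwork-level permutation test with FDR correction.
import numpy as np
from statsmodels.stats.multitest import multipletests

def subnetwork_pvalue(conn, labels, edge_idx, n_perm=5000, rng=None):
    """conn: (n_subjects, n_edges) connectomes; labels: 0/1 group labels;
    edge_idx: indices of the edges forming one subnetwork."""
    rng = np.random.default_rng() if rng is None else rng
    stat = lambda lab: (conn[lab == 1][:, edge_idx].mean()
                        - conn[lab == 0][:, edge_idx].mean())
    observed = stat(labels)
    null = np.array([stat(rng.permutation(labels)) for _ in range(n_perm)])
    return (np.sum(np.abs(null) >= np.abs(observed)) + 1) / (n_perm + 1)

def test_subnetworks(conn, labels, subnetworks, alpha=0.05):
    pvals = [subnetwork_pvalue(conn, labels, idx) for idx in subnetworks]
    reject, p_adj, *_ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return reject, p_adj
```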

20.

Background

Boolean network modeling has been widely used to model large-scale biomolecular regulatory networks because it can describe the essential dynamical characteristics of complicated networks in a relatively simple way. When analyzing such Boolean network models, we often need to identify attractor states in order to investigate the converging-state features that represent particular cell phenotypes. This is, however, very difficult (and often impossible) for a large network because of the computational complexity involved.

Results

There have been attempts to resolve this problem by partitioning the original network into smaller subnetworks and reconstructing the attractor states by integrating the local attractors obtained from each subnetwork. In many cases, however, the partitioned subnetworks are still too large and such an approach is no longer useful. We therefore investigated the fundamental reason underlying this problem and propose a novel, efficient way of hierarchically partitioning a given large network into smaller subnetworks by focusing on the attractors corresponding to a particular phenotype of interest instead of considering all attractors at the same time. Using the definition of attractors, we can adopt a simplified update rule with fixed state values for some nodes. The resulting subnetworks were small enough to identify the corresponding local attractors, which can then be integrated to reconstruct the global attractor states of the original large network.
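A small brute-force illustration of the clamping idea is given below: the nodes fixed by the phenotype of interest keep their values, the update rule is simplified accordingly, and the attractors of the reduced synchronous dynamics are enumerated. This toy enumeration is only feasible for small subnetworks and is not the authors' algorithm.

```python
# Toy enumeration of attractors of a Boolean (sub)network with clamped nodes.
from itertools import product

def attractors_with_clamped_nodes(update, n, clamped):
    """update: synchronous update, state-tuple -> state-tuple;
    clamped: dict {index: value} of nodes fixed by the phenotype of interest."""
    def step(state):
        nxt = list(update(state))
        for i, v in clamped.items():          # clamped nodes keep their fixed value
            nxt[i] = v
        return tuple(nxt)

    free = [i for i in range(n) if i not in clamped]
    attractors = set()
    for values in product([0, 1], repeat=len(free)):
        state = [0] * n
        for i, v in clamped.items():
            state[i] = v
        for i, v in zip(free, values):
            state[i] = v
        state, seen = tuple(state), []
        while state not in seen:              # iterate until the trajectory closes a cycle
            seen.append(state)
            state = step(state)
        attractors.add(frozenset(seen[seen.index(state):]))
    return attractors
```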

Conclusions

The proposed approach can substantially extend the current limit of Boolean network modeling for converging state analysis of biological networks.
