Similar Articles
20 similar articles retrieved (search time: 15 ms).
1.
2.
Resting-state functional brain imaging studies of network connectivity have long assumed that functional connections are stationary on the timescale of a typical scan. Interest in moving beyond this simplifying assumption has emerged only recently. The great hope is that training the right lens on time-varying properties of whole-brain network connectivity will shed additional light on previously concealed brain activation patterns characteristic of serious neurological or psychiatric disorders. We present evidence that multiple explicitly dynamical properties of time-varying whole-brain network connectivity are strongly associated with schizophrenia, a complex mental illness whose symptomatic presentation can vary enormously across subjects. As with so much brain-imaging research, a central challenge for dynamic network connectivity lies in determining transformations of the data that both reduce its dimensionality and expose features that are strongly predictive of important population characteristics. Our paper introduces an elegant, simple method of reducing and organizing data around which a large constellation of mutually informative and intuitive dynamical analyses can be performed. This framework combines a discrete multidimensional data-driven representation of connectivity space with four core dynamism measures computed from large-scale properties of each subject’s trajectory, i.e., properties not identifiable with any specific moment in time and therefore reasonable to employ in settings lacking inter-subject time-alignment, such as resting-state functional imaging studies. Our analysis exposes pronounced differences between schizophrenia patients (Nsz = 151) and healthy controls (Nhc = 163). Time-varying whole-brain network connectivity patterns are found to be markedly less dynamically active in schizophrenia patients, an effect that is even more pronounced in patients with high levels of hallucinatory behavior. To the best of our knowledge this is the first demonstration that high-level dynamic properties of whole-brain connectivity, generic enough to be commensurable under many decompositions of time-varying connectivity data, exhibit robust and systematic differences between schizophrenia patients and healthy controls.
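To make the dynamic-connectivity idea concrete, here is a minimal sketch of the common sliding-window approach: windowed correlations are clustered into discrete connectivity states, and a simple dynamism measure counts state changes per subject. The window length, number of clusters, and array shapes are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: time series for n_subjects, each (n_timepoints, n_regions).
rng = np.random.default_rng(0)
n_subjects, n_timepoints, n_regions = 10, 300, 20
data = rng.standard_normal((n_subjects, n_timepoints, n_regions))

win, step = 50, 5  # sliding-window length and stride (in TRs), arbitrary choices

def windowed_connectivity(ts, win, step):
    """Lower-triangle correlation values for each sliding window."""
    idx = np.tril_indices(ts.shape[1], k=-1)
    feats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        c = np.corrcoef(ts[start:start + win].T)
        feats.append(c[idx])
    return np.asarray(feats)  # (n_windows, n_pairs)

# Pool windows across subjects and cluster into discrete connectivity "states".
all_windows = np.vstack([windowed_connectivity(s, win, step) for s in data])
states = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(all_windows)

# One simple dynamism measure per subject: how often the state label changes.
n_win = all_windows.shape[0] // n_subjects
labels = states.reshape(n_subjects, n_win)
n_transitions = (np.diff(labels, axis=1) != 0).sum(axis=1)
print("state transitions per subject:", n_transitions)
```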

3.
The hidden Markov model (HMM) is a framework for time series analysis widely applied to single-molecule experiments. Although initially developed for applications outside the natural sciences, the HMM has traditionally been used to interpret signals generated by physical systems, such as single molecules, evolving in a discrete state space observed at discrete time levels dictated by the data acquisition rate. Within the HMM framework, transitions between states are modeled as occurring at the end of each data acquisition period and are described using transition probabilities. Yet, whereas measurements are often performed at discrete time levels in the natural sciences, physical systems evolve in continuous time according to transition rates. It then follows that the modeling assumptions underlying the HMM are justified if the transition rates of a physical process from state to state are small as compared to the data acquisition rate. In other words, HMMs apply to slow kinetics. The problem is, because the transition rates are unknown in principle, it is unclear, a priori, whether the HMM applies to a particular system. For this reason, we must generalize HMMs for physical systems, such as single molecules, because these switch between discrete states in “continuous time”. We do so by exploiting recent mathematical tools developed in the context of inferring Markov jump processes and propose the hidden Markov jump process. We explicitly show in what limit the hidden Markov jump process reduces to the HMM. Resolving the discrete time discrepancy of the HMM has clear implications: we no longer need to assume that processes, such as molecular events, must occur on timescales slower than data acquisition and can learn transition rates even if these are on the same timescale or otherwise exceed data acquisition rates.
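The slow-kinetics assumption discussed above can be illustrated with a toy simulation (not the authors' inference method): a two-state Markov jump process is simulated in continuous time and then observed only at discrete acquisition times; when the transition rates approach or exceed the acquisition rate, most jumps become invisible to a discrete-time HMM. The rates and time step below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_jump_process(rates, t_end):
    """Two-state Markov jump process: rates[i] is the escape rate out of state i."""
    t, state, traj = 0.0, 0, [(0.0, 0)]
    while t < t_end:
        t += rng.exponential(1.0 / rates[state])  # exponentially distributed dwell time
        state = 1 - state
        traj.append((t, state))
    return traj

def discretize(traj, dt, t_end):
    """State observed only at discrete acquisition times (what an HMM would see)."""
    times = np.arange(0.0, t_end, dt)
    jumps = np.array([t for t, _ in traj])
    states = np.array([s for _, s in traj])
    return states[np.searchsorted(jumps, times, side="right") - 1]

dt, t_end = 0.1, 100.0
for rates in [(0.2, 0.2), (20.0, 20.0)]:   # slow vs fast kinetics relative to 1/dt
    traj = simulate_jump_process(rates, t_end)
    obs = discretize(traj, dt, t_end)
    true_jumps = len(traj) - 1
    seen_jumps = int((np.diff(obs) != 0).sum())
    print(f"rates={rates}: true jumps={true_jumps}, visible at dt={dt}: {seen_jumps}")
```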

4.
Two complexity parameters of EEG, i.e. approximate entropy (ApEn) and Kolmogorov complexity (Kc), are utilized to characterize the complexity and irregularity of EEG data under different mental fatigue states. Then kernel principal component analysis (KPCA) and a hidden Markov model (HMM) are combined to differentiate two mental fatigue states. The KPCA algorithm is employed to extract nonlinear features from the complexity parameters of EEG and improve the generalization performance of the HMM. The investigation suggests that ApEn and Kc can effectively describe the dynamic complexity of EEG, which is strongly correlated with mental fatigue. Both complexity parameters are significantly decreased (P < 0.005) as the mental fatigue level increases, so they may serve as indices of the mental fatigue level. Moreover, the joint KPCA–HMM method can effectively reduce the dimensionality of the feature vectors, accelerate the classification speed and achieve higher classification accuracy (84%) of mental fatigue. Hence KPCA–HMM could be a promising model for the estimation of mental fatigue.
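As an illustration of one of the two complexity measures used above, here is a minimal approximate-entropy (ApEn) implementation following Pincus's definition; the embedding dimension m = 2 and tolerance r = 0.2·SD are conventional choices, and the KPCA and HMM stages are not reproduced.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # All length-m templates, compared with the Chebyshev (max-abs) distance.
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = (dists <= r).mean(axis=1)        # includes the self-match, as in ApEn
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 1000)
regular = np.sin(2 * np.pi * t)                   # low complexity
noisy = rng.standard_normal(1000)                 # high complexity
print(approximate_entropy(regular), approximate_entropy(noisy))
```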

5.
In analysis of bioinformatics data, a unique challenge arises from the high dimensionality of measurements. Without loss of generality, we use genomic study with gene expression measurements as a representative example but note that the analysis techniques discussed in this article are also applicable to other types of bioinformatics studies. Principal component analysis (PCA) is a classic dimension reduction approach. It constructs linear combinations of gene expressions, called principal components (PCs). The PCs are orthogonal to each other, can effectively explain variation of gene expressions, and may have a much lower dimensionality. PCA is computationally simple and can be realized using many existing software packages. This article consists of the following parts. First, we review the standard PCA technique and its applications in bioinformatics data analysis. Second, we describe recent 'non-standard' applications of PCA, including accommodating interactions among genes, pathways and network modules and conducting PCA with estimating equations as opposed to gene expressions. Third, we introduce several recently proposed PCA-based techniques, including the supervised PCA, sparse PCA and functional PCA. The supervised PCA and sparse PCA have been shown to have better empirical performance than the standard PCA. The functional PCA can analyze time-course gene expression data. Last, we raise the awareness of several critical but unsolved problems related to PCA. The goal of this article is to make bioinformatics researchers aware of the PCA technique and, more importantly, its most recent developments, so that this simple yet effective dimension reduction technique can be better employed in bioinformatics data analysis.
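A minimal sketch of the standard PCA-via-SVD computation the review starts from, on a hypothetical samples-by-genes matrix; the supervised, sparse, and functional variants it surveys are not shown.

```python
import numpy as np

# Hypothetical expression matrix: rows = samples, columns = genes.
rng = np.random.default_rng(3)
X = rng.standard_normal((60, 500))

# Standard PCA via SVD of the column-centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

pcs = Xc @ Vt.T                       # principal component scores (samples x components)
explained = s**2 / np.sum(s**2)       # fraction of variance explained per component

print("variance explained by first 5 PCs:", explained[:5].round(3))
low_dim = pcs[:, :2]                  # a low-dimensional summary of each sample
```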

6.
We present the codimensional principal component analysis (PCA), a novel and straightforward method for resolving sample heterogeneity within a set of cryo-EM 2D projection images of macromolecular assemblies. The method employs PCA of resampled 3D structures computed using subsets of 2D data obtained with a novel hypergeometric sampling scheme. PCA provides us with a small subset of dominating "eigenvolumes" of the system, whose reprojections are compared with experimental projection data to yield their factorial coordinates constructed in a common framework of the 3D space of the macromolecule. Codimensional PCA is unique in the dramatic reduction of dimensionality of the problem, which facilitates rapid determination of both the plausible number of conformers in the sample and their 3D structures. We applied the codimensional PCA to a complex data set of Thermus thermophilus 70S ribosome, and we identified four major conformational states and visualized high mobility of the stalk base region.  相似文献   

7.
Nguyen PH. Proteins. 2006;65(4):898-913
Employing the recently developed hierarchical nonlinear principal component analysis (NLPCA) method of Saegusa et al. (Neurocomputing 2004;61:57-70 and IEICE Trans Inf Syst 2005;E88-D:2242-2248), the complexities of the free energy landscapes of several peptides, including triglycine, hexaalanine, and the C-terminal beta-hairpin of protein G, were studied. First, the performance of this NLPCA method was compared with standard linear principal component analysis (PCA). In particular, we compared the two methods according to (1) their ability to reduce dimensionality and (2) the efficiency with which they represent peptide conformations in low-dimensional spaces spanned by the first few principal components. The study revealed that NLPCA reduces the dimensionality of the considered systems much better than PCA does. For example, to represent the original beta-hairpin data in a low-dimensional space with a similar error, one needs 4 principal components with NLPCA but 21 with PCA. Second, by representing the free energy landscapes of the considered systems as a function of the first two principal components obtained from PCA, we obtained relatively well-structured free energy landscapes. In contrast, the free energy landscapes of NLPCA are much more complicated, exhibiting many states that are hidden in the PCA maps, especially in the unfolded regions. Furthermore, the study also showed that many states in the PCA maps mix several peptide conformations, while those in the NLPCA maps are purer. This finding suggests that NLPCA should be used to capture the essential features of such systems.
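The paper uses the hierarchical NLPCA of Saegusa et al.; as a rough stand-in, the sketch below compares linear PCA with a generic autoencoder bottleneck (scikit-learn's MLPRegressor) on synthetic data lying near a one-dimensional nonlinear curve. The architecture and data are illustrative assumptions only, not the method or data of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

# Synthetic data lying near a 1-D nonlinear curve embedded in 10 dimensions.
rng = np.random.default_rng(4)
t = rng.uniform(-2, 2, size=(2000, 1))
X = np.hstack([t, np.sin(3 * t), t**2]
              + [0.05 * rng.standard_normal((2000, 1)) for _ in range(7)])

# Linear PCA restricted to a 1-D subspace.
pca = PCA(n_components=1).fit(X)
X_lin = pca.inverse_transform(pca.transform(X))

# A generic autoencoder with a 1-unit bottleneck as a stand-in for hierarchical NLPCA.
ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation="tanh",
                  max_iter=3000, random_state=0)
ae.fit(X, X)
X_nl = ae.predict(X)

print("PCA reconstruction MSE:        ", np.mean((X - X_lin) ** 2))
print("autoencoder reconstruction MSE:", np.mean((X - X_nl) ** 2))
```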

8.
BACKGROUND: Many methods for dimensionality reduction of large data sets such as those generated in microarray studies boil down to the Singular Value Decomposition (SVD). Although singular vectors associated with the largest singular values have strong optimality properties and can often be quite useful as a tool to summarize the data, they are linear combinations of up to all of the data points, and thus it is typically quite hard to interpret those vectors in terms of the application domain from which the data are drawn. Recently, an alternative dimensionality reduction paradigm, CUR matrix decompositions, has been proposed to address this problem and has been applied to genetic and internet data. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Since they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn. RESULTS: We present an implementation to perform CUR matrix decompositions, in the form of a freely available, open source R package called rCUR. This package will help users to perform CUR-based analysis on large-scale data, such as those obtained from different high-throughput technologies, in an interactive and exploratory manner. We show two examples that illustrate how CUR-based techniques make it possible to reduce significantly the number of probes, while at the same time maintaining major trends in data and keeping the same classification accuracy. CONCLUSIONS: The package rCUR provides functions for the users to perform CUR-based matrix decompositions in the R environment. In gene expression studies, it gives an additional way of analyzing differential expression and performing discriminant gene selection based on the use of statistical leverage scores. These scores, which have been used historically in diagnostic regression analysis to identify outliers, can be used by rCUR to identify the most informative data points with respect to which to express the remaining data points.
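rCUR is an R package; the sketch below shows the underlying idea in NumPy rather than R: leverage scores computed from the top-k right singular vectors are used to sample informative columns and rows, and the resulting CUR factors approximate the matrix. The ranks and sample sizes are arbitrary, and the sampling scheme is a generic randomized variant, not necessarily the one implemented in rCUR.

```python
import numpy as np

def leverage_scores(A, k):
    """Leverage scores of the columns of A w.r.t. its top-k right singular vectors."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return np.sum(Vt[:k] ** 2, axis=0) / k          # sums to 1 over the columns

def cur_decomposition(A, k, c, r, rng):
    """Randomized CUR: sample c columns and r rows with leverage-score probabilities."""
    col_p = leverage_scores(A, k)
    row_p = leverage_scores(A.T, k)
    cols = rng.choice(A.shape[1], size=c, replace=False, p=col_p)
    rows = rng.choice(A.shape[0], size=r, replace=False, p=row_p)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R, cols, rows

rng = np.random.default_rng(5)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 400))  # rank-8 "expression" matrix
C, U, R, cols, rows = cur_decomposition(A, k=8, c=20, r=40, rng=rng)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
print("selected probe columns:", np.sort(cols)[:10], "... relative error:", round(err, 4))
```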

9.
10.
Vilis O. Nams. Ecology Letters. 2014;17(10):1228-1237
Animal movement paths show variation in space caused by qualitative shifts in behaviours. I present a method that (1) uses both movement path data and ancillary sensor data to detect natural breakpoints in animal behaviour and (2) groups these segments into different behavioural states. The method can also combine analyses of different path segments or paths from different individuals. It does not assume any underlying movement mechanism. I give an example with simulated data. I also show the effects of random variation, number of states and number of segments on this method. I present a case study of a fisher movement path spanning 8 days, which shows four distinct behavioural states divided into 28 path segments when only turning angles and speed were considered. When accelerometer data were added, the analysis shows seven distinct behavioural states divided into 41 path segments.

11.
This paper presents a nonlinear principal component analysis (PCA) that identifies underlying sources causing the expression of spatial modes or patterns of activity in neuroimaging time-series. The critical aspect of this technique is that, in relation to conventional PCA, the sources can interact to produce (second-order) spatial modes that represent the modulation of one (first-order) spatial mode by another. This nonlinear PCA uses a simple neural network architecture that embodies a specific form for the nonlinear mixing of sources that cause observed data. This form is motivated by a second-order approximation to any general nonlinear mixing and emphasizes interactions among pairs of sources. By introducing these nonlinearities, the principal components are obtained with a unique rotation and scaling that does not depend on the biologically implausible constraints adopted by conventional PCA. The technique is illustrated by application to functional (positron emission tomography and functional magnetic resonance imaging) imaging data, where the ensuing first- and second-order modes can be interpreted in terms of distributed brain systems. The interactions among sources render the expression of any one mode context-sensitive, where that context is established by the expression of other modes. The examples considered include interactions between cognitive states and time (i.e. adaptation or plasticity in PET data) and among functionally specialized brain systems (using an fMRI study of colour and motion processing).

12.
Thalamocortical dynamics, the millisecond to second changes in activity of thalamocortical circuits, are central to perception, action and cognition. Generated by local circuitry and sculpted by neuromodulatory systems, these dynamics reflect the expression of vigilance states. In sleep, thalamocortical dynamics are thought to mediate "offline" functions including memory consolidation and synaptic scaling. Here, I discuss thalamocortical sleep dynamics and their modulation by the ascending arousal system and locally released neurochemicals. I focus on modulation of these dynamics by electrically silent astrocytes, highlighting the role of purinergic signaling in this glial form of communication. Astrocytes modulate cortical slow oscillations, sleep behavior, and sleep-dependent cognitive function. The discovery that astrocytes can modulate sleep dynamics and sleep-related behaviors suggests a new way of thinking about the brain, in which integrated circuits of neurons and glia control information processing and behavioral output.

13.
Transitions between distinct kinetic states of an ion channel are described by a Markov process. Hidden Markov models (HMM) have been successfully applied in the analysis of single ion channel recordings with a small signal-to-noise ratio. However, we have recently shown that the anti-aliasing low-pass filter misleads parameter estimation. Here, we show for the case of a Na+ channel recording that the standard HMM allows neither parameter estimation nor correct identification of the gating scheme. In particular, the number of closed and open states is determined incorrectly, whereas a modified HMM considering the anti-aliasing filter (moving-average filtered HMM) is able to reproduce the characteristic properties of the time series and to perform gating scheme identification.
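A toy illustration of the problem the abstract describes (not the authors' moving-average filtered HMM itself): a two-state gating signal with white noise is passed through a boxcar "anti-aliasing" filter, after which the observation noise is no longer white, violating a standard HMM assumption. All parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two-state gating signal (0 = closed, 1 = open) with additive white noise.
n, p_stay = 20000, 0.98
states = np.zeros(n, dtype=int)
for i in range(1, n):
    states[i] = states[i - 1] if rng.random() < p_stay else 1 - states[i - 1]
raw = states + 0.4 * rng.standard_normal(n)

# Anti-aliasing approximated as a moving-average (boxcar) filter before "sampling".
width = 5
kernel = np.ones(width) / width
filtered = np.convolve(raw, kernel, mode="same")

# The filter correlates the noise: a standard HMM assumes white observation noise,
# which no longer holds for the filtered record.
def lag1(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

noise_raw = raw - states
noise_filt = filtered - np.convolve(states, kernel, mode="same")
print("lag-1 noise autocorrelation, raw vs filtered:",
      round(lag1(noise_raw), 3), round(lag1(noise_filt), 3))
```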

14.
Hörnquist M, Hertz J, Wahde M. Biosystems. 2003;71(3):311-317
Large-scale expression data are today measured for thousands of genes simultaneously. This development has been followed by an exploration of theoretical tools to get as much information out of these data as possible. Several groups have used principal component analysis (PCA) for this task. However, since this approach is data-driven, care must be taken in order not to analyze the noise instead of the data. As a strong warning against uncritical use of the output of a PCA, we employ a newly developed procedure to judge the effective dimensionality of a specific data set. Although this data set is obtained during the development of the rat central nervous system, our finding is a general property of noisy time series data. Based on knowledge of the noise level of the data, we find that the effective number of dimensions that are meaningful to use in a PCA is much lower than what could be expected from the number of measurements. We attribute this fact both to the effects of noise and to the lack of independence of the expression levels. Finally, we explore the possibility of increasing the dimensionality by performing more measurements within one time series, and conclude that this is not a fruitful approach.
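One common way to judge effective dimensionality when the noise level is known, sketched below with made-up numbers: compare the PCA eigenvalues of the data against the largest eigenvalue obtained from pure-noise surrogates. This is a generic illustration, not the specific procedure used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical time-series expression data: 30 time points x 200 genes,
# with only 3 "real" underlying modes plus measurement noise of known level.
n_time, n_genes, n_true_modes, noise_sd = 30, 200, 3, 1.0
signal = (rng.standard_normal((n_time, n_true_modes))
          @ rng.standard_normal((n_true_modes, n_genes))) * 2.0
X = signal + noise_sd * rng.standard_normal((n_time, n_genes))

def pca_eigenvalues(M):
    Mc = M - M.mean(axis=0)
    return np.linalg.svd(Mc, compute_uv=False) ** 2 / (M.shape[0] - 1)

data_eigs = pca_eigenvalues(X)

# Null distribution: eigenvalues of pure noise at the known noise level.
null_max = np.max([pca_eigenvalues(noise_sd * rng.standard_normal(X.shape))
                   for _ in range(200)])

effective_dim = int(np.sum(data_eigs > null_max))
print("nominal dimensions:", min(X.shape), " effective dimensions:", effective_dim)
```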

15.
Conventional methods used to characterize multidimensional neural feature selectivity, such as spike-triggered covariance (STC) or maximally informative dimensions (MID), are limited to Gaussian stimuli or are only able to identify a small number of features due to the curse of dimensionality. To overcome these issues, we propose two new dimensionality reduction methods that use minimum and maximum information models. These methods are information theoretic extensions of STC that can be used with non-Gaussian stimulus distributions to find relevant linear subspaces of arbitrary dimensionality. We compare these new methods to the conventional methods in two ways: with biologically inspired simulated neurons responding to natural images and with recordings from macaque retinal and thalamic cells responding to naturalistic time-varying stimuli. With non-Gaussian stimuli, the minimum and maximum information methods significantly outperform STC in all cases, whereas MID performs best in the regime of low dimensional feature spaces.
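For reference, a minimal sketch of the classical spike-triggered covariance baseline that the proposed information-theoretic methods extend, using a simulated energy-model neuron and Gaussian stimuli (the regime in which STC is valid); the filters and parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated "energy model" neuron: Gaussian stimuli, firing driven by two quadratic filters.
n_samples, dim = 50000, 40
stim = rng.standard_normal((n_samples, dim))
f1 = np.sin(np.linspace(0, 2 * np.pi, dim))
f2 = np.cos(np.linspace(0, 2 * np.pi, dim))
drive = (stim @ f1) ** 2 + (stim @ f2) ** 2
spikes = rng.random(n_samples) < 0.05 * drive / drive.mean()

# Spike-triggered average and covariance difference.
st = stim[spikes]
sta = st.mean(axis=0)                       # roughly zero for this symmetric model
stc = np.cov(st, rowvar=False) - np.cov(stim, rowvar=False)

# For Gaussian stimuli, eigenvectors of the STC difference with the largest-magnitude
# eigenvalues estimate the relevant stimulus subspace.
eigvals, eigvecs = np.linalg.eigh(stc)
recovered = eigvecs[:, np.argsort(-np.abs(eigvals))[:2]]   # orthonormal columns

true, _ = np.linalg.qr(np.column_stack([f1, f2]))          # orthonormal true subspace
overlap = np.linalg.svd(recovered.T @ true, compute_uv=False)
print("subspace alignment (values near 1 mean recovered):", overlap.round(3))
```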

16.
17.
Evidence accumulation models provide a dominant account of human decision-making, and have been particularly successful at explaining behavioral and neural data in laboratory paradigms using abstract, stationary stimuli. It has been proposed, but with limited in-depth investigation so far, that similar decision-making mechanisms are involved in tasks of a more embodied nature, such as movement and locomotion, by directly accumulating externally measurable sensory quantities of which the precise, typically continuously time-varying, magnitudes are important for successful behavior. Here, we leverage collision threat detection as a task which is ecologically relevant in this sense, but which can also be rigorously observed and modelled in a laboratory setting. Conventionally, it is assumed that humans are limited in this task by a perceptual threshold on the optical expansion rate (the visual looming) of the obstacle. Using concurrent recordings of EEG and behavioral responses, we disprove this conventional assumption, and instead provide strong evidence that humans detect collision threats by accumulating the continuously time-varying visual looming signal. Generalizing existing accumulator model assumptions from stationary to time-varying sensory evidence, we show that our model accounts for previously unexplained empirical observations and full distributions of detection responses. We replicate a pre-response centroparietal positivity (CPP) in scalp potentials, which has previously been found to correlate with accumulated decision evidence. In contrast with these existing findings, we show that our model is capable of predicting the onset of the CPP signature rather than its buildup, suggesting that neural evidence accumulation is implemented differently, possibly in distinct brain regions, in collision detection compared to previously studied paradigms.
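A generic accumulator sketch of the idea tested above: the continuously time-varying looming signal is accumulated with noise until a threshold is crossed. The obstacle geometry, gain, offset, noise level, and threshold are arbitrary illustrative values, not the authors' fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(8)

def looming_rate(t, d0=60.0, v=15.0, width=1.8):
    """Optical expansion rate (rad/s) of an object of given width approaching at constant speed."""
    d = np.maximum(d0 - v * t, 0.5)                     # distance (m), floored near impact
    return width * v / (d ** 2 + (width / 2) ** 2)      # d/dt of 2*arctan(width / (2*d))

dt, t_max, n_trials = 0.01, 4.0, 2000
t = np.arange(0.0, t_max, dt)
evidence_input = looming_rate(t)                        # continuously time-varying evidence

# Drift-diffusion-style accumulation of the gained, offset, noisy looming signal.
gain, offset, noise_sd, threshold = 40.0, 0.08, 0.3, 1.0
drift = gain * np.maximum(evidence_input - offset, 0.0)
noise = noise_sd * np.sqrt(dt) * rng.standard_normal((n_trials, t.size))
accumulator = np.cumsum(drift * dt + noise, axis=1)

detected = accumulator.max(axis=1) > threshold
first_cross = np.argmax(accumulator > threshold, axis=1)
rt = t[first_cross[detected]]
print(f"detected on {detected.mean():.0%} of trials; median detection time {np.median(rt):.2f} s")
```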

18.
Recent studies have shown that multivariate pattern analysis (MVPA) can be useful for distinguishing brain disorders into categories. Such analyses can substantially enrich and facilitate clinical diagnoses. Using MVPA methods, whole brain functional networks, especially those derived using different frequency windows, can be applied to detect brain states. We constructed whole brain functional networks for groups of vascular dementia (VaD) patients and controls using resting state BOLD-fMRI (rsfMRI) data from three frequency bands: slow-5 (0.01–0.027 Hz), slow-4 (0.027–0.073 Hz), and whole-band (0.01–0.073 Hz). Then we used the support vector machine (SVM), a type of MVPA classifier, to determine the patterns of functional connectivity. Our results showed that the brain functional networks derived from rsfMRI data (19 VaD patients and 20 controls) in these three frequency bands appear to reflect neurobiological changes in VaD patients. Such differences could be used to differentiate the brain states of VaD patients from those of healthy individuals. We also found that the functional connectivity patterns of the human brain in the three frequency bands differed, as did their ability to differentiate brain states. Specifically, the ability of the functional connectivity pattern to differentiate VaD brains from healthy ones was more efficient in the slow-5 (0.01–0.027 Hz) band than in the other two frequency bands. Our findings suggest that the MVPA approach could be used to detect abnormalities in the functional connectivity of VaD patients in distinct frequency bands. Identifying such abnormalities may contribute to our understanding of the pathogenesis of VaD.
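A minimal sketch of a band-specific connectivity plus SVM pipeline of the kind described above, on synthetic data with hypothetical dimensions; the study's actual preprocessing and network construction are not reproduced, so the accuracies printed here only illustrate the workflow.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(9)
tr, n_subj, n_tp, n_rois = 2.0, 40, 200, 30           # hypothetical rs-fMRI dimensions
labels = np.repeat([0, 1], n_subj // 2)               # 0 = control, 1 = patient (synthetic)
ts = rng.standard_normal((n_subj, n_tp, n_rois))

def band_connectivity(series, low, high, fs):
    """Lower-triangle correlations of band-pass-filtered ROI time courses."""
    b, a = butter(3, [low, high], btype="bandpass", fs=fs)
    filt = filtfilt(b, a, series, axis=0)
    c = np.corrcoef(filt.T)
    return c[np.tril_indices(series.shape[1], k=-1)]

bands = {"slow-5": (0.01, 0.027), "slow-4": (0.027, 0.073), "whole": (0.01, 0.073)}
for name, (lo, hi) in bands.items():
    X = np.array([band_connectivity(s, lo, hi, fs=1.0 / tr) for s in ts])
    acc = cross_val_score(SVC(kernel="linear"), X, labels, cv=5).mean()
    print(f"{name}: cross-validated accuracy on synthetic data = {acc:.2f}")
```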

19.
Synchronization of neural oscillations is thought to facilitate communication in the brain. Neurodegenerative pathologies such as Parkinson’s disease (PD) can result in synaptic reorganization of the motor circuit, leading to altered neuronal dynamics and impaired neural communication. Treatments for PD aim to restore network function via pharmacological means such as dopamine replacement, or by suppressing pathological oscillations with deep brain stimulation. We tested the hypothesis that brain stimulation can operate beyond a simple “reversible lesion” effect to augment network communication. Specifically, we examined the modulation of beta band (14–30 Hz) activity, a known biomarker of motor deficits and potential control signal for stimulation in Parkinson’s. To do this we set up a neural mass model of population activity within the cortico-basal ganglia-thalamic (CBGT) circuit with parameters that were constrained to yield spectral features comparable to those in experimental Parkinsonism. We modulated the connectivity of two major pathways known to be disrupted in PD and constructed statistical summaries of the spectra and functional connectivity of the resulting spontaneous activity. These were then used to assess the network-wide outcomes of closed-loop stimulation delivered to motor cortex and phase locked to subthalamic beta activity. Our results demonstrate that the spatial pattern of beta synchrony is dependent upon the strength of inputs to the STN. Precisely timed stimulation has the capacity to recover network states, with stimulation phase inducing activity with distinct spectral and spatial properties. These results provide a theoretical basis for the design of the next-generation brain stimulators that aim to restore neural communication in disease.
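Not the neural mass model itself, but a toy illustration of phase-locked stimulation triggering on beta activity: a synthetic LFP is band-pass filtered, its phase extracted with an (offline) Hilbert transform, and "pulses" are marked near a target phase. A real closed-loop device would need a causal phase estimator; all signal and trigger parameters here are invented.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(10)
fs, dur = 1000, 10.0                                    # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)

# Synthetic STN-like signal: a 20 Hz beta rhythm buried in random-walk noise.
beta = np.sin(2 * np.pi * 20 * t)
lfp = beta + 1.5 * np.cumsum(rng.standard_normal(t.size)) / np.sqrt(fs)

# Extract beta phase (14-30 Hz) and trigger "stimulation" near a target phase.
b, a = butter(3, [14, 30], btype="bandpass", fs=fs)
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

target_phase, tolerance = 0.0, 0.05                     # radians
stim_times = t[np.abs(np.angle(np.exp(1j * (phase - target_phase)))) < tolerance]
print(f"{stim_times.size} stimulation samples; first few at", stim_times[:5].round(3), "s")
```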

20.
Single-molecule fluorescence resonance energy transfer (smFRET) measurement is a powerful technique for investigating dynamics of biomolecules, for which various efforts have been made to overcome significant stochastic noise. Time stamp (TS) measurement has been employed experimentally to enrich the information within the signals, while data analyses such as the hidden Markov model (HMM) have been successfully applied to recover the trajectories of molecular state transitions from time-binned photon counting signals or images. In this article, we introduce the HMM for TS-FRET signals, employing variational Bayes (VB) inference to solve the model, and demonstrate the application of VB-HMM-TS-FRET to simulated TS-FRET data. The same analysis using VB-HMM is conducted for other models and the previously reported change point detection scheme. The performance is compared with other analysis methods and data types, and we show that our VB-HMM-TS-FRET analysis achieves the best performance and the highest time resolution. Finally, an smFRET experiment was conducted to observe spontaneous branch migration of Holliday-junction DNA. VB-HMM-TS-FRET was successfully applied to reconstruct the state transition trajectory with a number of states consistent with the nucleotide sequence. The results suggest that a single migration process frequently involves rearrangement of multiple basepairs.

