Similar Literature
 A total of 20 similar documents were retrieved (search time: 31 ms).
1.
Reduced models have long been used as a tool for the analysis of the complex activity taking place in neurons and their coupled networks. Recent advances in experimental and theoretical techniques have further demonstrated the usefulness of this approach. Despite the often gross simplification of the underlying biophysical properties, reduced models can still present significant difficulties in their analysis, with the majority of exact and perturbative results available only for the leaky integrate-and-fire model. Here, an elementary numerical scheme is demonstrated that can be used to calculate a number of biologically important properties of the general class of non-linear integrate-and-fire models. Exact results for the first-passage-time density and spike-train spectrum are derived, as well as the linear response properties and emergent states of recurrent networks. Given that the exponential integrate-and-fire model has recently been shown to agree closely with the experimentally measured response of pyramidal cells, the methodology presented here promises to provide a convenient tool to facilitate the analysis of cortical-network dynamics.
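For orientation, the sketch below integrates the exponential integrate-and-fire model with a simple forward-Euler scheme and a threshold-and-reset rule; the parameter values and constant external drive are generic textbook placeholders, not those used in the paper, and the sketch only illustrates the model class rather than the paper's numerical scheme.

```python
# Minimal exponential integrate-and-fire (EIF) simulation, forward Euler.
# Parameters are illustrative defaults (ms, mV), not fitted values.
import numpy as np

tau_m, E_L, V_T, Delta_T = 20.0, -65.0, -50.0, 2.0   # membrane time constant, rest, threshold, sharpness
V_spike, V_reset = 0.0, -70.0                         # spike cutoff and reset (mV)
dt, T, I_ext = 0.05, 500.0, 18.0                      # time step (ms), duration (ms), drive (mV)

t = np.arange(0.0, T, dt)
V = np.full(t.size, E_L)
spikes = []
for k in range(1, t.size):
    dV = (-(V[k-1] - E_L) + Delta_T * np.exp((V[k-1] - V_T) / Delta_T) + I_ext) / tau_m
    V[k] = V[k-1] + dt * dV
    if V[k] >= V_spike:           # threshold crossing: register a spike and reset
        spikes.append(t[k])
        V[k] = V_reset

print(f"{len(spikes)} spikes in {T} ms")
```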

2.
3.
Significant scientific and translational questions remain in auditory neuroscience surrounding the neural correlates of perception. Relating perceptual and neural data collected from humans can be useful; however, human-based neural data are typically limited to evoked far-field responses, which lack anatomical and physiological specificity. Laboratory-controlled preclinical animal models offer the advantage of comparing single-unit and evoked responses from the same animals. This ability provides opportunities to develop invaluable insight into proper interpretations of evoked responses, which benefits both basic-science studies of neural mechanisms and translational applications, e.g., diagnostic development. However, these comparisons have been limited by a disconnect between the types of spectrotemporal analyses used with single-unit spike trains and evoked responses, a disconnect that arises because the two response types are fundamentally different (point-process versus continuous-valued signals) even though the responses themselves are related. Here, we describe a unifying framework to study temporal coding of complex sounds that allows spike-train and evoked-response data to be analyzed and compared using the same advanced signal-processing techniques. The framework uses a set of peristimulus-time histograms computed from single-unit spike trains in response to polarity-alternating stimuli to allow advanced spectral analyses of both slow (envelope) and rapid (temporal fine structure) response components. Demonstrated benefits include: (1) novel spectrally specific temporal-coding measures that are less confounded by distortions due to hair-cell transduction, synaptic rectification, and neural stochasticity than previous metrics, e.g., the correlogram peak height; (2) spectrally specific analyses of spike-train modulation coding (magnitude and phase), which can be directly compared to modern perceptually based models of speech intelligibility (e.g., those that depend on modulation filter banks); and (3) superior spectral resolution in analyzing the neural representation of nonstationary sounds, such as speech and music. This unifying framework significantly expands the potential of preclinical animal models to advance our understanding of the physiological correlates of perceptual deficits in real-world listening following sensorineural hearing loss.
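As a rough illustration of the polarity-alternating idea, the sketch below builds PSTHs for the two stimulus polarities and forms their sum and difference, which emphasize the envelope-dominated and fine-structure-dominated components, respectively; the spike trains, bin width, and variable names are hypothetical and this is only the common sum/difference decomposition, not the paper's full framework.

```python
# Sum/difference PSTH decomposition for polarity-alternating stimuli (illustrative).
import numpy as np

def psth(spike_times_per_trial, bin_width, duration):
    """Peristimulus-time histogram in spikes/s from a list of spike-time arrays (s)."""
    edges = np.arange(0.0, duration + bin_width, bin_width)
    counts = np.zeros(edges.size - 1)
    for trial in spike_times_per_trial:
        counts += np.histogram(trial, bins=edges)[0]
    return counts / (len(spike_times_per_trial) * bin_width)

rng = np.random.default_rng(0)
# hypothetical spike trains for the two stimulus polarities (seconds)
pos_trials = [np.sort(rng.uniform(0, 1.0, 50)) for _ in range(20)]
neg_trials = [np.sort(rng.uniform(0, 1.0, 50)) for _ in range(20)]

p_pos = psth(pos_trials, bin_width=1e-3, duration=1.0)
p_neg = psth(neg_trials, bin_width=1e-3, duration=1.0)
env = 0.5 * (p_pos + p_neg)     # polarity-tolerant (envelope-dominated) component
tfs = 0.5 * (p_pos - p_neg)     # polarity-sensitive (fine-structure-dominated) component

# spectra of the two components
env_spec = np.abs(np.fft.rfft(env - env.mean()))
tfs_spec = np.abs(np.fft.rfft(tfs))
```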

4.
Compartmental models of infectious diseases readily represent known biological and epidemiological processes, are easily understood in flow-chart form by administrators, are simple to adjust to new information, and lend themselves to routine statistical analysis such as parameter estimation and model fitting. Technical results are immediately interpretable in epidemiological and public health terms. Deterministic models are easily stochasticized where this is important for practical purposes. With HIV/AIDS, serial data on both HIV prevalence and AIDS morbidity have been available from San Francisco. Assuming the distribution of the incubation period to be biologically stable, statistical analysis is quite feasible in other regions, even those with no reliable HIV data. Transmission rates must be estimated locally. It is also often possible to estimate the effective size of a population subgroup at risk from population data on AIDS morbidity only. Computer simulation provides estimates of the evolving pattern of both HIV prevalence and AIDS morbidity. Some public health questions can be answered only by appropriately formulated stochastic models.
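A minimal, generic compartmental sketch (susceptible → HIV-infected → AIDS) is shown below, integrated by forward Euler; the rates, initial population, and compartment structure are illustrative placeholders, not values or structures estimated from the San Francisco data.

```python
# Toy deterministic compartmental model: S -> H (HIV+) -> A (AIDS), forward Euler.
import numpy as np

beta  = 0.3     # transmission rate per year (illustrative)
sigma = 0.1     # progression rate HIV -> AIDS per year (mean incubation ~10 years)
dt, years = 0.01, 25
n_steps = int(years / dt)

S, H, A = 9900.0, 100.0, 0.0          # susceptible, HIV+, cumulative AIDS cases
traj = np.zeros((n_steps, 3))
for k in range(n_steps):
    N = S + H + A
    new_inf = beta * S * H / N * dt   # frequency-dependent transmission
    new_aids = sigma * H * dt
    S -= new_inf
    H += new_inf - new_aids
    A += new_aids
    traj[k] = S, H, A

print("HIV prevalence after %d years: %.0f" % (years, traj[-1, 1]))
print("Cumulative AIDS morbidity:     %.0f" % traj[-1, 2])
```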

5.

Background  

Expression profiling assays performed using DNA microarray technology generate enormous data sets that are not amenable to simple analysis. The greatest challenge in maximizing the use of this huge amount of data is to develop algorithms to interpret and interconnect results from different genes under different conditions. In this context, fuzzy logic can provide a systematic and unbiased way to both (i) find biologically significant insights relating to meaningful genes, thereby removing the need for expert knowledge in preliminary steps of microarray data analyses, and (ii) reduce the cost and complexity of subsequently applied machine learning techniques while still achieving interpretable models.
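To make the fuzzy-logic idea concrete, the sketch below assigns graded memberships ("low", "medium", "high") to a z-scored expression value instead of a hard threshold; the triangular membership functions and cut points are illustrative assumptions, not those used in the study.

```python
# Illustrative fuzzification of a normalized gene-expression value.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function rising on [a, b] and falling on [b, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def fuzzify(z):
    """Fuzzy labels for a z-scored expression value (illustrative cut points)."""
    return {
        "low":    triangular(z, -4.0, -2.0, 0.0),
        "medium": triangular(z, -2.0,  0.0, 2.0),
        "high":   triangular(z,  0.0,  2.0, 4.0),
    }

print(fuzzify(1.2))   # partly "medium" (0.4), mostly "high" (0.6)
```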

6.
7.
Typically, in many studies in ecology, epidemiology, biomedicine and other fields, we are confronted with panels of short time-series for which we are interested in obtaining a biologically meaningful grouping. Here, we propose a bootstrap approach to test whether the regression functions or the variances of the error terms in a family of stochastic regression models are the same. Our general setting includes panels of time-series models as a special case. We rigorously justify the use of the test by investigating its asymptotic properties, both theoretically and through simulations. The latter confirm that, for finite sample sizes, the bootstrap provides a better approximation than classical asymptotic theory. We then apply the proposed tests to the mink-muskrat data across 81 trapping regions in Canada. Ecologically interpretable groupings are obtained, which serve as a necessary first step before a fuller biological and statistical analysis of the food chain interaction.
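As a schematic of the bootstrap calibration idea, the sketch below tests whether two short series share the same (here, simple linear) regression function by resampling residuals under the pooled fit; the data, statistic, and linear model are illustrative stand-ins, not the authors' more general procedure.

```python
# Generic residual-bootstrap test of equal regression functions for two series.
import numpy as np

rng = np.random.default_rng(1)

def fit(x, y):
    """Ordinary least squares for y = b0 + b1*x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def stat(x1, y1, x2, y2):
    """Test statistic: squared distance between the two fitted coefficient vectors."""
    return np.sum((fit(x1, y1) - fit(x2, y2)) ** 2)

# two hypothetical short time series
x1 = np.arange(15.0); y1 = 1.0 + 0.5 * x1 + rng.normal(0, 1, 15)
x2 = np.arange(15.0); y2 = 1.0 + 0.8 * x2 + rng.normal(0, 1, 15)
t_obs = stat(x1, y1, x2, y2)

# bootstrap under H0: both series follow the regression fitted to the pooled data
b0, b1 = fit(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
res = np.concatenate([y1, y2]) - (b0 + b1 * np.concatenate([x1, x2]))

t_boot = []
for _ in range(999):
    e = rng.choice(res, size=res.size, replace=True)   # resample pooled residuals
    t_boot.append(stat(x1, b0 + b1 * x1 + e[:15], x2, b0 + b1 * x2 + e[15:]))

p_value = (1 + np.sum(np.array(t_boot) >= t_obs)) / (1 + len(t_boot))
print(f"bootstrap p-value: {p_value:.3f}")
```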

8.
Machine learning techniques offer a viable approach to cluster discovery from microarray data, which involves identifying and classifying biologically relevant groups in genes and conditions. It has been recognized that genes (whether or not they belong to the same gene group) may be co-expressed via a variety of pathways. Therefore, they can be adequately described by a diversity of coherence models. In fact, it is known that a gene may participate in multiple pathways that may or may not be co-active under all conditions. It is therefore biologically meaningful to simultaneously divide genes into functional groups and conditions into co-active categories, leading to the so-called biclustering analysis. For this, we have proposed a comprehensive set of coherence models to cope with various plausible regulation processes. Furthermore, a multivariate biclustering analysis based on fusion of different coherence models appears to be promising because the expression levels of genes from the same group may follow more than one coherence model. The simulation studies further confirm that the proposed framework enjoys the advantage of high prediction performance.
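For a concrete example of what a coherence model scores, the sketch below computes the mean squared residue of Cheng and Church (2000), a widely used measure of additive coherence in a candidate bicluster; it is shown only as one familiar coherence criterion, not as one of the models proposed in this paper.

```python
# Mean squared residue of a bicluster submatrix (genes x conditions); zero = perfect additive coherence.
import numpy as np

def mean_squared_residue(A):
    row_means = A.mean(axis=1, keepdims=True)
    col_means = A.mean(axis=0, keepdims=True)
    residue = A - row_means - col_means + A.mean()
    return float(np.mean(residue ** 2))

coherent = np.array([[1.0, 2.0, 4.0],      # each row = row effect + column effect
                     [3.0, 4.0, 6.0],
                     [0.0, 1.0, 3.0]])
print(mean_squared_residue(coherent))       # -> 0.0 (perfectly coherent)
print(mean_squared_residue(coherent + np.random.default_rng(0).normal(0, 0.5, (3, 3))))
```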

9.
SUMMARY: Hidden Markov models (HMMs) are widely used for biological sequence analysis because of their ability to incorporate biological information in their structure. An automatic means of optimizing the structure of HMMs would be highly desirable. However, this raises two important issues: first, the new HMMs should be biologically interpretable, and second, we need to control the complexity of the HMM so that it has good generalization performance on unseen sequences. In this paper, we explore the possibility of using a genetic algorithm (GA) for optimizing the HMM structure. GAs are sufficiently flexible to allow incorporation of other techniques such as Baum-Welch training within their evolutionary cycle. Furthermore, operators that alter the structure of HMMs can be designed to favour interpretable and simple structures. In this paper, a training strategy using GAs is proposed and tested on finding HMM structures for the promoter and coding region of the bacterium Campylobacter jejuni. The proposed GA for hidden Markov models (GA-HMM) allows HMMs with different numbers of states to evolve. To prevent over-fitting, a separate dataset is used for comparing the performance of the HMMs to that used for the Baum-Welch training. The GA-HMM was capable of finding an HMM comparable to a hand-coded HMM designed for the same task, which has been published previously.
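The bare-bones sketch below shows a genetic-algorithm loop over candidate numbers of HMM states, in the spirit of the GA-HMM strategy; the fitness function is an explicitly hypothetical stub (in the real method it would run Baum-Welch training and score a held-out validation set), so the example only illustrates the evolutionary loop itself.

```python
# Skeleton GA over the number of HMM states; fitness() is a hypothetical placeholder.
import random

def fitness(n_states):
    """Hypothetical stand-in for Baum-Welch training + validation-set likelihood,
    with a complexity penalty; assumes (arbitrarily) that ~6 states is optimal."""
    return -abs(n_states - 6) - 0.1 * n_states

random.seed(0)
population = [random.randint(2, 20) for _ in range(12)]    # candidate state counts
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:6]                                   # truncation selection
    children = []
    while len(children) < 6:
        a, b = random.sample(parents, 2)
        child = random.choice([a, b])                      # crossover on the integer gene
        if random.random() < 0.3:                          # mutation: add or remove a state
            child = max(2, child + random.choice([-1, 1]))
        children.append(child)
    population = parents + children

print("best number of states:", max(population, key=fitness))
```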

10.
In many medical applications, interpretable models with high prediction performance are sought. Often, those models are required to handle semistructured data like tabular and image data. We show how to apply deep transformation models (DTMs) for distributional regression that fulfill these requirements. DTMs allow the data analyst to specify (deep) neural networks for different input modalities, making them applicable to various research questions. Like statistical models, DTMs can provide interpretable effect estimates while achieving the state-of-the-art prediction performance of deep neural networks. In addition, the construction of ensembles of DTMs that retain model structure and interpretability allows quantifying epistemic and aleatoric uncertainty. In this study, we compare several DTMs, including baseline-adjusted models, trained on a semistructured data set of 407 stroke patients with the aim of predicting ordinal functional outcome three months after stroke. We follow statistical principles of model-building to achieve an adequate trade-off between interpretability and flexibility while assessing the relative importance of the involved data modalities. We evaluate the models for an ordinal and a dichotomized version of the outcome as used in clinical practice. We show that both tabular clinical and brain imaging data are useful for functional outcome prediction, while models based on tabular data alone outperform those based on imaging data alone. There is no substantial evidence for improved prediction when combining both data modalities. Overall, we highlight that DTMs provide a powerful, interpretable approach to analyzing semistructured data and that they have the potential to support clinical decision-making.
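One piece of the above that is easy to illustrate is the ensemble-based split of predictive uncertainty: given several ensemble members' predicted class probabilities for an ordinal outcome, the entropy of the averaged prediction (total uncertainty) decomposes into the average member entropy (aleatoric) plus the member disagreement (epistemic). The three members' probability vectors below are hypothetical, and this is only the generic decomposition, not the DTM architecture itself.

```python
# Ensemble-based decomposition of predictive uncertainty (illustrative numbers).
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

members = np.array([              # predicted probabilities over 4 ordinal categories
    [0.70, 0.20, 0.07, 0.03],
    [0.55, 0.30, 0.10, 0.05],
    [0.40, 0.35, 0.15, 0.10],
])

total = entropy(members.mean(axis=0))   # entropy of the ensemble-averaged prediction
aleatoric = entropy(members).mean()     # average per-member entropy
epistemic = total - aleatoric           # disagreement between members
print(f"total={total:.3f}, aleatoric={aleatoric:.3f}, epistemic={epistemic:.3f}")
```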

11.
Li H, Huang Z, Gai J, Wu S, Zeng Y, Li Q, Wu R. PLoS ONE. 2007;2(11):e1245
Although ontogenetic changes in body shape and their associated allometry have been studied for over a century, essentially nothing is known about their underlying genetic and developmental mechanisms. One of the reasons for this ignorance is the unavailability of a conceptual framework to formulate the experimental design for data collection and statistical models for data analyses. We developed a framework model for unraveling the genetic machinery underlying ontogenetic changes of allometry. The model incorporates the mathematical aspects of ontogenetic growth and allometry into a maximum likelihood framework for quantitative trait locus (QTL) mapping. As a quantitative platform, the model allows for the testing of a number of biologically meaningful hypotheses to explore the pleiotropic basis of the QTLs that regulate ontogeny and allometry. Simulation studies and a real-data analysis of a soybean example have been performed to investigate the statistical behavior of the model and validate its practical utility. The proposed statistical model will help to study the genetic architecture of complex phenotypes and, therefore, gain better insights into the mechanistic regulation of developmental patterns and processes in organisms.
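For background, the classical static allometric relationship y = a·x^b is typically estimated by least squares on the log-log scale, as sketched below with simulated data; the paper embeds a much richer ontogenetic version of this relationship inside a QTL-mapping likelihood, so this is only an orienting illustration.

```python
# Fit the allometric power law y = a * x^b by log-log least squares (simulated data).
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1.0, 50.0, 200)                              # e.g. body size
y = 1.8 * x ** 0.75 * np.exp(rng.normal(0, 0.05, x.size))    # allometric trait with noise

b, log_a = np.polyfit(np.log(x), np.log(y), 1)               # slope = allometric exponent
print(f"estimated exponent b = {b:.3f}, coefficient a = {np.exp(log_a):.3f}")
```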

12.
Building an accurate disease risk prediction model is an essential step in the modern quest for precision medicine. While high-dimensional genomic data provide valuable resources for investigating disease risk, their high noise levels and the complex relationships between predictors and outcomes pose tremendous analytical challenges. Deep learning models are state-of-the-art methods for many prediction tasks and a promising framework for the analysis of genomic data. However, deep learning models generally suffer from the curse of dimensionality and a lack of biological interpretability, both of which have greatly limited their applications. In this work, we have developed a deep neural network (DNN) based prediction modeling framework. We first proposed a group-wise feature importance score for feature selection, with which genes harboring genetic variants with both linear and non-linear effects are efficiently detected. We then designed an explainable transfer-learning based DNN method, which can directly incorporate information from feature selection and accurately capture complex predictive effects. The proposed DNN framework is biologically interpretable, as it is built on the selected predictive genes. It is also computationally efficient and can be applied to genome-wide data. Through extensive simulations and real data analyses, we have demonstrated that our proposed method can not only efficiently detect predictive features but also accurately predict disease risk, as compared to many existing methods.
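To convey the flavor of a group-wise importance score, the sketch below permutes all features belonging to one (hypothetical) gene jointly and measures the drop in held-out accuracy; this generic permutation-based score and the random-forest model are illustrative stand-ins, not the authors' statistic or DNN.

```python
# Generic group-wise permutation importance on simulated data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.5, 500) > 0.5).astype(int)
groups = {"gene_A": [0, 1], "gene_B": [2, 3, 4], "gene_C": [5, 6, 7, 8, 9, 10, 11]}  # hypothetical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
base = model.score(X_te, y_te)

for name, idx in groups.items():
    X_perm = X_te.copy()
    X_perm[:, idx] = X_perm[rng.permutation(X_te.shape[0])][:, idx]   # shuffle the group jointly
    print(f"{name}: importance = {base - model.score(X_perm, y_te):.3f}")
```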

13.
Mathematical models have played an important role in the analysis of circadian systems. The models include simulation of differential equation systems to assess the dynamic properties of a circadian system and the use of statistical models, primarily harmonic regression methods, to assess the static properties of the system. The dynamical behaviors characterized by the simulation studies are the response of the circadian pacemaker to light, its rate of decay to its limit cycle, and its response to the rest-activity cycle. The static properties are phase, amplitude, and period of the intrinsic oscillator. Formal statistical methods are not routinely employed in simulation studies, and therefore the uncertainty in inferences based on the differential equation models and their sensitivity to model specification and parameter estimation error cannot be evaluated. The harmonic regression models allow formal statistical analysis of static but not dynamical features of the circadian pacemaker. The authors present a paradigm for analyzing circadian data based on the Box iterative scheme for statistical model building. The paradigm unifies the differential equation-based simulations (direct problem) and the model fitting approach using harmonic regression techniques (inverse problem) under a single schema. The framework is illustrated with the analysis of a core-temperature data series collected under a forced desynchrony protocol. The Box iterative paradigm provides a framework for systematically constructing and analyzing models of circadian data.
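For reference, the basic harmonic-regression (cosinor) fit of the static properties looks like the sketch below: with an assumed period, the mesor, amplitude, and phase follow from a linear least-squares fit. The period, sampling, and simulated core-temperature-like data are illustrative, not values from the forced-desynchrony analysis.

```python
# Harmonic (cosinor) regression: estimate mesor, amplitude, and phase for a fixed period.
import numpy as np

rng = np.random.default_rng(4)
tau = 24.2                                     # assumed intrinsic period (hours)
t = np.arange(0, 72, 0.25)                     # 3 days of samples
y = 37.0 + 0.4 * np.cos(2 * np.pi * (t - 5.0) / tau) + rng.normal(0, 0.05, t.size)

X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / tau),
                     np.sin(2 * np.pi * t / tau)])
mesor, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
amplitude = np.hypot(a, b)
acrophase = (np.arctan2(b, a) * tau / (2 * np.pi)) % tau    # time of the fitted peak (hours)

print(f"mesor={mesor:.2f}, amplitude={amplitude:.2f}, acrophase={acrophase:.1f} h")
```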

14.
Protein subcellular localization has been systematically characterized in budding yeast using fluorescently tagged proteins. Based on the fluorescence microscopy images, subcellular localization of many proteins can be classified automatically using supervised machine learning approaches that have been trained to recognize predefined image classes based on statistical features. Here, we present an unsupervised analysis of protein expression patterns in a set of high-resolution, high-throughput microscope images. Our analysis is based on seven biologically interpretable features which are evaluated on automatically identified cells, and whose cell-stage dependency is captured by a continuous model for cell growth. We show that it is possible to identify most previously identified localization patterns in a cluster analysis based on these features and that similarities between the inferred expression patterns contain more information about protein function than can be explained by a previous manual categorization of subcellular localization. Furthermore, the inferred cell stage associated with each fluorescence measurement allows us to visualize large groups of proteins entering the bud at specific stages of bud growth. These correspond to proteins localized to organelles, revealing that the organelles must be entering the bud in a stereotypical order. We also identify and organize a smaller group of proteins that show subtle differences in the way they move around the bud during growth. Our results suggest that biologically interpretable features based on explicit models of cell morphology will yield unprecedented power for pattern discovery in high-resolution, high-throughput microscopy images.
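A skeletal version of the unsupervised step is sketched below: standardize interpretable per-protein features and cluster the resulting profiles. The feature matrix, the number of clusters, and the use of k-means are illustrative assumptions; the study uses its own seven morphology-based features and cluster analysis.

```python
# Standardize interpretable features and cluster protein profiles (illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# hypothetical per-protein feature vectors (e.g. nuclear fraction, bud fraction, ...)
features = np.vstack([rng.normal(0.0, 1.0, (40, 7)),
                      rng.normal(3.0, 1.0, (40, 7))])

Z = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(np.bincount(labels))    # sizes of the inferred localization clusters
```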

15.
In an analysis of capture-recapture data, the identification of a model that fits is a critical step. For the multisite (also called multistate) models used to analyze data gathered at several sites, no reliable test for assessing fit is currently available. We propose a test for the JMV model, a simple generalization of the Arnason-Schwarz (AS) model, in the form of interpretable contingency tables. For the AS model, we suggest complementing the test for the JMV model with a likelihood ratio test of AS vs. JMV. The examination of an example leads us to further propose a partitioning that emphasizes the role of the memory model of Brownie et al. (1993, Biometrics 49, 1173-1187) as a biologically more plausible alternative to the AS model.
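The suggested likelihood-ratio comparison has the usual form sketched below: twice the log-likelihood difference between the nested AS (null) and JMV (alternative) fits is referred to a chi-square distribution with degrees of freedom equal to the difference in parameter counts. The log-likelihood values and parameter counts used here are hypothetical placeholders.

```python
# Generic likelihood-ratio test of AS (null) vs. JMV (alternative); numbers are hypothetical.
from scipy.stats import chi2

loglik_AS,  k_AS  = -1523.4, 18     # hypothetical maximized log-likelihood and parameter count
loglik_JMV, k_JMV = -1516.9, 26

lr = 2.0 * (loglik_JMV - loglik_AS)
df = k_JMV - k_AS
p_value = chi2.sf(lr, df)
print(f"LR = {lr:.1f}, df = {df}, p = {p_value:.3f}")   # a small p favours JMV over AS
```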

16.
We introduce a novel approach for describing patterns of HIV genetic variation using regression modeling techniques. Parameters are defined for describing genetic variation within and between viral populations by generalizing Simpson's index of diversity. Regression models are specified for these variation parameters and the generalized estimating equation framework is used for estimating both the regression parameters and their corresponding variances. Conditions are described under which the usual asymptotic approximations to the distribution of the estimators are met. This approach provides a formal statistical framework for testing hypotheses regarding the changing patterns of HIV genetic variation over time within an infected patient. The application of these methods for testing biologically relevant hypotheses concerning HIV genetic variation is demonstrated in an example using sequence data from a subset of patients from the Multicenter AIDS Cohort Study.
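The quantity being generalized is Simpson's index of diversity, D = 1 − Σᵢ pᵢ², the probability that two sequences drawn at random (with replacement) from the population are of different variants; a minimal sketch with hypothetical variant counts is shown below.

```python
# Simpson's index of diversity for a viral population (hypothetical variant counts).
import numpy as np

def simpson_diversity(counts):
    p = np.asarray(counts, dtype=float)
    p /= p.sum()
    return 1.0 - np.sum(p ** 2)

print(simpson_diversity([40, 5, 3, 2]))     # dominated by one variant -> lower diversity (~0.34)
print(simpson_diversity([12, 13, 12, 13]))  # evenly mixed variants    -> higher diversity (~0.75)
```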

17.
DNA abundance provides important information about cell physiology and proliferation activity. In a typical in vitro cellular assay, the distribution of the DNA content within a sample is comprised of cell debris, G0/G1-, S-, and G2/M-phase cells. In some circumstances, there may be a collection of cells that contain more than two copies of DNA. The primary focus of DNA content analysis is to deconvolute the overlapping mixtures of the cellular components, and subsequently to investigate whether a given treatment has perturbed the mixing proportions of the sample components. We propose a restricted mixture model that is parameterized to incorporate the available biological information. A likelihood ratio (LR) test is developed to test for changes in the mixing proportions between two cell populations. The proposed mixture model is applied to both simulated and real experimental data. The model fitting is compared with unrestricted models; the statistical inference on proportion change is compared between the proposed LR test and the Kolmogorov-Smirnov test, which is frequently used to test for differences in DNA content distribution. The proposed mixture model outperforms the existing approaches in the estimation of the mixing proportions and gives biologically interpretable results; the proposed LR test demonstrates improved sensitivity and specificity for detecting changes in the mixing proportions.
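For intuition about estimating mixing proportions from a DNA-content distribution, the sketch below fits an unrestricted two-component Gaussian mixture to simulated G0/G1 and G2/M content; the paper's restricted model goes further by tying the G2/M location to twice the G0/G1 location and modeling S-phase cells and debris explicitly, so this is only an orienting illustration.

```python
# Unrestricted two-component Gaussian mixture on a simulated DNA-content sample.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
g0g1 = rng.normal(1.0, 0.05, 7000)      # simulated G0/G1 DNA content (arbitrary units)
g2m  = rng.normal(2.0, 0.10, 3000)      # simulated G2/M at roughly twice the content
dna = np.concatenate([g0g1, g2m]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(dna)
order = np.argsort(gm.means_.ravel())
print("estimated means:      ", gm.means_.ravel()[order])
print("estimated proportions:", gm.weights_[order])     # should be near 0.7 and 0.3
```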

18.
19.
Five statistically appropriate multivariate analyses were applied to the same data on burrowing in the sea hare Aplysia brasiliana to: (1) identify homogeneous subject-related subgroups within a heterogeneous sample, and (2) compare the extent of congruency among the analyses in terms of the number of extracted subgroups and each subject's placement within the subgroups. Raw scores from 32 subjects on ten burrowing parameters were origin-corrected, standardized to z-scores, and normalized in order to facilitate comparisons among the analyses. Between one and five subgroups were extracted, indicating differences among the methods in sensitivity to sampling variability. These results suggested that selecting a biologically interpretable analysis represents the subjective aspect of quantitative data treatment. Q-factor analysis (three subgroups) and linear typal analysis (four subgroups) yielded the most biologically interpretable subgroups for these data. Multidimensional scaling (one group) and principal-components analysis (two subgroups) tended to “lump” subjects, while simple distance-function cluster analysis (five subgroups) tended to “split” subjects into additional groups. As a diagnostic tool, multivariate analyses provide insight into underlying dimensions of individual variation and help generate testable hypotheses for guiding future research.
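The sketch below mirrors the preprocessing (z-scoring a 32-subject × 10-parameter matrix) and the idea of comparing the groupings produced by two different multivariate methods; the data are simulated, and k-means and Ward clustering are stand-ins for the five analyses actually compared in the study.

```python
# Standardize subject-by-parameter scores and compare two clustering solutions.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(7)
raw = rng.normal(size=(32, 10)) + np.repeat([[0.0], [2.0]], 16, axis=0)  # 32 subjects x 10 parameters
z = (raw - raw.mean(axis=0)) / raw.std(axis=0)                           # per-parameter z-scores

km   = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
ward = AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(z)
print("agreement between the two groupings (adjusted Rand index):",
      adjusted_rand_score(km, ward))
```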

20.
In this paper we introduce a simple framework which provides a basis for estimating parameters and testing statistical hypotheses in complex models. The only assumption that is made in the model describing the process under study, is that the deviations of the observations from the model have a multivariate normal distribution. The application of the statistical techniques presented in this paper may have considerable utility in the analysis of a wide variety of complex biological and epidemiological models. To our knowledge, the model and methods described here have not previously been published in the area of theoretical immunology.
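The core idea can be sketched in a few lines: assume the deviations of the observations from a model m(t; θ) are normal and estimate θ by maximizing the normal log-likelihood numerically. The exponential-decay model, data, and independence assumption below are illustrative, not a model from the paper.

```python
# Maximum-likelihood fit under normally distributed deviations from a simple model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 40)
y = 5.0 * np.exp(-0.4 * t) + rng.normal(0, 0.2, t.size)   # simulated observations

def neg_loglik(params):
    a, k, log_sigma = params
    resid = y - a * np.exp(-k * t)                         # deviations from the model
    return -np.sum(norm.logpdf(resid, scale=np.exp(log_sigma)))

fit = minimize(neg_loglik, x0=[1.0, 0.1, 0.0], method="Nelder-Mead")
a_hat, k_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"a={a_hat:.2f}, k={k_hat:.2f}, sigma={sigma_hat:.2f}")
```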
