Similar Literature
20 similar articles found (search time: 31 ms)
1.
Attentional control ensures that neuronal processes prioritize the most relevant stimulus in a given environment. Controlling which stimulus is attended thus originates from neurons encoding the relevance of stimuli, i.e. their expected value, in hand with neurons encoding contextual information about stimulus locations, features, and rules that guide the conditional allocation of attention. Here, we examined how these distinct processes are encoded and integrated in macaque prefrontal cortex (PFC) by mapping their functional topographies at the time of attentional stimulus selection. We find confined clusters of neurons in ventromedial PFC (vmPFC) that predominantly convey stimulus valuation information during attention shifts. These valuation signals were topographically largely separated from neurons predicting the stimulus location to which attention covertly shifted, and which were evident across the complete medial-to-lateral extent of the PFC, encompassing anterior cingulate cortex (ACC) and lateral PFC (LPFC). LPFC responses showed particularly early-onset selectivity and primarily facilitated attention shifts to contralateral targets. Spatial selectivity within ACC was delayed and heterogeneous, with similar proportions of facilitated and suppressed responses during contralateral attention shifts. The integration of spatial and valuation signals about attentional target stimuli was observed in a confined cluster of neurons at the intersection of vmPFC, ACC, and LPFC. These results suggest that valuation processes reflecting stimulus-specific outcome predictions are recruited during covert attentional control. Value predictions and the spatial identification of attentional targets were conveyed by largely separate neuronal populations, but were integrated locally at the intersection of three major prefrontal areas, which may constitute a functional hub within the larger attentional control network.

2.
Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population.
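The SDME construction described above can be sketched directly: for a small population, enumerate all binary codewords and normalize the exponential of stimulus-dependent fields plus stimulus-independent pairwise couplings. The following is a minimal illustrative sketch only — the field and coupling values are invented, not fitted to retinal data:

```python
import itertools
import numpy as np

def sdme_codeword_dist(h, J):
    """Conditional codeword distribution P(sigma | s) for a pairwise
    stimulus-dependent maximum entropy model: the fields h depend on the
    stimulus (e.g. via each cell's linear-nonlinear filter), while the
    pairwise couplings J are stimulus-independent."""
    n = len(h)
    words = np.array(list(itertools.product([0, 1], repeat=n)))
    # log-probability up to a constant: field term + pairwise term
    log_p = words @ h + 0.5 * np.einsum('ki,ij,kj->k', words, J, words)
    p = np.exp(log_p)
    return words, p / p.sum()

rng = np.random.default_rng(0)
n = 4                                # toy population; the paper uses 100 cells
J = rng.normal(0.0, 0.3, (n, n))
J = (J + J.T) / 2.0                  # couplings must be symmetric
np.fill_diagonal(J, 0.0)
h = rng.normal(-1.0, 0.5, n)         # fields for one particular stimulus frame
words, p = sdme_codeword_dist(h, J)
```

For a 100-cell population the 2^100 codewords cannot be enumerated, which is why the paper fits the model rather than normalizing it exactly.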

3.
Stimulus selectivity of sensory systems is often characterized by analyzing response-conditioned stimulus ensembles. However, in many cases these response-triggered stimulus sets have structure that is more complex than assumed. If present and not taken into account, this structure will bias the estimates of many simple statistics and distort the estimated stimulus selectivity of a neural sensory system. We present an approach that mitigates these problems by modeling some of the response-conditioned stimulus structure as being generated by a set of transformations acting on a simple stimulus distribution. This approach corrects the estimates of key statistics and counters biases introduced by the transformations. In cases involving temporal spike jitter or spatial jitter of images, the main observed effects of the transformations are blurring of the conditional mean and the introduction of artefacts in the spectral decomposition of the conditional covariance matrix. We illustrate this approach by analyzing and correcting a set of model stimuli perturbed by temporal and spatial jitter. We apply the approach to neurophysiological data from the cricket cercal sensory system to correct the effects of temporal jitter.

4.
The multidimensional computations performed by many biological systems are often characterized with limited information about the correlations between inputs and outputs. Given this limitation, our approach is to construct the maximum noise entropy response function of the system, leading to a closed-form and minimally biased model consistent with a given set of constraints on the input/output moments; the result is equivalent to conditional random field models from machine learning. For systems with binary outputs, such as neurons encoding sensory stimuli, the maximum noise entropy models are logistic functions whose arguments depend on the constraints. A constraint on the average output turns the binary maximum noise entropy models into minimum mutual information models, allowing for the calculation of the information content of the constraints and an information theoretic characterization of the system's computations. We use this approach to analyze the nonlinear input/output functions in macaque retina and thalamus; although these systems have been previously shown to be responsive to two input dimensions, the functional form of the response function in this reduced space had not been unambiguously identified. A second order model based on the logistic function is found to be both necessary and sufficient to accurately describe the neural responses to naturalistic stimuli, accounting for an average of 93% of the mutual information with a small number of parameters. Thus, despite the fact that the stimulus is highly non-Gaussian, the vast majority of the information in the neural responses is related to first and second order correlations. Our results suggest a principled and unbiased way to model multidimensional computations and determine the statistics of the inputs that are being encoded in the outputs.  相似文献   
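For a binary output, the maximum-noise-entropy response function described above is a logistic function; with second-order moment constraints its argument is quadratic in the stimulus. A minimal sketch of that functional form — all parameter values here are illustrative assumptions, not fitted retinal parameters:

```python
import numpy as np

def mne_response(s, a, b, C):
    """Second-order maximum-noise-entropy (logistic) spike probability:
    P(spike | s) = 1 / (1 + exp(-(a + b.s + s'Cs))),
    where a, b, C are Lagrange multipliers fixed by the first- and
    second-order input/output moment constraints."""
    z = a + b @ s + s @ C @ s
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d = 2                                 # two input dimensions, as in the abstract
a = -1.0
b = rng.normal(size=d)
C = rng.normal(size=(d, d))
C = (C + C.T) / 2.0                   # quadratic form taken symmetric
s = rng.normal(size=d)
p_spike = mne_response(s, a, b, C)
```

In a real fit, a, b, and C would be chosen so the model's moments match the measured input/output moments; here they are random placeholders.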

5.
Task-optimized convolutional neural networks (CNNs) show striking similarities to the ventral visual stream. However, human-imperceptible image perturbations can cause a CNN to make incorrect predictions. Here we provide insight into this brittleness by investigating the representations of models that are either robust or not robust to image perturbations. Theory suggests that the robustness of a system to these perturbations could be related to the power law exponent of the eigenspectrum of its set of neural responses, where power law exponents closer to and larger than one would indicate a system that is less susceptible to input perturbations. We show that neural responses in mouse and macaque primary visual cortex (V1) obey the predictions of this theory, where their eigenspectra have power law exponents of at least one. We also find that the eigenspectra of model representations decay slowly relative to those observed in neurophysiology and that robust models have eigenspectra that decay slightly faster and have higher power law exponents than those of non-robust models. The slow decay of the eigenspectra suggests that substantial variance in the model responses is related to the encoding of fine stimulus features. We therefore investigated the spatial frequency tuning of artificial neurons and found that a large proportion of them preferred high spatial frequencies and that robust models had preferred spatial frequency distributions more aligned with the measured spatial frequency distribution of macaque V1 cells. Furthermore, robust models were quantitatively better models of V1 than non-robust models. Our results are consistent with other findings that there is a misalignment between human and machine perception. They also suggest that it may be useful to penalize slow-decaying eigenspectra or to bias models to extract features of lower spatial frequencies during task-optimization in order to improve robustness and V1 neural response predictivity.
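The power-law exponent of an eigenspectrum can be estimated by a log-log linear fit to the ranked eigenvalues of the response covariance. The sketch below runs on synthetic responses whose covariance eigenvalues decay as n^-1; the fit range and data generation are illustrative assumptions, not the paper's analysis pipeline:

```python
import numpy as np

def powerlaw_exponent(responses, fit_range=slice(1, 50)):
    """Estimate the exponent alpha of an eigenspectrum lambda_n ~ n^(-alpha)
    from a (samples x units) response matrix, via a log-log linear fit
    to the ranked eigenvalues of the response covariance."""
    responses = responses - responses.mean(axis=0)
    cov = responses.T @ responses / len(responses)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]      # descending
    n = np.arange(1, len(eig) + 1)
    slope, _ = np.polyfit(np.log(n[fit_range]), np.log(eig[fit_range]), 1)
    return -slope

# synthetic "neural responses" with a known n^-1 eigenspectrum
rng = np.random.default_rng(2)
d, T = 100, 5000
lam = np.arange(1, d + 1) ** -1.0
X = rng.normal(size=(T, d)) * np.sqrt(lam)            # independent units, variances lam
alpha = powerlaw_exponent(X)                          # should recover ~1
```

An exponent near or above one would, by the theory the abstract cites, indicate a representation less susceptible to input perturbations.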

6.
Bezzi M. Bio Systems 2007, 89(1-3): 4-9
Information theory, in particular mutual information, has been widely used to investigate neural processing in various brain areas. Shannon mutual information quantifies how much information is, on average, contained in a set of neural activities about a set of stimuli. To extend a similar approach to single-stimulus encoding, we need to introduce a quantity specific to a single stimulus. Four different measures of this quantity have been defined in the literature, but none of them satisfies all the intuitive properties (non-negativity, additivity) that characterize mutual information. We present here a detailed analysis of the different meanings and properties of these four definitions. We show that all these measures satisfy at least a weaker additivity condition, i.e. one limited to the response set. This allows us to use them for analysing correlated coding, as we illustrate in a toy example from hippocampal place cells.
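One commonly used single-stimulus measure, the stimulus-specific surprise, illustrates the trade-off the abstract describes: it is non-negative (it is a KL divergence), and its average over stimuli recovers the Shannon mutual information, but it is not additive. A small sketch with invented toy distributions:

```python
import numpy as np

def specific_surprise(p_rs, p_s):
    """Stimulus-specific surprise i(s) = sum_r P(r|s) log2(P(r|s) / P(r)).
    Non-negative for every s; its stimulus average equals I(S;R)."""
    p_r = p_s @ p_rs                           # marginal response distribution
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(p_rs > 0, p_rs / p_r, 1.0)
    return np.sum(p_rs * np.log2(ratio), axis=1)

# toy conditional distribution: rows are stimuli, columns are responses
p_rs = np.array([[0.8, 0.2],
                 [0.3, 0.7]])
p_s = np.array([0.5, 0.5])
i_s = specific_surprise(p_rs, p_s)             # one value per stimulus
mi = p_s @ i_s                                 # recovers the mutual information
```

Other definitions from the literature (e.g. the stimulus-specific information) trade this non-negativity for different properties, which is the comparison the paper carries out.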

7.
We evaluate statistical models used in two-hypothesis tests for identifying peptides from tandem mass spectrometry data. The null hypothesis H(0), that a peptide matches a spectrum by chance, requires information on the probability of by-chance matches between peptide fragments and peaks in the spectrum. Likewise, the alternate hypothesis H(A), that the spectrum is due to a particular peptide, requires probabilities that the peptide fragments would indeed be observed if it were the causative agent. We compare models for these probabilities by determining the identification rates produced by the models using an independent data set. The initial models use different probabilities depending on fragment ion type, but uniform probabilities for each ion type across all of the labile bonds along the backbone. More sophisticated models for probabilities under both H(A) and H(0) are introduced that do not assume uniform probabilities for each ion type. In addition, the performance of these models using a standard likelihood model is compared to an information theory approach derived from the likelihood model. Also, a simple but effective model for incorporating peak intensities is described. Finally, a support-vector machine is used to discriminate between correct and incorrect identifications based on multiple characteristics of the scoring functions. The results are shown to reduce the misidentification rate significantly when compared to a benchmark cross-correlation based approach.
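A minimal version of the H(0) by-chance model treats each theoretical fragment as an independent Bernoulli trial against the spectrum's peaks, so the significance of k observed matches is a binomial tail probability. This sketches the idea only, with invented numbers — it is not the paper's calibrated, ion-type-specific model:

```python
from math import comb

def null_match_pvalue(n_fragments, k_matches, p_chance):
    """P-value under H0: probability of k_matches or more of n_fragments
    theoretical fragments matching spectrum peaks purely by chance, with
    each fragment an independent Bernoulli(p_chance) trial."""
    return sum(comb(n_fragments, k) * p_chance**k
               * (1 - p_chance)**(n_fragments - k)
               for k in range(k_matches, n_fragments + 1))

# e.g. 20 theoretical fragments, 12 matched, 10% per-fragment chance rate
pv = null_match_pvalue(20, 12, 0.10)
```

The refinements the abstract describes amount to replacing the single p_chance with match probabilities that vary by ion type and bond position.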

8.
Conditional probability methods for haplotyping in pedigrees
Gao G, Hoeschele I, Sorensen P, Du F. Genetics 2004, 167(4): 2055-2065
Efficient haplotyping in pedigrees is important for the fine mapping of quantitative trait locus (QTL) or complex disease genes. To reconstruct haplotypes efficiently for a large pedigree with a large number of linked loci, two algorithms based on conditional probabilities and likelihood computations are presented. The first algorithm (the conditional probability method) produces a single, approximately optimal haplotype configuration, with computing time increasing linearly in the number of linked loci and the pedigree size. The other algorithm (the conditional enumeration method) identifies a set of haplotype configurations with high probabilities conditional on the observed genotype data for a pedigree. Its computing time increases less than exponentially with the size of a subset of the set of person-loci with unordered genotypes and linearly with its complement. The size of the subset is controlled by a threshold parameter. The set of identified haplotype configurations can be used to estimate the identity-by-descent (IBD) matrix at a map position for a pedigree. The algorithms have been tested on published and simulated data sets. The new haplotyping methods are much faster and provide more information than several existing stochastic and rule-based methods. The accuracies of the new methods are equivalent to or better than those of these existing methods.

9.
Probabilistic models and maximum likelihood estimation have been used to predict the occurrence of decompression sickness (DCS). We indicate a means of extending the maximum likelihood parameter estimation procedure to make use of knowledge of the time at which DCS occurs. Two models were compared in fitting a data set of nearly 1,000 exposures, in which more than 50 cases of DCS have known times of symptom onset. The additional information provided by the time at which DCS occurred gave us better estimates of model parameters. It was also possible to discriminate between good models, which predict both the occurrence of DCS and the time at which symptoms occur, and poorer models, which may predict only the overall occurrence. The refined models may be useful in new applications for customizing decompression strategies during complex dives involving various times at several different depths. Conditional probabilities of DCS for such dives may be reckoned as the dive is taking place and the decompression strategy adjusted to circumstance. Some of the mechanistic implications and the assumptions needed for safe application of decompression strategies on the basis of conditional probabilities are discussed.
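The extension described — using the observed onset time rather than occurrence alone — can be illustrated with a constant-hazard toy model: cases contribute the onset-time density to the likelihood, while symptom-free exposures contribute only the survival probability through the exposure. A sketch with invented data; the paper's actual models are richer than a constant hazard:

```python
import math

def log_likelihood(rate, onset_times, censored_times):
    """Log-likelihood for a constant-hazard model of DCS onset:
    each case contributes the density rate * exp(-rate * t) at its
    observed onset time t; each symptom-free exposure of duration T
    contributes the survival probability exp(-rate * T)."""
    ll = sum(math.log(rate) - rate * t for t in onset_times)
    ll += sum(-rate * T for T in censored_times)
    return ll

# invented data: 3 cases with known onset times, 4 symptom-free exposures
onsets = [1.0, 2.5, 0.5]
censored = [4.0, 4.0, 4.0, 4.0]
# for a constant hazard the MLE is n_cases / total time at risk
mle = len(onsets) / (sum(onsets) + sum(censored))
```

Ignoring the onset times (treating cases as bare events) discards exactly the information the paper shows sharpens the parameter estimates.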

10.
The linear-nonlinear cascade model (LN model) has proven very useful in representing a neural system’s encoding properties, but less successful in reproducing the firing patterns of individual neurons whose behavior is strongly dependent on prior firing history. While the cell’s behavior can still usefully be considered as feature detection acting on a fluctuating input, some of the coding capacity of the cell is taken up by the increased firing rate due to a constant “driving” direct current (DC) stimulus. Furthermore, both the DC input and the post-spike refractory period generate regular firing, reducing the spike-timing entropy available for encoding time-varying fluctuations. In this paper, we address these issues, focusing on the example of motoneurons in which an afterhyperpolarization (AHP) current plays a dominant role in regularizing firing behavior. We explore the accuracy and generalizability of several alternative models for single neurons under changes in DC and variance of the stimulus input. We use a motoneuron simulation to compare coding models in neurons with and without the AHP current. Finally, we quantify the tradeoff between instantaneously encoding information about fluctuations and about the DC.
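For reference, a bare-bones LN cascade with an added DC drive can be sketched as follows. The filter shape, gain, and bin size are illustrative assumptions, not the motoneuron simulation used in the paper:

```python
import numpy as np

def ln_model(stimulus, kernel, dc=0.0, gain=10.0):
    """Linear-nonlinear cascade: filter the stimulus, add a constant DC
    drive, pass through a rectifying nonlinearity to get a firing rate,
    and emit Poisson spike counts in 1 ms bins."""
    drive = np.convolve(stimulus, kernel, mode='same') + dc
    rate = gain * np.maximum(drive, 0.0)                  # Hz
    return np.random.default_rng(0).poisson(rate * 0.001)  # counts per 1 ms bin

t = np.arange(200)
kernel = np.exp(-t[:30] / 5.0)       # illustrative exponential linear filter
stim = np.random.default_rng(1).normal(size=200)
spikes = ln_model(stim, kernel, dc=0.5)
```

Raising the DC term raises the mean rate without changing the feature (the kernel), which is exactly the fluctuation-versus-DC coding tradeoff the paper quantifies.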

11.
12.
Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence–absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence–absence data arising from Pollock's robust capture–recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence–absence data on two species of herons. Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.  
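The first-order Markov colony-site dynamics underlying these models can be sketched with per-site conditional probabilities of local extinction and colonization; the occupancy fraction then settles at p_col / (p_col + p_ext). A toy simulation with invented parameter values (the paper additionally estimates detection probabilities, which this sketch omits):

```python
import numpy as np

def simulate_colonies(n_sites, T, p_ext, p_col, seed=0):
    """Markov-process colony dynamics: each year, an occupied site goes
    locally extinct with probability p_ext and an empty site is colonized
    with probability p_col, independently across sites."""
    rng = np.random.default_rng(seed)
    occ = np.zeros((T, n_sites), dtype=bool)
    occ[0] = rng.random(n_sites) < 0.5
    for t in range(1, T):
        u = rng.random(n_sites)
        occ[t] = np.where(occ[t - 1], u > p_ext, u < p_col)
    return occ

occ = simulate_colonies(500, 200, p_ext=0.2, p_col=0.1)
# stationary occupancy approaches p_col / (p_col + p_ext) = 1/3
```

Constraining p_ext and p_col (e.g. constant vs. time-varying) and comparing fits with AIC is the model-selection step the abstract describes.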

13.
Hidden Markov Models (HMMs) are practical tools that provide a probabilistic basis for protein secondary structure prediction. In these models, usually only the information on the left-hand side of an amino acid is considered. Accordingly, these models seem to be inefficient with respect to long-range correlations. In this work we discuss a Segmental Semi-Markov Model (SSMM) in which the information on both sides of an amino acid is considered. It seems reasonable to assume that the information on both sides of an amino acid provides a suitable tool for measuring dependencies. We consider these dependencies by dividing them into shorter dependencies. Each of these dependency models can be applied to estimating the probability of segments in structural classes. Several conditional probabilities concerning the dependency of an amino acid on the residues appearing on both of its sides are considered. Based on these conditional probabilities, a weighted model is obtained to calculate the probability of each segment in a structure. This results in a 2.27% increase in prediction accuracy in comparison with ordinary Segmental Semi-Markov Models (SSMMs). We also compare the performance of our model with that of the Segmental Semi-Markov Model introduced by Schmidler et al. [C.S. Schmidler, J.S. Liu, D.L. Brutlag, Bayesian segmentation of protein secondary structure, J. Comp. Biol. 7(1/2) (2000) 233-248]. The calculations show that the overall prediction accuracy of our model is higher than that of the SSMM introduced by Schmidler.

14.
Large‐scale biodiversity data are needed to predict species' responses to global change and to address basic questions in macroecology. While such data are increasingly becoming available, their analysis is challenging because of the typically large heterogeneity in spatial sampling intensity and the need to account for observation processes. Two further challenges are accounting for spatial effects that are not explained by covariates, and drawing inference on dynamics at these large spatial scales. We developed dynamic occupancy models to analyze large‐scale atlas data. In addition to occupancy, these models estimate local colonization and persistence probabilities. We accounted for spatial autocorrelation using conditional autoregressive models and autologistic models. We fitted the models to detection/nondetection data collected on a quarter‐degree grid across southern Africa during two atlas projects, using the hadeda ibis (Bostrychia hagedash) as an example. The model accurately reproduced the range expansion between the first (SABAP1: 1987–1992) and second (SABAP2: 2007–2012) Southern African Bird Atlas Project into the drier parts of interior South Africa. Grid cells occupied during SABAP1 generally remained occupied, but colonization of unoccupied grid cells was strongly dependent on the number of occupied grid cells in the neighborhood. The detection probability strongly varied across space due to variation in effort, observer identity, seasonality, and unexplained spatial effects. We present a flexible hierarchical approach for analyzing grid‐based atlas data using dynamical occupancy models. Our model is similar to a species' distribution model obtained using generalized additive models but has a number of advantages. Our model accounts for the heterogeneous sampling process, spatial correlation, and perhaps most importantly, allows us to examine dynamic aspects of species ranges.

15.
Fourteen Göttingen minipigs were trained on two different visually guided conditional associative tasks. In a spatial conditional task, a black stimulus signalled that a response to the left was correct, and a white stimulus signalled that a response to the right was correct. In a conditional go/no-go task, a blue stimulus signalled go, and a red stimulus signalled no-go. The pigs were trained until a behavioural criterion of 90% correct for each of two consecutive sessions. For the spatial conditional task, all pigs reached this criterion in 520 trials or less. For the conditional go/no-go task, all pigs, except three, reached this criterion in 1600 trials or less. Sows and boars learned equally fast. The tasks can be useful for the testing of cognitive function in pig models of human brain disorders.

16.
Datta S, Sundaram R. Biometrics 2006, 62(3): 829-837
Multistage models are used to describe individuals (or experimental units) moving through a succession of "stages" corresponding to distinct states (e.g., healthy, diseased, diseased with complications, dead). The resulting data can be considered to be a form of multivariate survival data containing information about the transition times and the stages occupied. Traditional survival analysis is the simplest example of a multistage model, where individuals begin in an initial stage (say, alive) and move irreversibly to a second stage (death). In this article, we consider general multistage models with a directed tree structure (progressive models) in which individuals traverse through stages in a possibly non-Markovian manner. We construct nonparametric estimators of stage occupation probabilities and marginal cumulative transition hazards. Empirical calculations of these quantities are not possible due to the lack of complete data. We consider current status information which represents a more severe form of censoring than the commonly used right censoring. Asymptotic validity of our estimators can be justified using consistency results for nonparametric regression estimators. Finite-sample behavior of our estimators is studied by simulation, in which we show that our estimators based on these limited data compare well with those based on complete data. We also apply our method to a real-life data set arising from a cardiovascular diseases study in Taiwan.

17.
Bezzi M. Bio Systems 2005, 79(1-3): 183-189
A central problem in neural coding is to understand which features of the stimulus are encoded by the neural activity. Assuming that neuronal coding is optimized for information transmission, we can use mutual information maximization to extract the relevant features encoded in certain activity patterns. We show that this algorithm can be successfully applied to the study of different encoding strategies for location and direction of movement in hippocampal and lateral septal cells. Using this approach, we find that in the lateral septum, a significant amount of information about location can be encoded in patterns that are not place fields.

18.
Kaiser MS, Caragea PC. Biometrics 2009, 65(3): 857-865
The application of Markov random field models to problems involving spatial data on lattice systems requires decisions regarding a number of important aspects of model structure. Existing exploratory techniques appropriate for spatial data do not provide direct guidance to an investigator about these decisions. We introduce an exploratory quantity that is directly tied to the structure of Markov random field models based on one-parameter exponential family conditional distributions. This exploratory diagnostic is shown to be a meaningful statistic that can inform decisions involved in modeling spatial structure with statistical dependence terms. In this article, we develop the diagnostic, illustrate its use in guiding modeling decisions with simulated examples, and reexamine a previously published application.

19.
In longitudinal studies where time to a final event is the ultimate outcome, information is often available about intermediate events the individuals may experience during the observation period. Even though many extensions of the Cox proportional hazards model have been proposed to model such multivariate time-to-event data, these approaches are still very rarely applied to real datasets. The aim of this paper is to illustrate the application of extended Cox models for multiple time-to-event data and to show their implementation in popular statistical software packages. We demonstrate a systematic way of jointly modelling similar or repeated transitions in follow-up data by analysing an event-history dataset consisting of 270 breast cancer patients who were followed up for different clinical events during treatment of metastatic disease. First, we show how this methodology can also be applied to non-Markovian stochastic processes by representing these processes as "conditional" Markov processes. Secondly, we compare the application of different Cox-related approaches to the breast cancer data by varying their key model components (i.e. analysis time scale, risk set and baseline hazard function). Our study showed that extended Cox models are a powerful tool for analysing complex event-history datasets, since the approach can address many dynamic data features such as multiple time scales, dynamic risk sets, time-varying covariates, transition-by-covariate interactions, autoregressive dependence and intra-subject correlation.

20.
Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), which is a local measure of the variance estimate of the sensory input. We show how this sensory information can be read out and combined to infer from the available information profile which stimulus value is perceived during a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the stimulus value that leads to the same information for each of the alternative directions, and compare the model prediction to standard models considered in the literature (population vector, maximum likelihood, maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task that induces perceptual misjudgements of a target direction of motion by task irrelevant motion in the spatial surround of the target stimulus (motion repulsion). By using the neurophysiological insight that surround motion suppresses neuronal responses to the target motion in the center, all models predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (error on the perceived value) was also explained through the changes of the total FI content with varying surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions do not rely on the most active neurons, but rather on the most informative neuronal responses. We statistically compare the prediction capability of the FI decoding approach and the standard decoding models. 
Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but with different neuronal tuning characteristics underlying perception. Compared to the FI approach, the prediction power of the standard models was based on neurons with far wider tuning widths and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations that explicitly encode the available sensory information, and provides testable neurophysiological predictions on the neuronal tuning characteristics underlying human perceptual decisions.
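The Fisher information profile at the heart of this decoding scheme has a simple closed form for independent Poisson neurons with smooth tuning curves: FI(s) = Σ_i f_i'(s)² / f_i(s). A sketch with illustrative Gaussian tuning parameters, not the fitted model of the paper:

```python
import numpy as np

def fisher_information(s, centers, width=20.0, gain=30.0):
    """Population Fisher information at stimulus value s for independent
    Poisson neurons with Gaussian tuning curves f_i:
    FI(s) = sum_i f_i'(s)^2 / f_i(s)."""
    f = gain * np.exp(-(s - centers) ** 2 / (2 * width ** 2)) + 1e-9
    fprime = f * (centers - s) / width ** 2     # analytic derivative of f
    return np.sum(fprime ** 2 / f)

centers = np.linspace(-180.0, 180.0, 73)        # preferred directions (deg)
fi = fisher_information(0.0, centers)
# discrimination threshold scales as 1 / sqrt(FI)
threshold = 1.0 / np.sqrt(fi)
```

Surround suppression lowers the gain of neurons tuned near the target, reshaping this FI profile; the perceived direction in the proposed scheme is the value at which the information for the two discrimination alternatives balances.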


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号