Similar documents
20 similar documents found (search time: 968 ms)
1.
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory, although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and the level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability-matching and sample-averaging strategies. Instead, we show that subjects' response variability was mainly driven by a combination of a noisy estimation of the parameters of the priors and variability in the decision process, which we represent as a noisy or stochastic posterior.
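The estimation task in entry 1 can be sketched numerically: the optimal observer multiplies the displayed prior by a Gaussian likelihood centered on the noisy cue and reports the posterior mean. This is a minimal illustration, not the authors' code; the location grid, the bimodal prior mixture, and the noise level are invented for the example.

```python
import numpy as np

def posterior_mean(prior, grid, cue, sigma):
    """Combine a discretized prior over target locations with a Gaussian
    likelihood centered on the noisy cue; return the posterior mean
    (the Bayes-optimal estimate under squared-error loss)."""
    like = np.exp(-0.5 * ((grid - cue) / sigma) ** 2)
    post = prior * like
    post /= post.sum()
    return float(np.dot(grid, post))

# Hypothetical bimodal prior: equal mixture of two Gaussians on a grid.
grid = np.linspace(-10.0, 10.0, 2001)
prior = 0.5 * np.exp(-0.5 * ((grid + 3.0) / 1.0) ** 2) \
      + 0.5 * np.exp(-0.5 * ((grid - 3.0) / 1.0) ** 2)
prior /= prior.sum()

# A cue near the right-hand mode is pulled toward that mode.
est = posterior_mean(prior, grid, cue=2.0, sigma=2.0)
```

With a cue at 2.0 and noise σ = 2, the estimate lands between the cue and the nearer prior mode at +3, illustrating the attraction toward the prior that the study measures.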

2.
In most QTL mapping studies, phenotypes are assumed to follow normal distributions. Deviations from this assumption may lead to the detection of false-positive QTL. To improve the robustness of Bayesian QTL mapping methods, the normal distribution for residuals is replaced with a skewed Student-t distribution. The latter distribution is able to account for both heavy tails and skewness, each controlled by a single parameter. The Bayesian QTL mapping method using a skewed Student-t distribution is evaluated with simulated data sets under five different scenarios of residual error distributions and QTL effects.
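One standard construction of a skewed Student-t density (the Azzalini-type form, which may differ from the exact parameterization used in entry 2) makes the two-parameter control explicit: the degrees of freedom `df` govern tail weight and `alpha` governs skewness. A sketch, using only `scipy.stats.t`:

```python
import numpy as np
from scipy.stats import t

def skew_t_pdf(x, df, alpha):
    """Azzalini-type skew-t density: a Student-t kernel (heavy tails,
    controlled by df) reweighted by a t CDF term (skewness, controlled
    by alpha). alpha = 0 recovers the symmetric Student-t."""
    x = np.asarray(x, dtype=float)
    w = alpha * x * np.sqrt((df + 1.0) / (df + x ** 2))
    return 2.0 * t.pdf(x, df) * t.cdf(w, df + 1)
```

Setting `alpha > 0` tilts probability mass to the right while leaving the polynomial tail decay of the underlying t kernel intact.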

3.
In this paper, we present a generic approach that can be used to infer how subjects make optimal decisions under uncertainty. This approach induces a distinction between a subject's perceptual model, which underlies the representation of a hidden "state of affairs", and a response model, which predicts the ensuing behavioural (or neurophysiological) responses to those inputs. We start with the premise that subjects continuously update a probabilistic representation of the causes of their sensory inputs to optimise their behaviour. In addition, subjects have preferences or goals that guide decisions about actions given the above uncertain representation of these hidden causes or state of affairs. From a Bayesian decision theoretic perspective, uncertain representations are so-called "posterior" beliefs, which are influenced by subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden) state of affairs. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. Critically, this enables one to "observe the observer", i.e. identify (context- or subject-dependent) prior beliefs and utility functions using psychophysical or neurophysiological measures. In this paper, we describe the main theoretical components of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions). In a companion paper ('Observing the observer (II): deciding when to decide'), we describe a concrete implementation of it and demonstrate its utility by applying it to simulated and real reaction time data from an associative learning task.
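The core computation entry 3 describes, choosing the action that minimizes posterior expected loss, fits in a few lines. This is a toy sketch, not the authors' response model; the two-state posterior and the loss matrix are invented to show how an asymmetric loss can override the more probable state.

```python
import numpy as np

def optimal_decision(posterior, loss):
    """Given posterior beliefs over hidden states (length-S vector) and a
    loss matrix loss[a, s] (cost of action a when the true state is s),
    return the action minimizing posterior expected loss, plus the
    expected loss of every action."""
    expected = loss @ posterior           # expected loss per action
    return int(np.argmin(expected)), expected

# Hypothetical example: state 1 is more probable, but wrongly acting on
# it is five times as costly, so the optimal decision is action 0.
posterior = np.array([0.4, 0.6])
loss = np.array([[0.0, 1.0],    # action 0: costly only if state is 1
                 [5.0, 0.0]])   # action 1: very costly if state is 0
action, exp_loss = optimal_decision(posterior, loss)
```

This is exactly the sense in which identifying a subject's loss function ("observing the observer") matters: the same posterior beliefs can produce different behaviour under different preferences.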

4.
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

5.
Most existing statistical methods for mapping quantitative trait loci (QTL) are not suitable for analyzing survival traits with a skewed distribution and a censoring mechanism. As a result, researchers have incorporated parametric and semi-parametric models of survival analysis into the framework of interval mapping for QTL controlling survival traits. In survival analysis, the accelerated failure time (AFT) model is considered a de facto standard and fundamental model for data analysis. Based on the AFT model, we propose a parametric approach for mapping survival traits, using the EM algorithm to obtain the maximum likelihood estimates of the parameters. Also, with the Bayesian information criterion (BIC) as a model selection criterion, an optimal mapping model is constructed by choosing specific error distributions with maximum likelihood and parsimonious parameters. Two real datasets were analyzed by our proposed method for illustration. The results show that among the five commonly used survival distributions, the Weibull distribution is the optimal survival function for mapping of heading time in rice, while the log-logistic distribution is the optimal one for hyperoxic acute lung injury.
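The BIC-based choice among candidate survival distributions in entry 5 can be sketched on uncensored simulated data (the paper's actual method embeds this in EM-based interval mapping; here we only illustrate the selection step, and the Weibull parameters, sample size, and candidate set are invented):

```python
import numpy as np
from scipy import stats

def bic(loglik, n_params, n):
    """Bayesian information criterion; smaller is better."""
    return n_params * np.log(n) - 2.0 * loglik

rng = np.random.default_rng(0)
data = stats.weibull_min.rvs(c=2.0, scale=3.0, size=500, random_state=rng)

candidates = {
    "weibull":      stats.weibull_min,
    "lognormal":    stats.lognorm,
    "log-logistic": stats.fisk,    # scipy's name for the log-logistic
}
scores = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)            # lifetimes: fix location at 0
    loglik = dist.logpdf(data, *params).sum()
    scores[name] = bic(loglik, len(params) - 1, len(data))  # loc not counted

best = min(scores, key=scores.get)
```

Each candidate here has the same number of free parameters, so BIC ranking reduces to likelihood ranking; with censored data the log-likelihood would instead mix density terms for events and survival-function terms for censored times.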

6.
Often in biomedical studies, the routine use of linear mixed-effects models (based on Gaussian assumptions) can be questionable when the longitudinal responses are skewed in nature. Skew-normal/elliptical models are widely used in those situations. Often, those skewed responses might also be subjected to some upper and lower quantification limits (QLs; viz., longitudinal viral-load measures in HIV studies), beyond which they are not measurable. In this paper, we develop a Bayesian analysis of censored linear mixed models replacing the Gaussian assumptions with skew-normal/independent (SNI) distributions. The SNI is an attractive class of asymmetric heavy-tailed distributions that includes the skew-normal, skew-t, skew-slash, and skew-contaminated normal distributions as special cases. The proposed model provides flexibility in capturing the effects of skewness and heavy tail for responses that are either left- or right-censored. For our analysis, we adopt a Bayesian framework and develop a Markov chain Monte Carlo algorithm to carry out the posterior analyses. The marginal likelihood is tractable, and utilized to compute not only some Bayesian model selection measures but also case-deletion influence diagnostics based on the Kullback–Leibler divergence. The newly developed procedures are illustrated with a simulation study as well as an HIV case study involving analysis of longitudinal viral loads.
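The censoring idea in entry 6 is Tobit-style: observations below the quantification limit contribute a CDF term to the likelihood instead of a density term. A minimal sketch with a plain skew-normal error (the paper's SNI class and MCMC machinery are much richer; the data values below are invented):

```python
import numpy as np
from scipy.stats import skewnorm

def censored_loglik(y, below_ql, a, loc, scale):
    """Log-likelihood for left-censored skew-normal data: fully observed
    points contribute the log-density, values recorded at/below the
    quantification limit contribute the log-CDF at that limit."""
    y = np.asarray(y, dtype=float)
    below_ql = np.asarray(below_ql, dtype=bool)
    ll = skewnorm.logpdf(y[~below_ql], a, loc, scale).sum()
    ll += skewnorm.logcdf(y[below_ql], a, loc, scale).sum()
    return float(ll)

# With a = 0 the skew-normal reduces to the normal, giving an easy check.
y = np.array([-1.0, 0.5, 2.0])        # -1.0 is a quantification limit
below_ql = np.array([True, False, False])
ll = censored_loglik(y, below_ql, a=0.0, loc=0.0, scale=1.0)
```

Right-censoring works symmetrically with the log-survival function (`logsf`) in place of `logcdf`.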

7.
Several areas of the brain are known to participate in temporal processing. Neurons in the prefrontal cortex (PFC) are thought to contribute to perception of time intervals. However, it remains unclear whether the PFC itself can generate time intervals independently of external stimuli. Here we describe a group of PFC neurons in area 9 that became active when monkeys recognized a particular elapsed time within the range of 1-7 seconds. Another group of area 9 neurons became active only when subjects reproduced a specific interval without external cues. Both types of neurons were individually tuned to recognize or reproduce particular intervals. Moreover, the injection of muscimol, a GABA agonist, into this area bilaterally resulted in an increase in the error rate during time interval reproduction. These results suggest that area 9 may process multi-second intervals not only in perceptual recognition, but also in internal generation of time intervals.

8.
This study is aimed at improving the analysis of data used in identifying marker-associated effects on quantitative traits, specifically to account for possible departures from a Gaussian distribution of the trait data and to allow for asymmetry of marker effects attributable to phenotypic divergence between parental lines. A Bayesian procedure for analysing marker effects at the whole-genome level is presented. The procedure adopts a skewed t-distribution as a prior distribution of marker effects. The model with the skewed t-process includes Gaussian prior distributions, skewed Gaussian prior distributions and symmetric t-distributions as special cases. A Markov chain Monte Carlo algorithm for obtaining marginal posterior distributions of the unknowns is also presented. The method was applied to a dataset on three traits (live weight, carcass length and backfat depth) measured in an F2 cross between Iberian and Landrace pigs. The distribution of marker effects was clearly asymmetric for carcass length and backfat depth, whereas it was symmetric for live weight. The t-distribution seems more appropriate for describing the distribution of marker effects on backfat depth.

9.
Asymmetry of Early Paleozoic trilobites
Asymmetry in fossils can arise through a variety of biological and geological mechanisms. If geological sources of asymmetry can be minimized or factored out, it might be possible to assess biological sources of asymmetry. Fluctuating asymmetry (FA), a general measure of developmental precision, is documented for nine species of lower Paleozoic trilobites. Taphonomic analyses suggest that the populations studied for each taxon span relatively short time intervals that are approximately equal in duration. Tectonic deformation may have affected the specimens studied, since deviations from normal distributions are common. Several measures of FA were applied to 3–5 homologous measures in each taxon. Measurement error was assessed by the analysis of variance (ANOVA) for repeated measurements of individual specimens and by analysis of the statistical moments of the distributions of asymmetry measures. Measurement error was significantly smaller than the difference between measures taken on each side of a specimen. However, the distribution of differences between sides often deviated from a mean of zero, or was skewed or kurtotic. Regression of levels of FA against geologic age revealed no statistically significant changes in levels of asymmetry through time. Geological and taphonomic effects make it difficult to identify asymmetry due to biological factors. Although fluctuating asymmetry is a function of both intrinsic and extrinsic factors, the results suggest that early Cambrian trilobites possessed genetic or developmental mechanisms used to maintain developmental stability comparable to those of younger trilobites. Although the measures are biased by time averaging and deviations from the normal distribution, these data do not lend strong support to 'genomic' hypotheses that have been suggested to control the tempo of the Cambrian radiation.
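The FA summaries and moment checks in entry 9 follow a common recipe: compute right-minus-left differences for a bilateral trait, then examine their mean (directional asymmetry), variance (the FA signal), skewness, and kurtosis. A sketch, not the study's actual indices, with invented measurements:

```python
import numpy as np
from scipy import stats

def fa_indices(right, left):
    """Common fluctuating-asymmetry summaries for one bilateral trait:
    FA1 = mean absolute (R - L), FA4 = sample variance of (R - L), plus
    the moments used to screen for directional asymmetry (nonzero mean)
    and antisymmetry (skewed or kurtotic difference distribution)."""
    d = np.asarray(right, dtype=float) - np.asarray(left, dtype=float)
    return {
        "FA1": float(np.mean(np.abs(d))),
        "FA4": float(np.var(d, ddof=1)),
        "mean": float(d.mean()),        # ~0 expected for pure FA
        "skew": float(stats.skew(d)),
        "kurtosis": float(stats.kurtosis(d)),
    }

# Hypothetical paired measurements on four specimens.
summaries = fa_indices([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

In practice these between-side differences are compared against repeated-measurement error (e.g. via a sides-by-individuals ANOVA) before interpreting them as developmental instability.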

10.
This paper introduces a Bayesian approach for composite quantile regression employing the skewed Laplace distribution for the error distribution. We use a two-level hierarchical Bayesian model for coefficient estimation and feature selection which assumes a prior distribution that favors sparseness. An efficient Gibbs sampling algorithm is developed to update the unknown quantities from the posteriors. The proposed approach is illustrated via simulation studies and two real datasets. Results indicate that the proposed approach performs well in comparison to the other approaches.
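The link entry 10 relies on is that maximizing an asymmetric-Laplace likelihood is equivalent to minimizing the check (pinball) loss of quantile regression. A minimal sketch of both pieces (unit scale and zero location are assumed for simplicity):

```python
import numpy as np

def check_loss(u, tau):
    """Pinball (check) loss rho_tau(u) = u * (tau - 1[u < 0]); minimizing
    its sum over residuals u = y - q estimates the tau-th quantile."""
    u = np.asarray(u, dtype=float)
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

def ald_logpdf(u, tau):
    """Asymmetric-Laplace log-density (location 0, scale 1): its negative
    log-likelihood is the check loss up to a constant, which is what
    makes the ALD the working error model for Bayesian quantile
    regression."""
    return np.log(tau * (1.0 - tau)) - check_loss(u, tau)
```

Composite quantile regression sums such terms over several values of `tau` with shared slope coefficients; the sparseness-inducing prior then acts on those coefficients.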

11.
Classical multivariate mixed models that acknowledge the correlation of patients through the incorporation of normal error terms are widely used in cohort studies. Violation of the normality assumption can make the statistical inference vague. In this paper, we propose a Bayesian parametric approach by relaxing this assumption and substituting some flexible distributions in fitting multivariate mixed models. This strategy allows for the skewness and the heavy tails of error-term distributions and thus makes inferences robust to the violation. This approach uses flexible skew-elliptical distributions, including skewed, fat-, or thin-tailed distributions, and includes the normal model as a special case. We use real data obtained from a prospective cohort study on low back pain to illustrate the usefulness of our proposed approach.

12.
In this paper we present a nonparametric Bayesian approach for fitting unsmooth or highly oscillating functions in regression models with binary responses. The approach extends previous work by Lang et al. for Gaussian responses. Nonlinear functions are modelled by first or second order random walk priors with locally varying variances or smoothing parameters. Estimation is fully Bayesian and uses latent utility representations of binary regression models for efficient block sampling from the full conditionals of nonlinear functions.
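The random-walk priors in entry 12 are Gaussian smoothness priors whose precision matrix is built from difference operators; a second-order random walk penalizes second differences of the function values. A small sketch of that construction (the locally varying variances of the paper would scale individual rows; here a single smoothing parameter is assumed):

```python
import numpy as np

def rw2_penalty(n):
    """Precision (penalty) matrix K = D'D of a second-order random walk
    prior on n equally spaced function values, where D is the
    (n-2) x n second-difference matrix; a smoothing parameter scales K
    in the prior."""
    D = np.diff(np.eye(n), n=2, axis=0)   # second differences of rows
    return D.T @ D

K = rw2_penalty(6)
```

K is rank-deficient by design: constant and linear functions incur zero penalty, so the prior only shrinks curvature, which is what lets it fit unsmooth functions when the smoothing parameter varies locally.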

13.
14.
15.
In a companion paper [1], we have presented a generic approach for inferring how subjects make optimal decisions under uncertainty. From a Bayesian decision theoretic perspective, uncertain representations correspond to "posterior" beliefs, which result from integrating (sensory) information with subjective "prior" beliefs. Preferences and goals are encoded through a "loss" (or "utility") function, which measures the cost incurred by making any admissible decision for any given (hidden or unknown) state of the world. By assuming that subjects make optimal decisions on the basis of updated (posterior) beliefs and utility (loss) functions, one can evaluate the likelihood of observed behaviour. In this paper, we describe a concrete implementation of this meta-Bayesian approach (i.e. a Bayesian treatment of Bayesian decision theoretic predictions) and demonstrate its utility by applying it to both simulated and empirical reaction time data from an associative learning task. Here, inter-trial variability in reaction times is modelled as reflecting the dynamics of the subjects' internal recognition process, i.e. the updating of representations (posterior densities) of hidden states over trials while subjects learn probabilistic audio-visual associations. We use this paradigm to demonstrate that our meta-Bayesian framework allows for (i) probabilistic inference on the dynamics of the subject's representation of environmental states, and for (ii) model selection to disambiguate between alternative preferences (loss functions) human subjects could employ when dealing with trade-offs, such as between speed and accuracy. Finally, we illustrate how our approach can be used to quantify subjective beliefs and preferences that underlie inter-individual differences in behaviour.

16.
Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem.

17.
Confidence intervals for the mean of one sample and the difference in means of two independent samples based on the ordinary-t statistic suffer deficiencies when samples come from skewed families. In this article we evaluate several existing techniques and propose new methods to improve coverage accuracy. The methods examined include the ordinary-t, the bootstrap-t, the bias-corrected and accelerated (BCa) bootstrap, and three new intervals based on transformation of the t-statistic. Our study shows that our new transformation intervals and the bootstrap-t intervals give the best coverage accuracy for a variety of skewed distributions, and that our new transformation intervals have shorter interval lengths.
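The bootstrap-t interval that entry 17 compares against studentizes each resample and inverts the empirical quantiles of the resulting t* statistics, letting the interval adapt to skewness. A minimal sketch for a one-sample mean (sample, resample count, and seed are invented for the example):

```python
import numpy as np

def bootstrap_t_ci(x, alpha=0.05, n_boot=2000, seed=0):
    """Bootstrap-t confidence interval for the mean: studentize each
    resample, take quantiles of the t* distribution, and invert. Unlike
    the ordinary-t interval, the t* quantiles need not be symmetric."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    boots = rng.choice(x, size=(n_boot, n), replace=True)
    bm = boots.mean(axis=1)
    bse = boots.std(axis=1, ddof=1) / np.sqrt(n)
    tstar = (bm - m) / bse
    lo_q, hi_q = np.quantile(tstar, [alpha / 2, 1 - alpha / 2])
    return m - hi_q * se, m - lo_q * se   # note the quantile reversal

x = np.random.default_rng(1).exponential(size=40)   # a skewed sample
lo, hi = bootstrap_t_ci(x)
```

For right-skewed data like this, the interval typically extends further above the sample mean than below it, which is exactly the asymmetry the ordinary-t interval cannot express.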

18.
Elizabeth R. Brown, Biometrics 2010, 66(4):1266-1274
We present a Bayesian model to estimate the time-varying sensitivity of a diagnostic assay when the assay is given repeatedly over time, disease status is changing, and the gold standard is only partially observed. The model relies on parametric assumptions for the distribution of the latent time of disease onset and the time-varying sensitivity. Additionally, we illustrate the incorporation of historical data for constructing prior distributions. We apply the new methods to data collected in a study of mother-to-child transmission of HIV and include a covariate for sensitivity to assess whether two different assays have different sensitivity profiles.

19.
Studies of auditory temporal resolution in birds have traditionally examined processing capabilities by assessing behavioral discrimination of sounds varying in temporal structure. Here, temporal resolution of the brown-headed cowbird (Molothrus ater) was measured using two auditory evoked potential (AEP)-based methods: auditory brainstem responses (ABRs) to paired clicks and envelope following responses (EFRs) to amplitude-modulated tones. The basic patterns observed in cowbirds were similar to those found in other songbird species, suggesting similar temporal processing capabilities. The amplitude of the ABR to the second click was less than that of the first click at inter-click intervals less than 10 ms, and decreased to 30% at an interval of 1 ms. EFR amplitude was generally greatest at modulation frequencies from 335 to 635 Hz and decreased at higher and lower modulation frequencies. Compared to data from terrestrial mammals these results support recent behavioral findings of enhanced temporal resolution in birds. General agreement between these AEP results and behaviorally based studies suggests that AEPs can provide a useful assessment of temporal resolution in wild bird species.

20.
To assess the importance of variation in observer effort between and within bird atlas projects, and to demonstrate the use of relatively simple conditional autoregressive (CAR) models for analyzing grid-based atlas data with varying effort. The study area comprised Pennsylvania and West Virginia, United States of America. We used varying proportions of randomly selected training data to assess whether variations in observer effort can be accounted for using CAR models and whether such models would still be useful for atlases with incomplete data. We then evaluated whether the application of these models influenced our assessment of distribution change between two atlas projects separated by twenty years (Pennsylvania), and tested our modeling methodology on a state bird atlas with incomplete coverage (West Virginia). Conditional autoregressive models that included observer effort and landscape covariates were able to make robust predictions of species distributions in cases of sparse data coverage. Further, we found that CAR models without landscape covariates performed favorably. These models also account for variation in observer effort between atlas projects, and this can have a profound effect on the overall assessment of distribution change. Accounting for variation in observer effort in atlas projects is critically important. CAR models provide a useful modeling framework for accounting for variation in observer effort in bird atlas data because they are relatively simple to apply and quick to run.
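The spatial backbone of a CAR model like those in entry 20 is a precision matrix built from the grid-block adjacency structure. A sketch of the proper-CAR form Q = τ(D − ρW), which may differ from the exact CAR specification used in the study; the four-block adjacency, ρ, and τ below are invented:

```python
import numpy as np

def car_precision(W, rho, tau):
    """Precision matrix Q = tau * (D - rho * W) of a proper conditional
    autoregressive (CAR) prior: W is a symmetric 0/1 adjacency matrix
    over grid blocks, D the diagonal matrix of neighbor counts, and
    |rho| < 1 keeps Q positive definite (a proper prior)."""
    W = np.asarray(W, dtype=float)
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Four hypothetical atlas blocks in a row; adjacent blocks are neighbors.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Q = car_precision(W, rho=0.9, tau=2.0)
```

Under this prior, each block's effect is conditionally a shrunken average of its neighbors' effects, which is what lets sparsely surveyed blocks borrow strength from well-surveyed neighbors once observer effort enters as a covariate.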


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号