Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory, although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and the level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead, we show that subjects' response variability was mainly driven by a combination of noisy estimation of the parameters of the priors and variability in the decision process, which we represent as a noisy or stochastic posterior.
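The ideal-observer computation underlying such a task can be sketched on a grid: multiply the explicitly shown prior by a Gaussian cue likelihood and take the posterior mean. This is only a minimal illustration with hypothetical parameters, not the richer observer models compared in the paper:

```python
import numpy as np

def bayes_estimate(x_grid, prior, cue, sigma_cue):
    """Grid-based posterior mean for the target location.

    Combines an explicitly shown prior density over locations with a
    Gaussian likelihood centered on the noisy cue; the posterior mean
    is the optimal estimate under squared-error loss.
    """
    likelihood = np.exp(-0.5 * ((x_grid - cue) / sigma_cue) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()          # normalize on the uniform grid
    return float(np.sum(x_grid * posterior))

# Hypothetical bimodal prior: equal-weight Gaussians at -2 and +2.
x = np.linspace(-6.0, 6.0, 2001)
prior = np.exp(-0.5 * (x + 2) ** 2) + np.exp(-0.5 * (x - 2) ** 2)
prior /= prior.sum()

est = bayes_estimate(x, prior, cue=1.0, sigma_cue=1.0)
# A reliable cue near the right mode pulls the estimate toward +2.
```

With this bimodal prior the posterior is itself a two-component mixture, and the cue at +1 reweights it toward the nearer mode; a "stochastic posterior" as in the paper would add decision noise on top of this deterministic estimate.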

2.
We present a source localization method for electroencephalographic (EEG) and magnetoencephalographic (MEG) data based on an estimate of sparsity obtained through the eigencanceler (EIG), a spatial filter whose weights are constrained to lie in the noise subspace. The EIG rejects directional interferences while minimizing noise contributions and maintaining specified beam pattern constraints. In our case, the EIG is used to estimate the sparsity of the signal as a function of position; we then use this information to spatially restrict the neural sources to locations outside the sparsity maxima. As proof of concept, we incorporate this restriction into the “classical” linearly constrained minimum variance (LCMV) source localization approach in order to enhance its performance. We present numerical examples to evaluate the proposed method using realistically simulated EEG/MEG data for different signal-to-noise ratio (SNR) conditions and various levels of correlation between sources, as well as real EEG/MEG measurements of median nerve stimulation. Our results show that the proposed method has the potential to reduce the bias in the search for neural sources in the classical approach, as well as to make it more effective in localizing correlated sources.

3.
Nathan P. Lemoine. Oikos 2019, 128(7): 912-928
Throughout the last two decades, Bayesian statistical methods have proliferated throughout ecology and evolution. Numerous previous references established both philosophical and computational guidelines for implementing Bayesian methods. However, protocols for incorporating prior information, the defining characteristic of Bayesian philosophy, are nearly nonexistent in the ecological literature. Here, I hope to encourage the use of weakly informative priors in ecology and evolution by providing a ‘consumer's guide’ to weakly informative priors. The first section outlines three reasons why ecologists should abandon noninformative priors: 1) common flat priors are not always noninformative, 2) noninformative priors provide the same result as simpler frequentist methods, and 3) noninformative priors suffer from the same high type I and type M error rates as frequentist methods. The second section provides a guide for implementing informative priors, wherein I detail convenient ‘reference’ prior distributions for common statistical models (i.e. regression, ANOVA, hierarchical models). I then use simulations to visually demonstrate how informative priors influence posterior parameter estimates. With the guidelines provided here, I hope to encourage the use of weakly informative priors for Bayesian analyses in ecology. Ecologists can and should debate the appropriate form of prior information, but should consider weakly informative priors as the new ‘default’ prior for any Bayesian model.
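The contrast between a flat prior and a weakly informative one can be shown with the conjugate normal-normal update for a single regression slope. This is a minimal sketch (the values are hypothetical, and the abstract's simulations cover full models, not one coefficient):

```python
import numpy as np

def posterior_slope(beta_hat, se, prior_sd):
    """Normal-normal conjugate update for a regression slope.

    Prior: beta ~ N(0, prior_sd^2); likelihood: beta_hat ~ N(beta, se^2).
    prior_sd = np.inf reproduces the flat (noninformative) prior, whose
    posterior mean matches the frequentist estimate exactly.
    """
    prior_prec = 0.0 if np.isinf(prior_sd) else prior_sd ** -2
    post_var = 1.0 / (se ** -2 + prior_prec)
    post_mean = post_var * (beta_hat / se ** 2)
    return post_mean, np.sqrt(post_var)

# A noisy, exaggerated estimate from a small study:
flat_mean, _ = posterior_slope(beta_hat=2.0, se=1.0, prior_sd=np.inf)
weak_mean, _ = posterior_slope(beta_hat=2.0, se=1.0, prior_sd=1.0)
# The flat prior returns 2.0 unchanged; the weakly informative N(0, 1)
# prior shrinks it to 1.0, guarding against type M (magnitude) errors.
```

This makes the abstract's points concrete: the flat prior adds nothing beyond the frequentist answer, while the weakly informative prior pulls implausibly large, noisy estimates toward zero.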

4.
Bayesian inference allows the transparent communication and systematic updating of model uncertainty as new data become available. When applied to material flow analysis (MFA), however, Bayesian inference is undermined by the difficulty of defining proper priors for the MFA parameters and quantifying the noise in the collected data. We start to address these issues by first deriving and implementing an expert elicitation procedure suitable for generating MFA parameter priors. Second, we propose to learn the data noise concurrent with the parametric uncertainty. These methods are demonstrated using a case study on the 2012 US steel flow. Eight experts are interviewed to elicit distributions on steel flow uncertainty from raw materials to intermediate goods. The experts' distributions are combined and weighted according to the expertise demonstrated in response to seeding questions. These aggregated distributions form our model parameters' informative priors. Sensible, weakly informative priors are adopted for learning the data noise. Bayesian inference is then performed to update the parametric and data noise uncertainty given MFA data collected from the United States Geological Survey and the World Steel Association. The results show a reduction in MFA parametric uncertainty when incorporating the collected data. Only a modest reduction in data noise uncertainty was observed using 2012 data; however, greater reductions were achieved when using data from multiple years in the inference. These methods generate transparent MFA and data noise uncertainties learned from data rather than pre-assumed data noise levels, providing a more robust basis for decision-making that affects the system.

5.
Agresti A, Min Y. Biometrics 2005, 61(2): 515-523
This article investigates the performance, in a frequentist sense, of Bayesian confidence intervals (CIs) for the difference of proportions, relative risk, and odds ratio in 2 x 2 contingency tables. We consider beta priors, logit-normal priors, and related correlated priors for the two binomial parameters. The goal was to analyze whether certain settings for prior parameters tend to provide good coverage performance regardless of the true association parameter values. For the relative risk and odds ratio, we recommend tail intervals over highest posterior density (HPD) intervals, for invariance reasons. To protect against potentially very poor coverage probabilities when the effect is large, it is best to use a diffuse prior, and we recommend the Jeffreys prior. Otherwise, with relatively small samples, Bayesian CIs using more informative (even uniform) priors tend to have poorer performance than the frequentist CIs based on inverting score tests, which perform uniformly quite well for these parameters.
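A Jeffreys-prior tail interval of the kind recommended above can be sketched by Monte Carlo: independent Beta(0.5, 0.5) priors on each binomial proportion give Beta posteriors, and the equal-tail interval is read off draws of the induced odds ratio. The counts below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def jeffreys_tail_ci(y1, n1, y2, n2, level=0.95, draws=100_000):
    """Equal-tail posterior interval for the odds ratio of a 2x2 table.

    Independent Jeffreys Beta(0.5, 0.5) priors on the two binomial
    proportions give Beta posteriors; the tail interval is taken from
    quantiles of the Monte Carlo odds-ratio draws.
    """
    p1 = rng.beta(y1 + 0.5, n1 - y1 + 0.5, draws)
    p2 = rng.beta(y2 + 0.5, n2 - y2 + 0.5, draws)
    odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
    alpha = 1 - level
    return np.quantile(odds_ratio, [alpha / 2, 1 - alpha / 2])

# Hypothetical table: 15/20 successes vs 5/20 (sample odds ratio 9).
lo, hi = jeffreys_tail_ci(15, 20, 5, 20)
# The interval excludes 1, flagging the association.
```

Unlike an HPD interval, this tail interval transforms consistently under monotone reparameterization (e.g. odds ratio to log odds ratio), which is the invariance reason the abstract gives for preferring it.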

6.
The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans.

7.
Objectives: Adaptive steepest descent projection onto convex sets (ASD-POCS) algorithms with Lp-norm (0 < p ≤ 1) regularization have shown great promise in sparse-view X-ray CT reconstruction. However, different choices of p lead to different noise and resolution performance. It is therefore important to have a reliable method for evaluating the resolution and noise properties of ASD-POCS algorithms under different Lp-norm priors.
Methods: A comparative performance evaluation of ASD-POCS algorithms under different Lp-norm (0 < p ≤ 2) priors was performed in terms of the modulation transfer function (MTF), noise power spectrum (NPS), and noise equivalent quanta (NEQ). Simulated data sets from the EGSnrc/BEAMnrc Monte Carlo system and an actual mouse data set were used for algorithm comparison.
Results: A considerable MTF improvement can be achieved as p decreases. The L1-regularization-based algorithm obtains the best noise performance and shows superiority in the NEQ evaluation. The advantage of the L1-norm prior is also confirmed by reconstructions from the actual mouse data set through contrast-to-noise ratio (CNR) comparison.
Conclusion: Although ASD-POCS algorithms using small Lp-norm (p ≤ 0.5) priors yield a higher MTF than high Lp-norm priors do, the best noise-resolution performance is achieved when p is between 0.8 and 1. These results can serve as a reference for the choice of p in Lp-norm (0 < p ≤ 2) regularization.
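The sparsity-enforcing behavior of the p = 1 case can be illustrated by its proximal operator, soft thresholding. This is only a sketch of the L1 shrinkage idea on a toy coefficient vector, not the full ASD-POCS iteration (which alternates data-consistency projections with regularization steps):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (the p = 1 case).

    Entries smaller than lam in magnitude are set exactly to zero,
    enforcing sparsity; larger entries shrink toward zero by lam.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([0.05, -0.2, 1.5, -3.0])   # hypothetical coefficients
shrunk = soft_threshold(coeffs, lam=0.3)
# -> [0., 0., 1.2, -2.7]: sub-threshold entries vanish, large ones shrink.
```

Priors with p < 1 threshold more aggressively (higher MTF, per the Results) but make the subproblem nonconvex, which is one intuition for why the abstract finds the best noise-resolution balance near p = 0.8 to 1.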

8.
The objective Bayesian approach relies on the construction of prior distributions that reflect ignorance. When topologies are considered equally probable a priori, clades cannot be. Shifting justifications have been offered for the use of uniform topological priors in Bayesian inference. These include: (i) topological priors do not inappropriately influence Bayesian inference when they are uniform; (ii) although clade priors are not uniform, their undesirable influence is negated by the likelihood function, even when data sets are small; and (iii) the influence of nonuniform clade priors is an appropriate reflection of knowledge. The first two justifications have been addressed previously: the first is false, and the second was found to be questionable. The third and most recent justification is inconsistent with the first two, and with the objective Bayesian philosophy itself. Thus, there has been no coherent justification for the use of nonflat clade priors in Bayesian phylogenetics. We discuss several solutions: (i) Bayesian inference can be abandoned in favour of other methods of phylogenetic inference; (ii) the objective Bayesian philosophy can be abandoned in favour of a subjective interpretation; (iii) the topology with the greatest posterior probability, which is also the tree of greatest marginal likelihood, can be accepted as optimal, with clade support estimated using other means; or (iv) a Bayes factor, which accounts for differences in priors among competing hypotheses, can be used to assess the weight of evidence in support of clades.
© The Willi Hennig Society 2009

9.
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production in southern China, with a huge distribution area. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocks. In this study, allometric equations were used to analyze tree biomass of Chinese fir. The common approach to estimating allometric models is the classical one, based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability into Chinese fir biomass models, suggesting that model parameters are better represented by probability distributions than by the fixed values of the classical method. To address this, a Bayesian method was used to estimate the Chinese fir biomass model. Within the Bayesian framework, two kinds of priors were considered: non-informative priors and informative priors. For the informative priors, 32 biomass equations of Chinese fir were collected from the published literature, and the parameter distributions from that literature were used as prior distributions in the Bayesian model. The Bayesian method with informative priors outperformed both the non-informative priors and the classical method, providing a reasonable approach to estimating Chinese fir biomass.

10.
Wolfinger RD, Kass RE. Biometrics 2000, 56(3): 768-774
We consider the usual normal linear mixed model for variance components from a Bayesian viewpoint. With conjugate priors and balanced data, Gibbs sampling is easy to implement; however, simulating from full conditionals can become difficult for the analysis of unbalanced data with possibly nonconjugate priors, thus leading one to consider alternative Markov chain Monte Carlo schemes. We propose and investigate a method for posterior simulation based on an independence chain. The method is customized to exploit the structure of the variance component model, and it works with arbitrary prior distributions. As a default reference prior, we use a version of Jeffreys' prior based on the integrated (restricted) likelihood. We demonstrate the ease of application and flexibility of this approach in familiar settings involving both balanced and unbalanced data.

11.
We propose INvariance of Noise (INN) space as a novel method for source localization of magnetoencephalography (MEG) data. The method is based on the fact that modulations of source strengths across time change the energy in signal subspace but leave the noise subspace invariant. We compare INN with classical MUSIC, RAP-MUSIC, and beamformer approaches using simulated data while varying signal-to-noise ratios as well as distance and temporal correlation between two sources. We also demonstrate the utility of INN with actual auditory evoked MEG responses in eight subjects. In all cases, INN performed well, especially when the sources were closely spaced, highly correlated, or one source was considerably stronger than the other.
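The invariance the abstract relies on can be seen in a toy sensor-covariance model: scaling the source strengths changes the energy in the span of the lead fields but not in its orthogonal complement. The lead-field matrix and noise level below are hypothetical, and this is only an illustration of the subspace fact, not the INN algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 8, 2
A = rng.standard_normal((n_sensors, n_sources))   # hypothetical lead fields
sigma2 = 0.1                                      # white-noise power

def covariance(strengths):
    """Sensor covariance: signal confined to span(A) plus white noise."""
    return A @ np.diag(strengths) @ A.T + sigma2 * np.eye(n_sensors)

def noise_subspace_energy(C):
    """Mean energy of C restricted to the orthogonal complement of span(A)."""
    q, _ = np.linalg.qr(A, mode="complete")       # full orthonormal basis
    noise_basis = q[:, n_sources:]                # spans the noise subspace
    restricted = noise_basis.T @ C @ noise_basis
    return np.trace(restricted) / (n_sensors - n_sources)

e_weak = noise_subspace_energy(covariance([1.0, 1.0]))
e_strong = noise_subspace_energy(covariance([10.0, 5.0]))
# Both equal sigma2: source-strength modulation leaves the noise
# subspace energy invariant, which is what INN exploits.
```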

12.

Background

Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized.

Methods

EEG data were generated by simulating multiple cortical sources (2–4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated.
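The Precision and Recall measures defined above reduce to overlap counts once simulated and reconstructed activity are expressed as boolean masks. A minimal sketch (the mask representation is an assumption; the paper works on cortical source maps):

```python
import numpy as np

def precision_recall(simulated, reconstructed):
    """Precision and recall for source maps given as boolean masks.

    Precision: fraction of reconstructed activity that overlaps a
    simulated source. Recall: fraction of simulated activity that is
    recovered by the reconstruction.
    """
    simulated = np.asarray(simulated, dtype=bool)
    reconstructed = np.asarray(reconstructed, dtype=bool)
    tp = np.sum(simulated & reconstructed)        # true-positive overlap
    precision = tp / max(np.sum(reconstructed), 1)
    recall = tp / max(np.sum(simulated), 1)
    return precision, recall

# Hypothetical 6-voxel example: 3 simulated, 3 reconstructed, 2 overlap.
sim = [1, 1, 0, 0, 1, 0]
rec = [1, 0, 1, 0, 1, 0]
p, r = precision_recall(sim, rec)   # p = 2/3, r = 2/3
```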

Results

sLORETA performs best when only one source is present; when two or more sources are present, LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source; however, LORETA 1.5 continues to outperform the other algorithms. If only the strongest source is of interest, sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.

13.
Simulated data were used to investigate the influence of the choice of priors on estimation of genetic parameters in multivariate threshold models using Gibbs sampling. We simulated additive values, residuals and fixed effects for one continuous trait and liabilities of four binary traits, and QTL effects for one of the liabilities. Within each of four replicates, six different datasets were generated which resembled different practical scenarios in horses with respect to number and distribution of animals with trait records and availability of QTL information. (Co)variance components were estimated using a Bayesian threshold animal model via Gibbs sampling. The Gibbs sampler was implemented with both a flat and a proper prior for the genetic covariance matrix. Convergence problems were encountered in >50% of the flat prior analyses, with indications of potential or near posterior impropriety between about rounds 10 000 and 100 000. Terminations due to a non-positive definite genetic covariance matrix occurred in flat prior analyses of the smallest datasets. Use of a proper prior resulted in improved mixing and convergence of the Gibbs chain. In order to avoid (near) impropriety of posteriors and extremely poorly mixing Gibbs chains, a proper prior should be used for the genetic covariance matrix when implementing the Gibbs sampler.

14.
This paper develops Bayesian sample size formulae for experiments comparing two groups, where relevant preexperimental information from multiple sources can be incorporated in a robust prior to support both the design and analysis. We use commensurate predictive priors for borrowing of information and further place Gamma mixture priors on the precisions to account for preliminary belief about the pairwise (in)commensurability between parameters that underpin the historical and new experiments. Averaged over the probability space of the new experimental data, appropriate sample sizes are found according to criteria that control certain aspects of the posterior distribution, such as the coverage probability or length of a defined density region. Our Bayesian methodology can be applied to circumstances that compare two normal means, proportions, or event times. When nuisance parameters (such as variance) in the new experiment are unknown, a prior distribution can further be specified based on preexperimental data. Exact solutions are available based on most of the criteria considered for Bayesian sample size determination, while a search procedure is described in cases for which there are no closed-form expressions. We illustrate the application of our sample size formulae in the design of clinical trials, where pretrial information is available to be leveraged. Hypothetical data examples, motivated by a rare-disease trial with an elicited expert prior opinion, and a comprehensive performance evaluation of the proposed methodology are presented.

15.
We used tensor-based morphometry (TBM) to: 1) map gray matter (GM) volume changes associated with motor learning in young healthy individuals; 2) evaluate if GM changes persist three months after cessation of motor training; and 3) assess whether the use of different schemes of motor training during the learning phase could lead to volume modifications of specific GM structures. From 31 healthy subjects, motor functional assessment and a brain 3D T1-weighted sequence were obtained: before motor training (time 0), at the end of training, after two weeks (time 1), and three months later (time 2). Fifteen subjects (group A) were trained with goal-directed motor sequences, and 16 (group B) with non-purposeful motor actions of the right hand. At time 1 vs. time 0, the whole sample of subjects had GM volume increases in regions of the temporo-occipital lobes, inferior parietal lobule (IPL) and middle frontal gyrus, while at time 2 vs. time 1, an increased GM volume in the middle temporal gyrus was seen. At time 1 vs. time 0, compared to group B, group A had a GM volume increase of the hippocampi, while the opposite comparison showed greater GM volume increase in the IPL and insula in group B vs. group A. Motor learning results in structural GM changes of different brain areas which are part of specific neuronal networks and tend to persist after training is stopped. The scheme applied during the learning phase influences the pattern of such structural changes.

16.
Cook AR, Gibson GJ, Gilligan CA. Biometrics 2008, 64(3): 860-868
This article describes a method for choosing observation times for stochastic processes to maximise the expected information about their parameters. Two commonly used models for epidemiological processes are considered: a simple death process and a susceptible-infected (SI) epidemic process with dual sources for infection spreading within and from outwith the population. The search for the optimal design uses Bayesian computational methods to explore the joint parameter-data-design space, combined with a method known as moment closure to approximate the likelihood to make the acceptance step efficient. For the processes considered, a small number of optimally chosen observations are shown to yield almost as much information as much more intensively observed schemes that are commonly used in epidemiological experiments. Analysis of the simple death process allows a comparison between the full Bayesian approach and locally optimal designs around a point estimate from the prior based on asymptotic results. The robustness of the approach to misspecified priors is demonstrated for the SI epidemic process, for which the computational intractability of the likelihood precludes locally optimal designs. We show that optimal designs derived by the Bayesian approach are similar for observational studies of a single epidemic and for studies involving replicated epidemics in independent subpopulations. Different optima result, however, when the objective is to maximise the gain in information based on informative and non-informative priors: this has implications when an experiment is designed to convince a naïve or sceptical observer rather than consolidate the belief of an informed observer. Some extensions to the methods, including the selection of information criteria and extension to other epidemic processes with transition probabilities, are briefly addressed.

17.
Random effects selection in linear mixed models
Chen Z, Dunson DB. Biometrics 2003, 59(4): 762-769
We address the important practical problem of how to select the random effects component in a linear mixed model. A hierarchical Bayesian model is used to identify any random effect with zero variance. The proposed approach reparameterizes the mixed model so that functions of the covariance parameters of the random effects distribution are incorporated as regression coefficients on standard normal latent variables. We allow random effects to effectively drop out of the model by choosing mixture priors with point mass at zero for the random effects variances. Due to the reparameterization, the model enjoys a conditionally linear structure that facilitates the use of normal conjugate priors. We demonstrate that posterior computation can proceed via a simple and efficient Markov chain Monte Carlo algorithm. The methods are illustrated using simulated data and real data from a study relating prenatal exposure to polychlorinated biphenyls and psychomotor development of children.
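The key prior ingredient — a mixture with point mass at zero for a random-effect scale — can be sketched by drawing from it directly. This only illustrates the point-mass mixture idea; the paper's actual construction works through latent-variable reparameterization and conjugate updates, and the mixture weight and slab scale below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_sd_prior(pi0=0.5, slab_scale=1.0, size=10_000):
    """Draws from a mixture prior on a random-effect standard deviation:
    point mass at zero with probability pi0, plus a half-normal 'slab'
    with scale slab_scale otherwise."""
    at_zero = rng.random(size) < pi0
    sd = np.abs(rng.normal(0.0, slab_scale, size))
    sd[at_zero] = 0.0
    return sd

draws = draw_sd_prior()
frac_zero = np.mean(draws == 0.0)
# Roughly pi0 of the prior mass sits exactly at zero, which is what
# lets a random effect drop out of the model entirely.
```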

18.
We propose a novel Bayesian approach that robustifies genomic modeling by leveraging expert knowledge (EK) through prior distributions. The central component is the hierarchical decomposition of phenotypic variation into additive and nonadditive genetic variation, which leads to an intuitive model parameterization that can be visualized as a tree. The edges of the tree represent ratios of variances, for example broad-sense heritability, which are quantities for which EK is natural to exist. Penalized complexity priors are defined for all edges of the tree in a bottom-up procedure that respects the model structure and incorporates EK through all levels. We investigate models with different sources of variation and compare the performance of different priors implementing varying amounts of EK in the context of plant breeding. A simulation study shows that the proposed priors implementing EK improve the robustness of genomic modeling and the selection of the genetically best individuals in a breeding program. We observe this improvement in both variety selection on genetic values and parent selection on additive values; the variety selection benefited the most. In a real case study, EK increases phenotype prediction accuracy for cases in which the standard maximum likelihood approach did not find optimal estimates for the variance components. Finally, we discuss the importance of EK priors for genomic modeling and breeding, and point to future research areas of easy-to-use and parsimonious priors in genomic modeling.

19.
In this paper, we report genome size (GS) values for nine cockroaches (order Blattodea, families Blattidae, Blaberidae and Ectobiidae, ex Blattelidae), three of which are original additions to the ten already present in the GS database: the death’s head roach (Blaberus craniifer), the Surinam cockroach (Pycnoscelus surinamensis) and the Madeira cockroach (Leucophaea maderae). Regarding the American cockroach (Periplaneta americana), the GS database contains two contrasting values (2.72 vs 3.41 pg); the 2.72 pg value is likely the correct one, as it is strikingly similar to our sperm DNA content evaluation (2.80 ± 0.11 pg). We also suggest halving the published GS of the Argentine cockroach Blaptica dubia and the spotted (gray) cockroach Nauphoeta cinerea, discussing i) the occurrence of a correlation between increasing 2n chromosome number and GS within the order Blattodea; and ii) the possible occurrence of a polyploidization phenomenon doubling a basic GS of 0.58 pg in some termite families (superfamily Blattoidea, epifamily Termitoidae).
Key words: genome size, C-DNA content, cockroaches, Blattodea

20.
As systems biology approaches to virology have become more tractable, highly studied viruses such as HIV can now be analyzed in new unbiased ways, including spatial proteomics. We employed here a differential centrifugation protocol to fractionate Jurkat T cells for proteomic analysis by mass spectrometry; these cells contain inducible HIV-1 genomes, enabling us to look for changes in the spatial proteome induced by viral gene expression. Using these proteomics data, we evaluated the merits of several reported machine learning pipelines for classification of the spatial proteome and identification of protein translocations. From these analyses, we found that classifier performance in this system was organelle dependent, with Bayesian t-augmented Gaussian mixture modeling outperforming support vector machine learning for mitochondrial and endoplasmic reticulum proteins but underperforming on cytosolic, nuclear, and plasma membrane proteins by QSep analysis. We also observed generally higher performance for protein translocation identification using a Bayesian model, Bayesian analysis of differential localization experiments, on row-normalized data. Comparative Bayesian analysis of differential localization experiments in cells induced to express the WT viral genome versus cells induced to express a genome unable to express the accessory protein Nef identified known Nef-dependent interactors such as T-cell receptor signaling components and the coatomer complex. Finally, we found that support vector machine classification showed higher consistency and was less sensitive to HIV-dependent noise. These findings illustrate important considerations for studies of the spatial proteome following viral infection or viral gene expression and provide a reference for future studies of HIV gene-dropout viruses.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号