Similar Articles
20 similar articles found (search time: 15 ms)
1.
Humans have been shown to combine noisy sensory information with previous experience (priors), in qualitative and sometimes quantitative agreement with the statistically optimal predictions of Bayesian integration. However, when the prior distribution becomes more complex than a simple Gaussian, such as skewed or bimodal, training takes much longer and performance appears suboptimal. It is unclear whether such suboptimality arises from an imprecise internal representation of the complex prior, or from additional constraints in performing probabilistic computations on complex distributions, even when accurately represented. Here we probe the sources of suboptimality in probabilistic inference using a novel estimation task in which subjects are exposed to an explicitly provided distribution, thereby removing the need to remember the prior. Subjects had to estimate the location of a target given a noisy cue and a visual representation of the prior probability density over locations, which changed on each trial. Different classes of priors were examined (Gaussian, unimodal, bimodal). Subjects' performance was in qualitative agreement with the predictions of Bayesian Decision Theory, although generally suboptimal. The degree of suboptimality was modulated by statistical features of the priors but was largely independent of the class of the prior and the level of noise in the cue, suggesting that suboptimality in dealing with complex statistical features, such as bimodality, may be due to a problem of acquiring the priors rather than of computing with them. We performed a factorial model comparison across a large set of Bayesian observer models to identify additional sources of noise and suboptimality. Our analysis rejects several models of stochastic behavior, including probability matching and sample-averaging strategies. Instead, we show that subjects' response variability was mainly driven by a combination of noisy estimation of the parameters of the priors and variability in the decision process, which we represent as a noisy or stochastic posterior.
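As a hedged illustration (not the authors' fitted observer models), the ideal-observer benchmark for this task can be sketched with a grid-based posterior: multiply the explicitly shown prior by a Gaussian likelihood centered on the cue and report the posterior mean, which is optimal under squared-error loss. The grid range, bimodal-prior parameters, and noise level below are arbitrary choices made for the sketch.

```python
import numpy as np

def bayes_estimate(x_grid, prior, cue, sigma):
    """Posterior-mean estimate of target location given a noisy cue.

    x_grid : candidate target locations
    prior  : prior probability over x_grid (need not be Gaussian)
    cue    : observed noisy cue
    sigma  : standard deviation of the cue noise
    """
    likelihood = np.exp(-0.5 * ((x_grid - cue) / sigma) ** 2)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return np.sum(x_grid * posterior)  # optimal under squared-error loss

x = np.linspace(-10.0, 10.0, 2001)
# Illustrative bimodal prior: equal mixture of two unit-width Gaussians
prior = (np.exp(-0.5 * ((x + 3.0) / 1.0) ** 2) +
         np.exp(-0.5 * ((x - 3.0) / 1.0) ** 2))
prior /= prior.sum()

est = bayes_estimate(x, prior, cue=2.0, sigma=2.0)
```

With a cue between the two modes, the estimate is pulled toward the nearer mode rather than toward the cue itself, which is the qualitative signature of Bayesian integration the task probes.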

2.
Suppose we are interested in the effect of a treatment in a clinical trial. The efficiency of inference may be limited due to small sample size. However, external control data are often available from historical studies. Motivated by an application to Helicobacter pylori infection, we show how to borrow strength from such data to improve efficiency of inference in the clinical trial. Under an exchangeability assumption about the potential outcome mean, we show that the semiparametric efficiency bound for estimating the average treatment effect can be reduced by incorporating both the clinical trial data and external controls. We then derive a doubly robust and locally efficient estimator. The improvement in efficiency is especially prominent when the external control data set has a large sample size and small variability. Our method allows for a relaxed overlap assumption, and we illustrate with the case where the clinical trial only contains a treated group. We also develop doubly robust and locally efficient approaches that extrapolate the causal effect in the clinical trial to the external population and the overall population. Our results also offer a meaningful implication for trial design and data collection. We evaluate the finite-sample performance of the proposed estimators via simulation. In the Helicobacter pylori infection application, our approach shows that the combination treatment has potential efficacy advantages over the triple therapy.
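A minimal numerical sketch (not the paper's semiparametric estimator) of why borrowing exchangeable external controls helps: pooling the two control groups by inverse-variance weighting always yields a control-mean estimate with smaller variance than the trial controls alone, and the gain grows with the size and stability of the external data. All sample sizes and distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small trial arms plus a larger external control group with an
# exchangeable control mean (true treatment effect = 0.5 here)
trial_ctrl = rng.normal(loc=0.0, scale=1.0, size=30)
external_ctrl = rng.normal(loc=0.0, scale=1.0, size=300)
trial_trt = rng.normal(loc=0.5, scale=1.0, size=30)

def inv_var_pool(a, b):
    """Inverse-variance-weighted pooled mean and its estimated variance."""
    wa = len(a) / a.var(ddof=1)
    wb = len(b) / b.var(ddof=1)
    return (wa * a.mean() + wb * b.mean()) / (wa + wb), 1.0 / (wa + wb)

pooled_mean, pooled_var = inv_var_pool(trial_ctrl, external_ctrl)
trial_only_var = trial_ctrl.var(ddof=1) / len(trial_ctrl)

ate_trial_only = trial_trt.mean() - trial_ctrl.mean()   # trial data alone
ate_borrowed = trial_trt.mean() - pooled_mean           # borrowing strength
```

The pooled control variance is strictly smaller than the trial-only variance whenever any external data are used; exchangeability is exactly the assumption that makes this gain a free lunch rather than a source of bias.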

3.
Sensory stimulation can systematically bias the perceived passage of time, but why and how this happens is mysterious. In this report, we provide evidence that such biases may ultimately derive from an innate and adaptive use of stochastically evolving dynamic stimuli to help refine estimates derived from internal timekeeping mechanisms. A simplified statistical model based on probabilistic expectations of stimulus change derived from the second-order temporal statistics of the natural environment makes three predictions. First, random noise-like stimuli whose statistics violate natural expectations should induce timing bias. Second, a previously unexplored obverse of this effect is that similar noise stimuli with natural statistics should reduce the variability of timing estimates. Finally, this reduction in variability should scale with the interval being timed, so as to preserve the overall Weber law of interval timing. All three predictions are borne out experimentally. Thus, in the context of our novel theoretical framework, these results suggest that observers routinely rely on sensory input to augment their sense of the passage of time, through a process of Bayesian inference based on expectations of change in the natural environment.

4.
Williams LJ, Dunlop JP, Abdi H. PLoS One. 2012;7(5):e36161.
As we age, our differences in cognitive skills become more visible, an effect especially true for memory and problem-solving skills (i.e., fluid intelligence). However, by contrast with fluid intelligence, few studies have examined variability in measures that rely on one's world knowledge (i.e., crystallized intelligence). The current study investigated whether age increased the variability in text-based global inference generation, a measure of crystallized intelligence. Global inference generation requires the integration of textual information and world knowledge and can be expressed as a gist or lesson. Variability in generating two global inferences for a single text was examined in young-old (62 to 69 years), middle-old (70 to 76 years) and old-old (77 to 94 years) adults. The older two groups showed greater variability, with the middle-old group being the most variable. These findings suggest that variability may be a characteristic of both fluid and crystallized intelligence in aging.

5.
The selective forces acting on a protein-coding gene are commonly inferred using evolutionary codon models by contrasting the rate of nonsynonymous substitutions to the rate of synonymous substitutions. These models usually assume that the synonymous substitution rate, Ks, is homogeneous across all sites, which is justified if synonymous sites are free from selection. However, a growing body of evidence indicates that the DNA and RNA levels of protein-coding genes are subject to varying degrees of selective constraints due to various biological functions encoded at these levels. In this paper, we develop evolutionary models that account for these layers of selection by allowing for both among-site variability of substitution rates at the DNA/RNA level (which leads to Ks variability among protein-coding sites) and among-site variability of substitution rates at the protein level (Ka variability). These models are constructed so that positive selection is either allowed or not. This enables statistical testing of positive selection when variability at the DNA/RNA substitution rate is accounted for. Using this methodology, we show that variability of the baseline DNA/RNA substitution rate is a widespread phenomenon in coding sequence data of mammalian genomes, most likely reflecting varying degrees of selection at the DNA and RNA levels. Additionally, we use simulations to examine the impact that accounting for the variability of the baseline DNA/RNA substitution rate has on the inference of positive selection. Our results show that ignoring this variability results in a high rate of erroneous positive-selection inference. Our newly developed model, which accounts for this variability, does not suffer from this problem and hence provides a likelihood framework for the inference of positive selection on a background of variability in the baseline DNA/RNA substitution rate.

6.
Heterogeneity in mammalian cell signaling responses is largely a result of pre-existing cell-to-cell variability. It is unknown whether cell-to-cell variability arises from biochemical stochastic fluctuations or from distinct cellular states. Here, we utilize the calcium response to adenosine triphosphate (ATP) as a model for investigating the structure of heterogeneity within a population of cells and analyze whether distinct cellular response states coexist. We use a functional definition of cellular state that is based on a mechanistic dynamical systems model of calcium signaling. Using Bayesian parameter inference, we obtain high-confidence parameter value distributions for several hundred cells, each fitted individually. Clustering the inferred parameter distributions revealed three major distinct cellular states within the population. The existence of distinct cellular states raises the possibility that the observed variability in response is a result of structured heterogeneity between cells. The inferred parameter distribution predicts, and experiments confirm, that variability in the IP3R response explains the majority of calcium heterogeneity. Our work shows how mechanistic models and single-cell parameter fitting can uncover hidden population structure, and it demonstrates the need for parameter inference at the single-cell level.

7.
An interpretation for the ROC curve and inference using GLM procedures
Pepe MS. Biometrics. 2000;56(2):352-359.
The accuracy of a medical diagnostic test is often summarized in a receiver operating characteristic (ROC) curve. This paper puts forth an interpretation for each point on the ROC curve as being a conditional probability of a test result from a random diseased subject exceeding that from a random nondiseased subject. This interpretation gives rise to new methods for making inference about ROC curves. It is shown that inference can be achieved with binary regression techniques applied to indicator variables constructed from pairs of test results, one component of the pair being from a diseased subject and the other from a nondiseased subject. Within the generalized linear model (GLM) binary regression framework, ROC curves can be estimated, and we highlight a new semiparametric estimator. Covariate effects can also be evaluated with the GLM models. The methodology is applied to a pancreatic cancer dataset where we use the regression framework to compare two different serum biomarkers. Asymptotic distribution theory is developed to facilitate inference and to provide insight into factors influencing variability of estimated model parameters.
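The pairwise-indicator construction can be sketched as follows: each indicator records whether a diseased subject's test result exceeds a nondiseased subject's, and the mean of these indicators is the empirical AUC, i.e., the probability that a random diseased result exceeds a random nondiseased one. In Pepe's GLM framework these indicators become the binary outcomes of a regression (with covariates); the biomarker distributions below are simulated for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
diseased = rng.normal(1.0, 1.0, size=50)     # biomarker values, diseased subjects
nondiseased = rng.normal(0.0, 1.0, size=60)  # biomarker values, controls

# Pairwise indicators U[i, j] = 1{Y_i^D > Y_j^Dbar}, one per
# (diseased, nondiseased) pair of test results
U = (diseased[:, None] > nondiseased[None, :]).astype(float)

# Their mean is the empirical AUC = P(diseased result > nondiseased result)
auc = U.mean()
```

Under these simulated binormal distributions the true AUC is Phi(1/sqrt(2)), roughly 0.76; regressing the entries of `U` on functions of the false-positive rate and on covariates is what turns this interpretation into the paper's inference machinery.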

8.
We consider inference for data from a clinical trial of treatments for metastatic prostate cancer. Patients joined the trial with diverse prior treatment histories. The resulting heterogeneous patient population gives rise to challenging statistical inference problems when trying to predict time to progression on different treatment arms. Inference is further complicated by the need to include a longitudinal marker as a covariate. To address these challenges, we develop a semiparametric model for joint inference of longitudinal data and an event time. The proposed approach includes the possibility of cure for some patients. The event time distribution is based on a nonparametric Pólya tree prior. For the longitudinal data we assume a mixed effects model. Incorporating a regression on covariates in a nonparametric event time model in general, and for a Pólya tree model in particular, is a challenging problem. We exploit the fact that the covariate itself is a random variable. We achieve an implementation of the desired regression by factoring the joint model for the event time and the longitudinal outcome into a marginal model for the event time and a regression of the longitudinal outcomes on the event time, i.e., we implicitly model the desired regression by modeling the reverse conditional distribution.

9.
In the linear model for cross-over trials, with fixed subject effects and normal i.i.d. random errors, the residual variability corresponds to the intraindividual variability. While population variances are in general unknown, an estimate can be derived that follows a gamma distribution, where the scale parameter is based on the true unknown variability. This gamma distribution is often used for sample size calculation in trial planning with the precision approach, where the aim is to achieve in the next trial a predefined precision with a given probability. That calculation, however, does not take into account the imprecision of the estimated residual variability or, from a Bayesian perspective, the uncertainty about the unknown variability. Here, we present the predictive distribution for the residual variability, and we investigate a link to the F distribution. The consequence is that in the precision approach more subjects will be necessary than with the conventional calculation. For values of the intraindividual variability that are typical of human pharmacokinetics, that is, a gCV of 17–36%, approximately one sixth more subjects would be needed.
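A hedged Monte Carlo sketch of the consequence described above, using the stated F link: if s² is the observed residual variance with d degrees of freedom, the ratio of a future trial's variance estimate to s² follows an F distribution, so requiring the future confidence-interval half-width to meet a precision target with a given assurance demands more subjects than the plug-in calculation. The numbers (s², target half-width, assurance) are illustrative, and the half-width formula t·s·sqrt(2/n) assumes a simple two-period crossover contrast; the inflation here is larger than the paper's "one sixth" because the assumed d is small.

```python
import numpy as np
from scipy import stats

def n_plugin(s2_obs, delta, n_max=1000):
    """Conventional calculation: treat the observed variance as the truth."""
    for n in range(4, n_max):
        if stats.t.ppf(0.975, n - 2) * np.sqrt(s2_obs * 2.0 / n) <= delta:
            return n
    return None

def n_predictive(s2_obs, d_obs, delta, assurance=0.80,
                 n_max=1000, n_sims=20000, seed=0):
    """Precision approach with variance uncertainty: the future variance
    estimate satisfies s2_new / s2_obs ~ F(n - 2, d_obs), so we pick the
    smallest n whose half-width meets delta with the required assurance."""
    rng = np.random.default_rng(seed)
    for n in range(4, n_max):
        d_new = n - 2
        f_draws = stats.f.rvs(d_new, d_obs, size=n_sims, random_state=rng)
        half_width = stats.t.ppf(0.975, d_new) * np.sqrt(s2_obs * f_draws * 2.0 / n)
        if np.mean(half_width <= delta) >= assurance:
            return n
    return None

n_plug = n_plugin(s2_obs=0.04, delta=0.1)
n_pred = n_predictive(s2_obs=0.04, d_obs=10, delta=0.1)
```

Accounting for the uncertainty always pushes the required sample size upward, and the penalty shrinks as the degrees of freedom behind the variance estimate grow.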

10.
Solutions to challenging inference problems are often subject to a fundamental trade-off between: 1) bias (being systematically wrong) that is minimized with complex inference strategies, and 2) variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to forms of inference based on suboptimal strategies. We examined inference problems applied to rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that varied in form and complexity. In general, subjects using more complex strategies tended to have lower bias and variance, but with a dependence on the form of strategy that reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but higher bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples, but lower, near-normative bias. Our results help define new principles that govern individual differences in behavior that depends on rare-event inference and, more generally, illuminate information-processing trade-offs that can be sensitive not just to the complexity, but also to the optimality, of the inference process.

11.
In most circumstances, probability sampling is the only way to ensure unbiased inference about population quantities where a complete census is not possible. As we enter the era of 'big data', however, nonprobability samples, whose sampling mechanisms are unknown, are undergoing a renaissance. We explain why the use of nonprobability samples can lead to spurious conclusions, and why seemingly large nonprobability samples can be (effectively) very small. We also review some recent controversies surrounding the use of nonprobability samples in biodiversity monitoring. These points notwithstanding, we argue that nonprobability samples can be useful, provided that their limitations are assessed, mitigated where possible and clearly communicated. Ecologists can learn much from other disciplines on each of these fronts.
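A small simulation sketch of the "effectively very small" point: when the inclusion probability depends on the trait being measured, a nonprobability sample stays biased no matter how large it grows, while a far smaller simple random sample is unbiased. The population, selection mechanism, and sizes below are invented for illustration (e.g., conspicuous individuals being reported more often in opportunistic biodiversity records).

```python
import numpy as np

rng = np.random.default_rng(2)
population = rng.normal(50.0, 10.0, size=1_000_000)  # trait with true mean 50

# Nonprobability sample: inclusion probability increases with the trait
# value itself, an unknown-to-the-analyst selection mechanism
incl = 1.0 / (1.0 + np.exp(-(population - 50.0) / 5.0))
nonprob = population[rng.random(population.size) < 0.10 * incl]

# Probability sample: a modest simple random sample from the same population
srs = rng.choice(population, size=500, replace=False)

bias_nonprob = nonprob.mean() - 50.0   # persists despite ~50,000 records
bias_srs = srs.mean() - 50.0           # ~0, sampling error only
```

The nonprobability sample is roughly a hundred times larger yet gives the worse answer; its "effective" size for estimating the mean is tiny because the selection defect does not average away.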

12.
Valid surrogate endpoints S can be used as a substitute for a true outcome of interest T to measure treatment efficacy in a clinical trial. We propose a causal inference approach to validate a surrogate by incorporating longitudinal measurements of the true outcomes using a mixed modeling approach, and we define models and quantities for validation that may vary across the study period using principal surrogacy criteria. We consider a surrogate-dependent treatment efficacy curve that allows us to validate the surrogate at different time points. We extend these methods to accommodate a delayed-start treatment design where all patients eventually receive the treatment. Not all parameters are identified in the general setting. We apply a Bayesian approach for estimation and inference, utilizing more informative prior distributions for selected parameters. We consider the sensitivity of these prior assumptions as well as assumptions of independence among certain counterfactual quantities conditional on pretreatment covariates to improve identifiability. We examine the frequentist properties (bias of point and variance estimates, credible interval coverage) of a Bayesian imputation method. Our work is motivated by a clinical trial of a gene therapy where the functional outcomes are measured repeatedly throughout the trial.

13.
Genome-scale constraint-based models of several organisms have now been constructed and are being used for model driven research. A key issue that may arise in the use of such models is the existence of alternate optimal solutions wherein the same maximal objective (e.g., growth rate) can be achieved through different flux distributions. Herein, we investigate the effects that alternate optimal solutions may have on the predicted range of flux values calculated using currently practiced linear (LP) and quadratic programming (QP) methods. An efficient LP-based strategy is described to calculate the range of flux variability that can be present in order to achieve optimal as well as suboptimal objective states. Sample results are provided for growth predictions of E. coli using glucose, acetate, and lactate as carbon substrates. These results demonstrate the extent of flux variability to be highly dependent on environmental conditions and network composition. In addition we examined the impact of alternate optima for growth under gene knockout conditions as calculated using QP-based methods. It was observed that calculations using QP-based methods can show significant variation in growth rate if the flux variability among alternate optima is high. The underlying biological significance and general source of such flux variability is further investigated through the identification of redundancies in the network (equivalent reaction sets) that lead to alternate solutions. Collectively, these results illustrate the variability inherent in metabolic flux distributions and the possible implications of this heterogeneity for constraint-based modeling approaches. These methods also provide an efficient and robust method to calculate the range of flux distributions that can be derived from quantitative fermentation data.
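The LP-based flux variability calculation can be sketched on a toy network with an obvious redundancy (two equivalent routes from metabolite A to B): first maximize the objective flux, then minimize and maximize each reaction flux while constraining the objective to hold at (a fraction of) its optimum. The network and bounds are invented for the sketch; real applications use a genome-scale stoichiometric matrix.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 uptake -> A; v2 and v3 are equivalent routes A -> B
# (an "equivalent reaction set"); v4 exports B and is the objective.
S = np.array([[1.0, -1.0, -1.0,  0.0],   # metabolite A balance
              [0.0,  1.0,  1.0, -1.0]])  # metabolite B balance
bounds = [(0, 10), (0, 10), (0, 10), (0, 10)]
obj = np.array([0.0, 0.0, 0.0, -1.0])    # linprog minimizes, so negate v4

# Step 1: find the maximal objective value (FBA step)
opt = linprog(obj, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v_max = -opt.fun

# Step 2: FVA subproblems -- min and max each flux while requiring
# the objective to stay at frac * v_max (obj @ v <= -frac * v_max)
def fva(frac=1.0):
    A_ub, b_ub = obj[None, :], np.array([-frac * v_max])
    ranges = []
    for j in range(S.shape[1]):
        e = np.zeros(S.shape[1]); e[j] = 1.0
        lo = linprog(e, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=np.zeros(2),
                     bounds=bounds, method="highs").fun
        hi = -linprog(-e, A_ub=A_ub, b_ub=b_ub, A_eq=S, b_eq=np.zeros(2),
                      bounds=bounds, method="highs").fun
        ranges.append((lo, hi))
    return ranges

ranges = fva(1.0)
```

Even at the strict optimum, the parallel reactions v2 and v3 each range over [0, 10]: the same maximal export is achieved by infinitely many internal flux splits, which is exactly the alternate-optima phenomenon the abstract analyzes.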

14.
In genomic research, phenotype transformations are commonly used as a straightforward way to reach normality of the model outcome. Many researchers still believe them to be necessary for proper inference. Using regression simulations, we show that phenotype transformations are typically not needed and, when applied to phenotypes with heteroscedasticity, result in inflated Type I error rates. We further explain that what must be addressed is the combination of rare-variant genotypes and heteroscedasticity. Incorrectly estimated parameter variability or an incorrect choice of the distribution of the underlying test statistic leads to spurious detection of associations. We conclude that it is the combination of heteroscedasticity, minor allele frequency, sample size, and, to a much lesser extent, the error distribution that matters for proper statistical inference.

15.
Biophysical Journal. 2022;121(10):1919-1930.
Despite major environmental and genetic differences, microbial metabolic networks are known to generate consistent physiological outcomes across vastly different organisms. This remarkable robustness suggests that, at least in bacteria, metabolic activity may be guided by universal principles. The constrained optimization of evolutionarily motivated objective functions, such as the growth rate, has emerged as the key theoretical assumption for the study of bacterial metabolism. While conceptually and practically useful in many situations, the idea that certain functions are optimized is hard to validate in data. Moreover, it is not always clear how optimality can be reconciled with the high degree of single-cell variability observed in experiments within microbial populations. To shed light on these issues, we develop an inverse modeling framework that connects the fitness of a population of cells (represented by the mean single-cell growth rate) to the underlying metabolic variability through the maximum entropy inference of the distribution of metabolic phenotypes from data. While no clear objective function emerges, we find that, as the medium gets richer, the fitness and inferred variability for Escherichia coli populations follow and slowly approach the theoretically optimal bound defined by minimal reduction of variability at given fitness. These results suggest that bacterial metabolism may be crucially shaped by a population-level trade-off between growth and heterogeneity.
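A hedged sketch of the maximum-entropy step: over a discretized space of phenotypes with a uniform reference measure (a simplifying assumption; the paper works with actual metabolic flux spaces), the maximum-entropy distribution consistent with an observed mean growth rate is an exponential tilt, with the "inverse temperature" beta solved numerically to match the data. The growth-rate range and target mean below are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

# Discretized space of metabolic phenotypes, each labeled by its growth rate
growth = np.linspace(0.0, 1.0, 501)   # feasible single-cell growth rates (1/h)

def maxent_dist(beta):
    """Maximum-entropy distribution at inverse temperature beta: uniform over
    the feasible space at beta = 0, concentrated on maximal growth as beta
    grows (the max is subtracted inside exp for numerical stability)."""
    w = np.exp(beta * (growth - growth.max()))
    return w / w.sum()

def fit_beta(mean_target):
    """Choose beta so the distribution reproduces the observed mean growth."""
    return brentq(lambda b: maxent_dist(b) @ growth - mean_target, 0.0, 1e4)

beta = fit_beta(mean_target=0.8)      # e.g., population mean growth of 0.8/h
p = maxent_dist(beta)
mean_growth = p @ growth
variability = np.sqrt(p @ (growth - mean_growth) ** 2)
```

Sweeping the target mean toward the maximal growth rate drives beta upward and the inferred variability downward, tracing out exactly the fitness-versus-heterogeneity trade-off curve the abstract describes.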

16.
Yin G, Shen Y. Biometrics. 2005;61(2):362-369.
Clinical trial designs involving correlated data often arise in biomedical research. The intracluster correlation needs to be taken into account to ensure the validity of sample size and power calculations. In contrast to the fixed-sample designs, we propose a flexible trial design with adaptive monitoring and inference procedures. The total sample size is not predetermined, but adaptively re-estimated using observed data via a systematic mechanism. The final inference is based on a weighted average of the block-wise test statistics using generalized estimating equations, where the weight for each block depends on cumulated data from the ongoing trial. When there are no significant treatment effects, the devised stopping rule allows for early termination of the trial and acceptance of the null hypothesis. The proposed design updates information regarding both the effect size and within-cluster correlation based on the cumulated data in order to achieve a desired power. Estimation of the parameter of interest and its confidence interval are proposed. We conduct simulation studies to examine the operating characteristics and illustrate the proposed method with an example.

17.
Doubly robust estimation in missing data and causal inference models
Bang H, Robins JM. Biometrics. 2005;61(4):962-973.
The goal of this article is to construct doubly robust (DR) estimators in ignorable missing data and causal inference models. In a missing data model, an estimator is DR if it remains consistent when either (but not necessarily both) a model for the missingness mechanism or a model for the distribution of the complete data is correctly specified. Because with observational data one can never be sure that either a missingness model or a complete data model is correct, perhaps the best that can be hoped for is to find a DR estimator. DR estimators, in contrast to standard likelihood-based or (nonaugmented) inverse probability-weighted estimators, give the analyst two chances, instead of only one, to make a valid inference. In a causal inference model, an estimator is DR if it remains consistent when either a model for the treatment assignment mechanism or a model for the distribution of the counterfactual data is correctly specified. Because with observational data one can never be sure that a model for the treatment assignment mechanism or a model for the counterfactual data is correct, inference based on DR estimators should improve upon previous approaches. Indeed, we present the results of simulation studies which demonstrate that the finite sample performance of DR estimators is as impressive as theory would predict. The proposed method is applied to a cardiovascular clinical trial.
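The doubly robust construction can be sketched in its standard augmented inverse-probability-weighted (AIPW) form (a generic illustration, not the article's specific application): with a correctly specified propensity model, the estimator stays consistent even when the outcome models are deliberately crude, which is the "two chances" property in action. The data-generating process below is invented, with a true treatment effect of 2.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
X = rng.normal(size=n)                    # confounder
p = 1.0 / (1.0 + np.exp(-X))              # true propensity P(A=1 | X), known here
A = rng.binomial(1, p)
Y = 2.0 * A + X + rng.normal(size=n)      # true average treatment effect = 2

# Deliberately crude (misspecified) outcome models: intercept-only fits
m1 = Y[A == 1].mean()
m0 = Y[A == 0].mean()

# AIPW / doubly robust estimator: the inverse-probability-weighted residual
# terms correct the bias of the outcome models because p is correct
aipw = (np.mean(A * (Y - m1) / p + m1)
        - np.mean((1 - A) * (Y - m0) / (1 - p) + m0))

naive = m1 - m0   # biased: ignores confounding by X entirely
```

The naive difference in means overstates the effect because treated subjects have systematically larger X, while the AIPW estimate lands near the true value of 2 despite using the same bad outcome models; symmetrically, AIPW also survives a wrong propensity model if the outcome models are right.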

18.
In this article we propose a semiparametric mixed-effects model, based on an exploratory analysis of clinical trial data, for studying the relation between virologic responses, immunologic markers such as CD4+ and CD8 counts, and host-specific factors in AIDS clinical trials. The regression spline technique, used for inference about the parameters in the model, reduces the unknown nonparametric components to parametric functions. The procedures are simple and straightforward to implement with readily available software, and parameter inference can be developed from standard parametric models. We apply the model and the proposed method to an AIDS clinical study. Our findings indicate that viral load level is positively related to baseline viral load level, negatively related to CD4+ cell counts, and unrelated to both CD8 cell counts and patient age.

19.
Causal inference methods, mainly path analysis and structural equation modeling, offer plant physiologists information about cause-and-effect relationships among plant traits. Recently, an unusual approach to causal inference through stepwise variable selection has been proposed and used in various works on plant physiology. This approach should not be considered correct from a biological point of view. Here, it is explained why stepwise variable selection should not be used for causal inference, and it is shown what misleading conclusions can be drawn from such an analysis when one aims to interpret cause-and-effect relationships among plant traits.

20.
We have previously tried to explain perceptual inference and learning under a free-energy principle that pursues Helmholtz’s agenda to understand the brain in terms of energy minimization. It is fairly easy to show that making inferences about the causes of sensory data can be cast as the minimization of a free-energy bound on the likelihood of sensory inputs, given an internal model of how they were caused. In this article, we consider what would happen if the data themselves were sampled to minimize this bound. It transpires that the ensuing active sampling or inference is mandated by ergodic arguments based on the very existence of adaptive agents. Furthermore, it accounts for many aspects of motor behavior, from retinal stabilization to goal-seeking. In particular, it suggests that motor control can be understood as fulfilling prior expectations about proprioceptive sensations. This formulation can explain why adaptive behavior emerges in biological agents and suggests a simple alternative to optimal control theory. We illustrate these points using simulations of oculomotor control and then apply the same principles to cued and goal-directed movements. In short, the free-energy formulation may provide an alternative perspective on motor control that places it in an intimate relationship with perception.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | ICP registration 京ICP备09084417号