Similar Documents
20 similar documents found (search time: 15 ms)
1.
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models.
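A common way to make the cost variance explicit is a mean-variance objective. The sketch below is not the authors' actual controller; the `theta` weighting and the two Gaussian toy strategies are illustrative assumptions. It shows how a risk-averse weighting can reverse a risk-neutral preference between two movement strategies:

```python
import numpy as np

def risk_sensitive_cost(costs, theta):
    """Mean-variance approximation of a risk-sensitive objective.

    theta > 0 penalizes cost variance (risk-averse), theta < 0 rewards it
    (risk-seeking), and theta = 0 recovers the risk-neutral expected cost.
    `costs` are sampled movement costs for one candidate strategy.
    """
    costs = np.asarray(costs, dtype=float)
    return costs.mean() + theta * costs.var()

rng = np.random.default_rng(0)
safe = rng.normal(10.0, 1.0, 10_000)    # moderate mean, low variability
risky = rng.normal(9.5, 5.0, 10_000)    # slightly lower mean, high variability

# A risk-neutral controller prefers the risky strategy (lower mean cost)...
assert risk_sensitive_cost(risky, 0.0) < risk_sensitive_cost(safe, 0.0)
# ...while a sufficiently risk-averse one prefers the safe strategy.
assert risk_sensitive_cost(safe, 0.2) < risk_sensitive_cost(risky, 0.2)
```

The variance term is what makes increased Brownian-motion noise shift a risk-averse controller toward more conservative strategies.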

2.
This article considers the parameter estimation of multi-fiber family models for biaxial mechanical behavior of passive arteries in the presence of measurement errors. First, the uncertainty propagation due to the errors in variables has been carefully characterized using the constitutive model. Then, the parameter estimation of the artery model has been formulated as a nonlinear least squares optimization with an appropriately chosen weight derived from the uncertainty model. The proposed technique is evaluated using multiple sets of synthesized data with fictitious measurement noises. The results of the estimation are compared with those of conventional nonlinear least squares optimization without a proper weight factor. The proposed method significantly improves the quality of parameter estimation as the amplitude of the errors in variables becomes larger. We also investigate model selection criteria to decide the optimal number of fiber families in the multi-fiber family model with respect to the experimental data, balancing variance and bias errors.
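The weighting idea can be sketched with a generic heteroscedastic fit; the one-dimensional model, noise profile, and parameter values below are illustrative stand-ins, not the paper's artery model. Dividing each residual by its measurement-error standard deviation makes `scipy.optimize.least_squares` solve the properly weighted problem:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical one-dimensional stand-in for a stiffening stress response.
def model(params, x):
    a, b = params
    return a * np.expm1(b * x)

def weighted_residuals(params, x, y, sigma):
    # Scaling each residual by sigma gives observation i weight 1/sigma_i^2
    # in the implied sum of squares -- the "appropriately chosen weight".
    return (model(params, x) - y) / sigma

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
sigma = 0.02 + 0.2 * x                    # noise grows with the measurement
y = model((0.5, 3.0), x) + rng.normal(0.0, sigma)

fit = least_squares(weighted_residuals, x0=(1.0, 1.0), args=(x, y, sigma))
a_hat, b_hat = fit.x
assert abs(a_hat - 0.5) < 0.1 and abs(b_hat - 3.0) < 0.2
```

Without the division by `sigma`, the noisy high-stretch points would dominate the fit, which is the failure mode the weighted formulation is designed to avoid.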

3.
Static optimization approaches to estimating muscle tensions rely on the assumption that the muscle activity pattern is in some sense optimal. However, in the case of individuals with a neuromuscular impairment, this assumption is likely not to hold true. We present an approach to muscle tension estimation that does not rely on any optimality assumptions. First, the nature of the impairment is estimated by reformulating the relationship between the muscle tensions and the external forces produced in terms of the deviation from the expected activation in the unimpaired case. This formulation allows the information from several force production tasks to be treated as a single coupled system. In a second step, the identified impairments are used to obtain a novel cost function for the muscle tension estimation task. In a simulation study of the index finger, the proposed method resulted in muscle tension errors with a mean norm of 23.3 ± 26.8% (percentage of the true solution norm), compared to 52.6 ± 24.8% when solving the estimation task using a cost function consisting of the sum of squared muscle stresses. Performance was also examined as a function of the amount of error in the kinematic and muscle Jacobians and found to remain superior to the performance of the squared muscle stress cost function throughout the range examined.

4.
Thach CT, Fisher LD. Biometrics. 2002;58(2):432-438.
In the design of clinical trials, the sample size for the trial is traditionally calculated from estimates of parameters of interest, such as the mean treatment effect, which can often be inaccurate. However, recalculation of the sample size based on an estimate of the parameter of interest that uses accumulating data from the trial can lead to inflation of the overall Type I error rate of the trial. The self-designing method of Fisher, also known as the variance-spending method, allows the use of all accumulating data in a sequential trial (including the estimated treatment effect) in determining the sample size for the next stage of the trial without inflating the Type I error rate. We propose a self-designing group sequential procedure to minimize the expected total cost of a trial. Cost is an important parameter to consider in the statistical design of clinical trials due to limited financial resources. Using Bayesian decision theory on the accumulating data, the design specifies sequentially the optimal sample size and proportion of the test statistic's variance needed for each stage of a trial to minimize the expected cost of the trial. The optimality is with respect to a prior distribution on the parameter of interest. Results are presented for a simple two-stage trial. This method can extend to nonmonetary costs, such as ethical costs or quality-adjusted life years.

5.
In many everyday situations, humans must make precise decisions in the presence of uncertain sensory information. For example, when asked to combine information from multiple sources we often assign greater weight to the more reliable information. It has been proposed that the statistical optimality often observed in human perception and decision-making requires that humans have access to the uncertainty of both their senses and their decisions. However, the mechanisms underlying the processes of uncertainty estimation remain largely unexplored. In this paper we introduce a novel visual tracking experiment that requires subjects to continuously report their evolving perception of the mean and uncertainty of noisy visual cues over time. We show that subjects accumulate sensory information over the course of a trial to form a continuous estimate of the mean, hindered only by natural kinematic constraints (sensorimotor latency etc.). Furthermore, subjects have access to a measure of their continuous objective uncertainty, rapidly acquired from sensory information available within a trial, but limited by natural kinematic constraints and a conservative margin for error. Our results provide the first direct evidence of the continuous mean and uncertainty estimation mechanisms in humans that may underlie optimal decision making.
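The within-trial accumulation described above can be caricatured by an ideal observer that updates a running mean and reports its shrinking standard error. This is a minimal sketch assuming independent cues with known noise standard deviation `sigma`, ignoring the kinematic constraints and safety margin the authors actually found:

```python
import math

class RunningEstimate:
    """Online ideal-observer estimate of a mean and its uncertainty.

    After n independent cues with noise sd `sigma`, the estimate of the
    underlying mean has standard error sigma / sqrt(n), so reported
    uncertainty should shrink as cues accumulate within a trial.
    """
    def __init__(self, sigma):
        self.sigma = sigma
        self.n = 0
        self.mean = 0.0

    def update(self, cue):
        self.n += 1
        self.mean += (cue - self.mean) / self.n   # incremental sample mean
        return self.mean

    def uncertainty(self):
        return self.sigma / math.sqrt(self.n)

est = RunningEstimate(sigma=2.0)
for cue in (1.0, 2.0, 3.0):
    est.update(cue)
assert est.mean == 2.0
assert abs(est.uncertainty() - 2.0 / math.sqrt(3)) < 1e-12
```

The incremental-mean update avoids storing the cue history, which is consistent with (though not evidence for) a continuously updated internal estimate.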

6.
This paper demonstrates methods for the online optimization of assistive robotic devices such as powered prostheses, orthoses and exoskeletons. Our algorithms estimate the value of a physiological objective in real-time (with a body “in-the-loop”) and use this information to identify optimal device parameters. To handle sensor data that are noisy and dynamically delayed, we rely on a combination of dynamic estimation and response surface identification. We evaluated three algorithms (Steady-State Cost Mapping, Instantaneous Cost Mapping, and Instantaneous Cost Gradient Search) with eight healthy human subjects. Steady-State Cost Mapping is an established technique that fits a cubic polynomial to averages of steady-state measures at different parameter settings. The optimal parameter value is determined from the polynomial fit. Using a continuous sweep over a range of parameters and taking into account measurement dynamics, Instantaneous Cost Mapping identifies a cubic polynomial more quickly. Instantaneous Cost Gradient Search uses a similar technique to iteratively approach the optimal parameter value using estimates of the local gradient. To evaluate these methods in a simple and repeatable way, we prescribed step frequency via a metronome and optimized this frequency to minimize metabolic energetic cost. This use of step frequency allows a comparison of our results to established techniques and enables others to replicate our methods. Our results show that all three methods achieve similar accuracy in estimating optimal step frequency. For all methods, the average error between the predicted minima and the subjects’ preferred step frequencies was less than 1% with a standard deviation between 4% and 5%. Using Instantaneous Cost Mapping, we were able to reduce the required walking time per subject from over an hour to less than 10 minutes. While the Instantaneous Cost Gradient Search is not much faster than Steady-State Cost Mapping for a single parameter, it extends favorably to multi-dimensional parameter spaces.
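The core of Steady-State Cost Mapping — fit a cubic polynomial to (parameter, cost) samples and read off its minimizer — can be sketched in a few lines. The step-frequency numbers below are made up for illustration, not taken from the study:

```python
import numpy as np

def fit_cost_minimum(params, costs, n_grid=1001):
    """Fit a cubic polynomial to (parameter, cost) samples and return the
    parameter value minimizing the fit over the sampled range."""
    coeffs = np.polyfit(params, costs, deg=3)
    grid = np.linspace(min(params), max(params), n_grid)
    return grid[np.argmin(np.polyval(coeffs, grid))]

# Hypothetical steady-state metabolic costs at five step frequencies (Hz),
# with a true minimum at 1.8 Hz.
freq = np.array([1.4, 1.6, 1.8, 2.0, 2.2])
cost = 3.0 + 5.0 * (freq - 1.8) ** 2

assert abs(fit_cost_minimum(freq, cost) - 1.8) < 0.01
```

The instantaneous variants replace the averaged steady-state samples with dynamically corrected continuous measurements, but the polynomial-fit-and-minimize step is the same.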

7.
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior.

8.
This paper proposes a new optimization strategy to estimate the nitrifiable nitrogen concentration in wastewater, nitrification rate, denitrification rate and/or COD available for denitrification of an activated sludge process subjected to intermittent aeration. The approach uses oxidation-reduction potential and dissolved oxygen measurements only. The parameter identification is based on a Simplex optimization of a cost function related to the error between an experimental cycle (an aerobic period followed by an anoxic one) and a simulation of a reduced model derived from ASM1. Results show very good prediction of experimental oxygen, ammonium and nitrate profiles. The estimation of nitrifiable nitrogen and removal rates has been validated both on simulated data obtained from the COST action 624 benchmark and on experimental data.
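The Simplex step can be illustrated with `scipy.optimize.minimize(method="Nelder-Mead")` on a toy first-order response; the saturating-exponential model and its parameter values below are illustrative stand-ins, not the reduced ASM1 model:

```python
import numpy as np
from scipy.optimize import minimize

def response(params, t):
    # Toy saturating response, e.g. a concentration approaching steady state.
    rate, tau = params
    return rate * tau * (1.0 - np.exp(-t / tau))

def cost(params, t, y):
    # Sum-of-squares error between the simulated and "experimental" cycle;
    # this is the quantity the derivative-free Simplex search minimizes.
    return np.sum((response(params, t) - y) ** 2)

t = np.linspace(0.0, 10.0, 50)
y = response((2.0, 1.5), t)               # synthetic noise-free data

fit = minimize(cost, x0=(1.0, 1.0), args=(t, y), method="Nelder-Mead")
assert np.allclose(fit.x, (2.0, 1.5), atol=0.05)
```

Nelder-Mead needs no gradients of the simulated model, which is why Simplex-type searches are convenient when the cost is defined through a simulation.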

9.
We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)

10.
An epidemic model for rabies in raccoons is formulated with discrete time and spatial features. The goal is to analyze the strategies for optimal distribution of vaccine baits to minimize the spread of the disease and the cost of implementing the control. Discrete optimal control techniques are used to derive the optimality system, which is then solved numerically to illustrate various scenarios.

11.
Zhou X, Joseph L, Wolfson DB, Bélisle P. Biometrics. 2003;59(4):1082-1088.
Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.
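A minimal version of averaging a design criterion over candidate models can be sketched for polynomial regression; the two designs, the equal model weights, and the use of trace((X'X)^-1) as the A-criterion are illustrative assumptions, not the paper's water-contamination setting:

```python
import numpy as np

def a_criterion(design, degree):
    # A-optimality score for polynomial regression of a given degree:
    # trace of (X'X)^-1, proportional to the summed variance of the
    # least-squares coefficient estimates under unit noise.
    X = np.vander(np.asarray(design, dtype=float), N=degree + 1,
                  increasing=True)
    return np.trace(np.linalg.inv(X.T @ X))

def averaged_a_criterion(design, degrees, weights):
    # Lauter-style criterion: weighted average of the loss over the
    # finite set of candidate models (here, polynomial degrees).
    return sum(w * a_criterion(design, d) for d, w in zip(degrees, weights))

spread = [0.0, 0.25, 0.5, 0.75, 1.0]
clustered = [0.40, 0.45, 0.50, 0.55, 0.60]
degrees, weights = [1, 2], [0.5, 0.5]

# Spreading the design points is far better under either candidate model.
assert averaged_a_criterion(spread, degrees, weights) < \
       averaged_a_criterion(clustered, degrees, weights)
```

The averaged criterion is minimized over candidate designs; the Bayesian generalization in the paper replaces the fixed weights with a decision-theoretic loss.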

12.
Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
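The Pareto-optimality test at the heart of POET-style ranking is simple to state. This is a generic minimization-dominance sketch, not the published implementation:

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    `a` is no worse on every training objective and strictly better on
    at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors; POET-style
    ranking keeps parameter sets near this tradeoff surface between
    competing training objectives."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (objective-1 error, objective-2 error) pairs for 5 models.
errors = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (4.0, 6.0)]
assert pareto_front(errors) == [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0)]
```

Within POETs, the simulated-annealing acceptance step uses a model's Pareto rank rather than a single scalarized error, which is what lets conflicting data sets constrain the ensemble without being averaged away.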

13.
A quantitative model of optimal coordination between hand transport and grip aperture has been derived in our previous studies of reach-to-grasp movements without utilizing explicit knowledge of the optimality criterion or motor plant dynamics. The model’s utility for experimental data analysis has been demonstrated. Here we show how to generalize this model for a broad class of reaching-type, goal-directed movements. The model allows for measuring the variability of motor coordination and studying its dependence on movement phase. The experimentally found characteristics of that dependence imply that execution noise is low and does not affect motor coordination significantly. From those characteristics it is inferred that the cost of neural computations required for information acquisition and processing is included in the criterion of task performance optimality as a function of precision demand for state estimation and decision making. The precision demand is an additional optimized control variable that regulates the amount of neurocomputational resources activated dynamically. It is shown that an optimal control strategy in this case comprises two different phases. During the initial phase, the cost of neural computations is significantly reduced at the expense of reducing the demand for their precision, which results in speed-accuracy tradeoff violation and significant inter-trial variability of motor coordination. During the final phase, neural computations and thus motor coordination are considerably more precise to reduce the cost of errors in making a contact with the target object. The generality of the optimal coordination model and the two-phase control strategy is illustrated on several diverse examples.

14.
HIV/AIDS disease continues to spread alarmingly despite the huge amounts of resources invested in fighting it. There is a need to integrate the series of control measures available to ensure a consistent reduction in the incidence of the disease pending the discovery of its cure. We present a deterministic model for controlling the spread of the disease using change in sexual habits and antiretroviral (ARV) therapy as control measures. We formulate a fixed time optimal control problem subject to the model dynamics with the goal of finding the optimal combination of the two control measures that will minimize the cost of the control efforts as well as the incidence of the disease. We estimate the model state initial conditions and parameter values from the demographic and HIV/AIDS data of South Africa. We use Pontryagin's maximum principle to derive the optimality system and solve the system numerically. Compared with the practice in most resource-limited settings where ARV treatment is given only to patients with full-blown AIDS, our simulation results suggest that starting the treatment as soon as the patients progress to the pre-AIDS stage of the disease coupled with appreciable change in the susceptible individuals' sexual habits reduces both the incidence and prevalence of the disease faster. In fact, the results predict that the implementation of the proposed strategy would drive new cases of the disease towards eradication in 10 years.
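Pontryagin-based optimality systems like this one are typically solved numerically by a forward-backward sweep: integrate the state forward, the costate backward, then update the control from the optimality condition and repeat. The sketch below applies the scheme to a deliberately tiny scalar problem with a known answer, not to the HIV model itself:

```python
import numpy as np

def forward_backward_sweep(T=1.0, n=200, iters=50, relax=0.5):
    """Forward-backward sweep for: minimize int_0^T (x^2 + u^2) dt,
    subject to x' = u, x(0) = 1.  The Hamiltonian H = x^2 + u^2 + lam*u
    gives the optimality condition u = -lam/2 and the costate equation
    lam' = -2x with lam(T) = 0."""
    dt = T / n
    u = np.zeros(n + 1)
    for _ in range(iters):
        x = np.empty(n + 1)
        x[0] = 1.0
        for i in range(n):                      # forward state sweep
            x[i + 1] = x[i] + dt * u[i]
        lam = np.empty(n + 1)
        lam[-1] = 0.0
        for i in range(n, 0, -1):               # backward costate sweep
            lam[i - 1] = lam[i] + dt * 2.0 * x[i]
        # Damped update toward the optimality condition u = -lam/2.
        u = (1.0 - relax) * u + relax * (-lam / 2.0)
    cost = float(np.sum(x**2 + u**2) * dt)      # rectangle-rule cost
    return x, u, cost

x, u, cost = forward_backward_sweep()
# The analytic optimal cost for this toy problem is tanh(1) ~ 0.7616.
assert abs(cost - np.tanh(1.0)) < 0.05
```

The damped control update is a common convergence aid; for models with two controls (as here, sexual-habit change and ARV therapy) the update is applied componentwise with the appropriate control bounds.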

15.
Neural signals are corrupted by noise and this places limits on information processing. We review the processes involved in goal-directed movements and how neural noise and uncertainty determine aspects of our behaviour. First, noise in sensory signals limits perception. We show that, when localizing our hand, the central nervous system (CNS) integrates visual and proprioceptive information, each with different noise properties, in a way that minimizes the uncertainty in the overall estimate. Second, noise in motor commands leads to inaccurate movements. We review an optimal-control framework, known as 'task optimization in the presence of signal-dependent noise', which assumes that movements are planned so as to minimize the deleterious consequences of noise and thereby minimize inaccuracy. Third, during movement, sensory and motor signals have to be integrated to allow estimation of the body's state. Models are presented that show how these signals are optimally combined. Finally, we review how the CNS deals with noise at the neural and network levels. In all of these processes, the CNS carries out the tasks in such a way that the detrimental effects of noise are minimized. This shows that it is important to consider effects at the neural level in order to understand performance at the behavioural level.
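The visual-proprioceptive integration described here is the classic minimum-variance (inverse-variance-weighted) combination of two noisy estimates; a sketch with made-up numbers:

```python
def combine(mu_v, var_v, mu_p, var_p):
    """Minimum-variance combination of a visual estimate (mu_v, var_v)
    and a proprioceptive estimate (mu_p, var_p): each cue is weighted
    inversely to its variance, and the combined variance is smaller
    than either cue's variance alone."""
    w_v = var_p / (var_v + var_p)
    mu = w_v * mu_v + (1.0 - w_v) * mu_p
    var = (var_v * var_p) / (var_v + var_p)
    return mu, var

# Hypothetical hand-position cues (cm): vision is the more reliable cue.
mu, var = combine(10.0, 1.0, 12.0, 4.0)
assert abs(mu - 10.4) < 1e-9      # estimate pulled toward the reliable cue
assert var < min(1.0, 4.0)        # combined estimate is less uncertain
```

The same weighting rule reappears during movement, where predicted and sensed state estimates are fused in proportion to their reliabilities.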

16.
A framework for the online optimization of protein induction using green fluorescent protein (GFP)-monitoring technology was developed for high-cell-density cultivation of Escherichia coli. A simple, unstructured mathematical model that described well the dynamics of cloned chloramphenicol acetyltransferase (CAT) production in E. coli JM105 was developed. A sequential quadratic programming (SQP) optimization algorithm was used to estimate model parameter values and to solve optimal open-loop control problems for piecewise control of inducer feed rates that maximize productivity. The optimal inducer feeding profile for an arabinose induction system was different from that of an isopropyl-beta-D-thiogalactopyranoside (IPTG) induction system. Also, model-based online parameter estimation and online optimization algorithms were developed to determine optimal inducer feeding rates for eventual use of a feedback signal from a GFP fluorescence probe (direct product monitoring with 95-minute time delay). Because the numerical algorithms required minimal processing time, the potential for product-based and model-based online optimal control methodology can be realized.

17.
Parameter estimation for terrestrial ecosystem models based on observational data helps improve the models' simulation and prediction ability and reduces simulation uncertainty. In previous parameter-estimation studies, the random error of net ecosystem exchange (NEE) data measured by the eddy covariance technique has usually been assumed to follow a zero-mean normal distribution. Recent studies, however, have shown that the random error of NEE data is better described by a double-exponential distribution. To explore the differences that the choice of NEE observation-error distribution causes in the parameter estimation of process-based terrestrial ecosystem models and in the simulated carbon fluxes, we took the temperate broad-leaved Korean pine forest at Changbai Mountain as the study area and used the Markov chain Monte Carlo method with NEE data measured in 2003-2005 to estimate the sensitive parameters of the process-based terrestrial ecosystem model CEVSA2, comparing the parameter estimates and simulated carbon fluxes obtained under the two error distributions (normal and double-exponential). The results show that the annual totals of gross primary productivity and ecosystem respiration simulated under the normal observation-error assumption were 61-86 g C m-2 a-1 and 107-116 g C m-2 a-1 higher, respectively, than those simulated under the double-exponential assumption, so that the former's simulated annual NEE totals were 29-47 g C m-2 a-1 lower than the latter's, with a clear underestimation especially during the peak growing season. In parameter-estimation studies, the distribution type of the observation error and the corresponding choice of objective function cannot be ignored; unreasonable settings may strongly affect the parameter estimates and simulation results.
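The practical difference between the two error assumptions is the objective function they imply: Gaussian errors lead to a sum-of-squares objective, while double-exponential (Laplace) errors lead to a sum of absolute errors, which down-weights the heavy-tailed outliers common in eddy-covariance NEE data. This is a generic negative-log-likelihood sketch with a fixed scale parameter, not the CEVSA2 setup:

```python
import numpy as np

def neg_log_likelihood(residuals, scale, error_model="normal"):
    """Objective implied by the assumed observation-error distribution
    (up to additive constants that do not depend on the residuals).
    Gaussian errors give a sum-of-squares objective; Laplace
    (double-exponential) errors give a sum-of-absolute-errors objective."""
    r = np.asarray(residuals, dtype=float)
    if error_model == "normal":
        return float(np.sum(r ** 2) / (2.0 * scale ** 2))
    if error_model == "laplace":
        return float(np.sum(np.abs(r)) / scale)
    raise ValueError(f"unknown error model: {error_model}")

r = np.array([0.1, -0.2, 5.0])    # one heavy-tailed flux outlier
# The outlier dominates the Gaussian objective far more than the Laplace one,
# which is why the assumed error distribution changes the fitted parameters.
assert neg_log_likelihood(r, 1.0, "normal") > neg_log_likelihood(r, 1.0, "laplace")
```

In an MCMC setting, the same functions serve as the log-likelihood term, so the choice of error model reshapes the posterior over model parameters.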


19.

Background

An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error.

Methodology/Principal Findings

Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure and the state mean time matrices. The optimal sampling time intervals can then be defined as those that minimize a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals.

Conclusions/Significance

It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. The optimization requires reliability analysis of the analytical system and risk analysis of the measurement error.

20.

Background

Epidemiological interventions aim to control the spread of infectious disease through various mechanisms, each carrying a different associated cost.

Methodology

We describe a flexible statistical framework for generating optimal epidemiological interventions that are designed to minimize the total expected cost of an emerging epidemic while simultaneously propagating uncertainty regarding the underlying disease model parameters through to the decision process. The strategies produced through this framework are adaptive: vaccination schedules are iteratively adjusted to reflect the anticipated trajectory of the epidemic given the current population state and updated parameter estimates.

Conclusions

Using simulation studies based on a classic influenza outbreak, we demonstrate the advantages of adaptive interventions over non-adaptive ones, in terms of cost and resource efficiency, and robustness to model misspecification.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号