Similar articles
20 similar articles found (search time: 31 ms)
1.
We consider a stage-structured model of a harvested fish population and we are interested in the problem of estimating the unknown stock state for each class. The model used in this work to describe the dynamical evolution of the population is a discrete-time system including a nonlinear recruitment relationship. To estimate the stock state, we build an observer for the considered fish model. This observer is an auxiliary dynamical system that uses the catch data over each time interval and gives a dynamical estimate of the stock state for each stage class. The observer works well even if the recruitment function in the considered model is not well known. The same problem for an age-structured model has been addressed in a previous work (Ngom et al., Math. Biosci. Eng. 5(2):337–354, 2008).

2.
Having previously introduced the mathematical framework of topological metabolic analysis (TMA) - a novel optimization-based technique for modeling metabolic networks of arbitrary size and complexity - we demonstrate how TMA facilitates unique methods of metabolic interrogation. With the aid of several hybridoma metabolic investigations as case studies (Bonarius et al., 1995, 1996, 2001), we first establish that the TMA framework identifies biologically important aspects of the metabolic network under investigation. We also show that the use of a structured weighting approach within our objective provides a substantial modeling benefit over an unstructured, uniform weighting approach. We then illustrate the strength of TMA as an advanced interrogation technique, first by using TMA to prove the existence of (and to quantitatively describe) multiple topologically distinct configurations of a metabolic network that each optimally model a given set of experimental observations. We further show that such alternate topologies are indistinguishable using existing stoichiometric modeling techniques, and we explain the biological significance of the topological variables appearing within our model. By leveraging the manner in which TMA implements metabolite inputs and outputs, we also show that metabolites whose possible metabolic fates are inadequately described by a given network reconstruction can be quickly identified. Lastly, we show how the use of the TMA aggregate objective function (AOF) permits the identification of modeling solutions that can simultaneously consider experimental observations, underlying biological motivations, or even purely engineering- or design-based goals.

3.
Targeted maximum likelihood estimation is a versatile tool for estimating parameters in semiparametric and nonparametric models. We work through an example applying targeted maximum likelihood methodology to estimate the parameter of a marginal structural model. In the case we consider, we show how this can be easily done by clever use of standard statistical software. We point out differences between targeted maximum likelihood estimation and other approaches (including estimating-function-based methods). The application we consider is to estimate the effect of adherence to antiretroviral medications on virologic failure in HIV-positive individuals.

4.
Prediction of protein secondary structure is an important step towards elucidating its three-dimensional structure and its function, and remains a challenging problem in bioinformatics. Segmental semi-Markov models (SSMMs) are among the best-studied methods in this field. However, incorporating evolutionary information into these methods is somewhat difficult. On the other hand, systems of multiple neural networks (NNs) are powerful tools for multi-class pattern classification which can easily take this sort of information into account. To overcome this weakness of SSMMs in prediction, in this work we use an SSMM as a decision function on the outputs of three NNs that use multiple sequence alignment profiles. We consider four types of observations for the outputs of a neural network, so that the profile table related to each sequence is reduced to a sequence of four observations. The proposed SSMM has discriminative power and weights over different dependency models for the outputs of the neural networks. The results show that the accuracy of our model's predictions, particularly for strands, is considerably increased.

5.
Pathway-based feature selection algorithms, which use the biological information contained in pathways to guide which features/genes should be selected, have evolved quickly and become widespread in the field of bioinformatics. Based on how the pathway information is incorporated, we classify pathway-based feature selection algorithms into three major categories: penalty, stepwise forward, and weighting. Compared to the first two categories, the weighting methods have been underutilized even though they are usually the simplest. In this article, we constructed three different gene-connectivity-based weights for each gene and then conducted feature selection on the resulting weighted gene expression profiles. Using both simulations and a real-world application, we demonstrate that when the data-driven connectivity information is constructed from data on the specific disease under study, the resulting weighted gene expression profiles slightly outperform the original expression profiles. In summary, a big challenge faced by the weighting method is how to estimate pathway-knowledge-based weights more accurately and precisely; only when this issue is successfully addressed will wide utilization of the weighting methods become possible.

6.
Identifying a biomarker or treatment-dose threshold that marks a specified level of risk is an important problem, especially in clinical trials. With this goal in view, we consider a covariate-adjusted threshold-based interventional estimand, which equals the binary treatment-specific mean estimand from the causal inference literature obtained by dichotomizing the continuous biomarker or treatment as above or below a threshold. The unadjusted version of this estimand was considered in Donovan et al. Expanding upon Stitelman et al., we show that this estimand, under conditions, identifies the expected outcome of a stochastic intervention that sets the treatment dose of all participants above the threshold. We propose a novel nonparametric efficient estimator for the covariate-adjusted threshold-response function for the case of informative outcome missingness, which utilizes machine learning and targeted minimum-loss estimation (TMLE). We prove the estimator is efficient and characterize its asymptotic distribution and robustness properties. Construction of simultaneous 95% confidence bands for the threshold-specific estimand across a set of thresholds is discussed. In the Supporting Information, we discuss how to adjust our estimator when the biomarker is missing at random, as occurs in clinical trials with biased sampling designs, using inverse probability weighting. Efficiency and bias reduction of the proposed estimator are assessed in simulations. The methods are employed to estimate neutralizing antibody thresholds for virologically confirmed dengue risk in the CYD14 and CYD15 dengue vaccine trials.

7.
In clinical trials of chronic diseases such as acquired immunodeficiency syndrome, cancer, or cardiovascular diseases, the concept of quality-adjusted lifetime (QAL) has received increasing attention. In this paper, we consider the problem of how covariates affect the mean QAL when the data are subject to right censoring. We allow a very general form for the mean model as a function of covariates. Using the idea of inverse probability weighting, we first construct a simple weighted estimating equation for the parameters in our mean model. We then find the form of the most efficient estimating equation, which yields the most efficient estimator for the regression parameters. Since the most efficient estimator depends on the distribution of the health history processes, and thus cannot be estimated nonparametrically, we consider different approaches for improving the efficiency of the simple weighted estimating equation using observed data. The applicability of these methods is demonstrated by both simulation experiments and a data example from a breast cancer clinical trial.
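The inverse-probability-weighting idea in this abstract can be illustrated with a toy estimator of a censored mean: uncensored subjects are up-weighted by the inverse of their estimated probability of remaining uncensored, here obtained from a Kaplan-Meier fit to the censoring distribution. This is a minimal sketch of the general IPW device, not the paper's estimating equation; all names are illustrative.

```python
import numpy as np

def km_censoring_survival(times, delta):
    """Kaplan-Meier estimate of the censoring survival function K(t) = P(C > t).

    times: observed follow-up times; delta: 1 if the event was observed,
    so delta == 0 marks a censoring event for this fit."""
    order = np.argsort(times)
    t, d = times[order], delta[order]
    n = len(t)
    surv = 1.0
    steps = []  # (time, K just after that time)
    for i in range(n):
        at_risk = n - i
        if d[i] == 0:  # a censoring occurrence counts as the "event" here
            surv *= 1.0 - 1.0 / at_risk
        steps.append((t[i], surv))

    def K(u):
        s = 1.0
        for ti, si in steps:
            if ti <= u:
                s = si
            else:
                break
        return s
    return K

def ipw_mean(times, delta, y):
    """IPW estimate of E[Y]: uncensored subjects weighted by 1 / K(T-)."""
    K = km_censoring_survival(times, delta)
    w = np.array([delta[i] / max(K(times[i] - 1e-9), 1e-12)
                  for i in range(len(times))])
    return np.sum(w * y) / np.sum(w)
```

With no censoring every weight is 1 and the estimator reduces to the sample mean, which is a useful sanity check.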

8.
Peterson DR, Zhao H, Eapen S. Biometrics. 2003;59(4):984–991.
We consider the general problem of smoothing correlated data to estimate the nonparametric mean function when a random, but bounded, number of measurements is available for each independent subject. We propose a simple extension to the local polynomial regression smoother that retains the asymptotic properties of the working independence estimator, while typically reducing both the conditional bias and variance for practical sample sizes, as demonstrated by exact calculations for some particular models. We illustrate our method by smoothing longitudinal functional decline data for 100 patients with Huntington's disease. The class of local polynomial kernel-based estimating equations previously considered in the literature is shown to use the global correlation structure in an apparently detrimental way, which explains why some previous attempts to incorporate correlation were found to be asymptotically inferior to the working independence estimator.
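The building block of this abstract, a local polynomial (here, local linear) kernel smoother fit under working independence, can be sketched as follows. The Gaussian kernel and the bandwidth are arbitrary illustrative choices, not those of the paper.

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear regression estimate of E[Y | X = x0].

    Fits a straight line by weighted least squares, with Gaussian kernel
    weights of bandwidth h centered at x0; the fitted intercept is the
    smoothed value at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)      # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted normal equations
    return beta[0]
```

A defining property, handy for testing, is that a local linear smoother reproduces an exactly linear mean function without bias at any bandwidth.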

9.
Heiligenberg (1987) recently proposed a model to explain how the representation of a stimulus variable through an ordered array of broadly tuned receptors could allow a degree of stimulus resolution greatly exceeding the resolution of the individual receptors which make up the array. In his model, this hyperacuity is achieved by connecting the receptors to a higher-level pool interneuron according to a linear synaptic weighting function. We have extended this model to the general case of arbitrary polynomial synaptic weighting functions, and show that the response function of this higher-level interneuron is a polynomial of the same order as the weighting function. We also prove that Hermite polynomials are eigenfunctions of the system. Further, by allowing multiple interneurons in the higher-level pool, each of which is connected to the receptors according to a different orthogonal weighting function, we demonstrate that extended stimulus functions can be represented with enhanced precision, rather than just the value of individual point stimuli. Finally, we suggest a solution to the problem of edge-effect errors arising near the ends of finite receptor arrays.
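The core mechanism, broadly tuned receptors pooled through a synaptic weighting function, can be sketched numerically. The grid spacing, tuning width, and the linear weight function below are illustrative values; the point of the sketch is that the pooled readout resolves stimulus differences far smaller than any single receptor's tuning width.

```python
import numpy as np

# Receptor array: Gaussian tuning curves centered on a regular grid.
centers = np.linspace(-10.0, 10.0, 81)   # preferred stimuli (illustrative)
sigma = 2.0                              # broad tuning width

def receptor_responses(s):
    """Responses of the whole array to a point stimulus s."""
    return np.exp(-0.5 * ((s - centers) / sigma) ** 2)

def pooled_response(s, weight_fn):
    """Higher-level interneuron: receptor outputs times synaptic weights."""
    return np.sum(weight_fn(centers) * receptor_responses(s))

# With a linear synaptic weighting function w(c) = c, the pooled response
# is (to high accuracy, away from the array edges) linear in the stimulus.
def readout(s):
    return pooled_response(s, lambda c: c)
```

Even though each receptor has tuning width 2, the linear readout distinguishes stimuli at 0.05 from stimuli at -0.05, which is the hyperacuity the model describes.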

10.
Hanley JA, Parnes MN. Biometrics. 1983;39(1):129–139.
This paper presents examples of situations in which one wishes to estimate a multivariate distribution from data that may be right-censored. A distinction is made between what we term 'homogeneous' and 'heterogeneous' censoring. It is shown how a multivariate empirical survivor function must be constructed in order to be considered a (nonparametric) maximum likelihood estimate of the underlying survivor function. A closed-form solution, similar to the product-limit estimate of Kaplan and Meier, is possible with homogeneous censoring, but an iterative method, such as the EM algorithm, is required with heterogeneous censoring. An example is given in which an anomaly is produced if censored multivariate data are analyzed as a series of univariate variables; this anomaly is shown to disappear if the methods of this paper are used.
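The product-limit estimate of Kaplan and Meier referenced here has a direct closed form in the univariate case; a minimal sketch (this is the textbook estimator, not the paper's multivariate extension):

```python
import numpy as np

def product_limit(times, event):
    """Kaplan-Meier product-limit estimate of S(t) = P(T > t).

    times: observed times; event: 1 if the failure was observed,
    0 if right-censored. Returns the distinct event times and the
    survival curve just after each."""
    order = np.argsort(times)
    t = np.asarray(times, float)[order]
    d = np.asarray(event, int)[order]
    n = len(t)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < n:
        ti = t[i]
        at_risk = n - i
        deaths = np.sum((t == ti) & (d == 1))
        i += int(np.sum(t == ti))
        if deaths > 0:
            surv *= 1.0 - deaths / at_risk   # product-limit step
            out_t.append(ti)
            out_s.append(surv)
    return np.array(out_t), np.array(out_s)
```

With no censoring this reduces to one minus the empirical distribution function, a handy check.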

11.
Yuan Y, Yin G. Biometrics. 2011;67(4):1543–1554.
In the estimation of a dose-response curve, parametric models are straightforward and efficient but subject to model misspecifications; nonparametric methods are robust but less efficient. As a compromise, we propose a semiparametric approach that combines the advantages of parametric and nonparametric curve estimates. In a mixture form, our estimator takes a weighted average of the parametric and nonparametric curve estimates, in which a higher weight is assigned to the estimate with a better model fit. When the parametric model assumption holds, the semiparametric curve estimate converges to the parametric estimate and thus achieves high efficiency; when the parametric model is misspecified, the semiparametric estimate converges to the nonparametric estimate and remains consistent. We also consider an adaptive weighting scheme to allow the weight to vary according to the local fit of the models. We conduct extensive simulation studies to investigate the performance of the proposed methods and illustrate them with two real examples.
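The mixture idea, a weighted average of a parametric and a nonparametric curve estimate with more weight on the better-fitting one, can be sketched as below. The specific residual-based mixing weight is a simple heuristic of my own for illustration, not the authors' weighting scheme; the parametric family (a straight line) and the kernel smoother are likewise stand-ins.

```python
import numpy as np

def mixture_curve(x, y, h):
    """Weighted average of a parametric (linear) and a nonparametric
    (Nadaraya-Watson kernel) curve estimate at the design points x.

    The mixing weight w leans toward the parametric fit when its residual
    sum of squares is small relative to the nonparametric one."""
    # Parametric piece: straight-line least squares fit.
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    para = A @ coef
    # Nonparametric piece: Nadaraya-Watson smoother with bandwidth h.
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    nonp = (K @ y) / K.sum(axis=1)
    # Fit-based mixing weight (heuristic sketch).
    rss_p = np.sum((y - para) ** 2)
    rss_n = np.sum((y - nonp) ** 2)
    w = rss_n / (rss_p + rss_n)
    return w * para + (1.0 - w) * nonp, w
```

When the data are exactly linear the parametric residuals vanish, the weight goes to one, and the mixture collapses onto the parametric fit, mirroring the convergence behavior the abstract describes.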

12.
1. The normalization of biochemical data to weight them appropriately for parameter estimation is considered, with particular reference to data from tracer kinetics and enzyme kinetics. If the data are in replicate, it is recommended that the sum of squared deviations for each experimental variable at each time or concentration point is divided by the local variance at that point. 2. If there is only one observation for each variable at each sampling point, normalization may still be required if the observations cover more than one order of magnitude, but there is no absolute criterion for judging the effect of the weighting that is produced. The goodness of fit produced by minimizing the weighted sum of squared deviations must be judged subjectively. It is suggested that the goodness of fit may be regarded as satisfactory if the data points are distributed uniformly on either side of the fitted curve. A chi-square test may be used to decide whether the distribution is abnormal. The proportion of the residual variance associated with points on one or other side of the fitted curve may also be taken into account, because this gives an indication of the sensitivity of the residual variance to movement of the curve away from particular data points. These criteria for judging the effect of weighting are only valid if the model equation may reasonably be expected to apply to all the data points. 3. On this basis, normalizing by dividing the deviation for each data point by the experimental observation or by the equivalent value calculated by the model equation can both be shown to produce a consistent bias for numerically small observations, the former biasing the curve towards the smallest observations, the latter tending to produce a curve that lies above the numerically smaller data points. It was found that dividing each deviation by the mean of the observed and calculated values appropriate to it produces a weighting that is fairly free from bias as judged by the criteria mentioned above. This normalization factor was tested on published data from both tracer kinetics and enzyme kinetics.
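The three normalizations compared in this abstract (dividing each deviation by the observed value, by the calculated value, or by their mean) can be written down directly; a minimal sketch:

```python
import numpy as np

def weighted_ssq(observed, calculated, mode="mean"):
    """Weighted sum of squared deviations for data spanning several
    orders of magnitude. mode selects the normalization divisor:
      "observed"   - biases the fit toward the smallest observations
      "calculated" - tends to leave the curve above small observations
      "mean"       - mean of observed and calculated (least biased,
                     per the abstract)"""
    observed = np.asarray(observed, float)
    calculated = np.asarray(calculated, float)
    divisor = {
        "observed": observed,
        "calculated": calculated,
        "mean": 0.5 * (observed + calculated),
    }[mode]
    return np.sum(((observed - calculated) / divisor) ** 2)
```

In a fitting routine this function would be the objective handed to the minimizer, with `calculated` produced by the model equation at the current parameter values.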

13.
Schools of fish and flocks of birds are examples of self-organized animal groups that arise through social interactions among individuals. We numerically study two individual-based models, which recent empirical studies have suggested to explain self-organized group animal behavior: (i) a zone-based model where the group communication topology is determined by finite interacting zones of repulsion, attraction, and orientation among individuals; and (ii) a model where the communication topology is described by Delaunay triangulation, which is defined by each individual's Voronoi neighbors. The models include a tunable parameter that controls an individual's relative weighting of attraction and alignment. We perform computational experiments to investigate how effectively simulated groups transfer information in the form of velocity when an individual is perturbed. A cross-correlation function is used to measure the sensitivity of groups to sudden perturbations in the heading of individual members. The results show how the relative weighting of attraction and alignment, the location of the perturbed individual, population size, and the communication topology affect group structure and response to perturbation. We find that in the Delaunay-based model an individual who is perturbed is capable of triggering a cascade of responses, ultimately leading to the group changing direction. This phenomenon has been seen in self-organized animal groups in both experiments and nature.
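The zone-based communication topology of model (i) can be sketched as a neighbor-classification step that each individual performs before updating its velocity. The three radii below are illustrative values, not those of the cited studies.

```python
import numpy as np

# Illustrative zone radii: repel from very close neighbors, align with
# those at intermediate range, be attracted to more distant ones.
R_REPULSE, R_ORIENT, R_ATTRACT = 1.0, 5.0, 10.0

def classify_neighbors(positions, i):
    """Partition the neighbors of individual i into the three zones.

    positions: (n, 2) array of individual coordinates.
    Returns index arrays for each zone; individuals beyond R_ATTRACT
    do not interact with i at all."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d[i] = np.inf  # exclude self
    return {
        "repulsion": np.where(d < R_REPULSE)[0],
        "orientation": np.where((d >= R_REPULSE) & (d < R_ORIENT))[0],
        "attraction": np.where((d >= R_ORIENT) & (d < R_ATTRACT))[0],
    }
```

In a full simulation, the velocity update would combine a repulsion vector away from the first zone, an average heading over the second, and an attraction vector toward the third, weighted by the model's attraction/alignment parameter.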

14.
The map from genotype to phenotype is an exceedingly complex function of central importance in biology. In this work we derive and analyze a mathematically tractable model of the genotype-phenotype map that allows for any order of gene interaction. By assuming that the alterations of the effect of a gene substitution due to changes in the genetic background can be described as a linear transformation, we show that the genotype-phenotype map is a sum of linear and multilinear terms of operationally defined "reference" effects at each locus. The "multilinear" model is used to study the effect of epistasis on quantitative genetic variation, on the response to selection, and on genetic canalization. It is shown how the model can be used to estimate the strength of "functional" epistasis from a variety of genetic experiments.
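For two loci, the "sum of linear and multilinear terms" described here reduces to the reference effects plus a product term scaled by an epistasis coefficient; a minimal sketch (the symbols y1, y2, eps follow common multilinear-model notation and may differ from the paper's):

```python
def multilinear_phenotype(y1, y2, eps):
    """Two-locus multilinear genotype-phenotype map.

    y1, y2: reference effects of substitutions at loci 1 and 2;
    eps: pairwise epistasis coefficient. With eps = 0 the map is purely
    additive; otherwise the effect of a substitution at locus 1 becomes
    y1 * (1 + eps * y2), i.e. it depends on the genetic background."""
    return y1 + y2 + eps * y1 * y2
```

This background dependence is exactly what lets the model quantify "functional" epistasis: comparing the effect of the same substitution across backgrounds identifies eps.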

15.
In this paper we consider a simple model of an environment in which prey are distributed in patches. It is assumed that each patch contains at most one item, but items may vary in the ease with which they can be found. The time spent in unsuccessful search on a patch gives information of whether a patch contains an item, and if it does, how difficult that item is to find. We show how this information can be used to find the policy which maximizes the mean rate of reward for the environment. The analysis is illustrated by two examples.

16.
17.
When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while the heading estimate can improve during the ongoing motion, thanks to the constant flow of information, estimating how far we move requires integrating sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues, with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
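Reliability-weighted (minimum-variance) cue combination, the benchmark such studies test against, has a standard closed form; a minimal sketch with hypothetical cue means and variances:

```python
def optimal_combination(mu_vis, var_vis, mu_vest, var_vest):
    """Statistically optimal combination of two independent Gaussian cues.

    Each cue is weighted in proportion to its reliability (inverse
    variance); the combined estimate has lower variance than either cue."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var
```

Lowering visual coherence in the experiment corresponds to raising var_vis here, which shifts the combined estimate toward the vestibular cue, exactly the trial-to-trial re-weighting the abstract reports.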

18.
We consider the problem of what is being optimized in human actions with respect to various aspects of human movements and different motor tasks. From the mathematical point of view this problem consists of finding an unknown objective function given the values at which it reaches its minimum. This is called the inverse optimization problem. Until now the main approach to this problem has been the cut-and-try method, which consists of introducing an objective function and checking how well it reflects the experimental data. Using this approach, different objective functions have been proposed for the same motor action. In the current paper we focus on inverse optimization problems with additive objective functions and linear constraints. Such problems are typical in human movement science; the problem of muscle (or finger) force sharing is an example. For such problems we obtain sufficient conditions for uniqueness and propose a method for determining the objective functions. To illustrate our method we analyze the problem of force sharing among the fingers in a grasping task. We estimate the objective function from the experimental data and show that it can predict the force-sharing pattern for a vast range of external forces and torques applied to the grasped object. The resulting objective function is quadratic with essentially non-zero linear terms.
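The forward problem in this setting, a quadratic objective with non-zero linear terms minimized under a single linear constraint, has a closed-form solution via a Lagrange multiplier; a sketch with hypothetical coefficients (the paper's estimated objective would supply c and b):

```python
import numpy as np

def share_forces(c, b, F):
    """Minimize sum_i (c_i * f_i**2 + b_i * f_i) subject to sum_i f_i = F.

    Stationarity gives 2 * c_i * f_i + b_i = lam for every i; solving the
    constraint for the multiplier lam yields the force-sharing pattern."""
    c = np.asarray(c, float)
    b = np.asarray(b, float)
    lam = (F + np.sum(b / (2.0 * c))) / np.sum(1.0 / (2.0 * c))
    return (lam - b) / (2.0 * c)
```

Note the sketch omits non-negativity constraints on the finger forces, which a full treatment of grasping data would need.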

19.
20.
Stocks of commercial fish are often modelled using sampling data of various types, of unknown precision, and from various sources assumed independent. We want each set to contribute to estimates of the parameters in relation to its precision and goodness of fit with the model. Iterative re-weighting of the sets is proposed for linear models, continued until the weight of each set is found to be proportional to (relative weighting) or equal to (absolute weighting) the inverse of the set-specific residual variance resulting from a generalised least squares fit. Formulae for the residual variances are put forward involving fractional allocation of degrees of freedom depending on the numbers of independent observations in each set, the numbers of sets contributing to the estimate of each parameter, and the number of weights estimated. To illustrate the procedure, numbers of the 1984 year-class of North Sea cod (a) landed commercially each year, and (b) caught per unit of trawling time by an annual groundfish survey are modelled as a function of age to estimate total mortality, Z, the relative catching power of the two fishing methods, and the relative precision of the two sets of observations as indices of stock abundance. The survey abundance indices were found to display residual variance about 29 times higher than that of the annual landings.
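The iterative re-weighting scheme can be sketched for the simplest case: several data sets fit by a common straight line, with each set's weight repeatedly updated to the inverse of its current residual variance. This simplified sketch omits the fractional degrees-of-freedom allocation the abstract describes.

```python
import numpy as np

def iterative_reweight(datasets, n_iter=50):
    """Iteratively re-weighted least squares fit of y = a + b*x to several
    data sets of unknown, differing precision.

    datasets: list of (x, y) array pairs. Each set's weight is updated to
    the inverse of its residual variance under the current pooled fit, so
    noisier sets contribute less to the next fit."""
    w = [1.0] * len(datasets)
    for _ in range(n_iter):
        # Pooled weighted least squares over all sets.
        X = np.concatenate([np.column_stack([np.ones_like(x), x])
                            for x, _ in datasets])
        y = np.concatenate([yy for _, yy in datasets])
        ww = np.concatenate([np.full(len(x), wi)
                             for (x, _), wi in zip(datasets, w)])
        WX = X * ww[:, None]
        beta = np.solve_placeholder = np.linalg.solve(X.T @ WX, WX.T @ y)
        # Update each set's weight from its residual variance.
        w = []
        for x, yy in datasets:
            res = yy - (beta[0] + beta[1] * x)
            w.append(1.0 / max(np.mean(res ** 2), 1e-12))
    return beta, w
```

On two sets drawn from the same line, one clean and one noisy, the scheme quickly concentrates weight on the precise set, which is the behavior exploited in the cod example (landings versus survey indices).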


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)