Similar Literature
Found 20 similar records (search time: 31 ms)
1.
Simple discrete-time estimators that allow on-line estimation of the kinetic rates from measurements of component concentrations inside a bioreactor are proposed. The proposed estimators are obtained by a direct forward Euler discretization of continuous-time estimators. The design of the estimators, in continuous as well as in discrete time, does not require or assume any model for the kinetic rates. One of the main characteristics of these estimators is the ease of their calibration. We emphasize the performance of the discrete version of these estimators, whose stability and convergence are proved under the same conditions as in the continuous case, with an additional mild assumption on the sampling time. Simulation and real-life experimental results corresponding to the discrete estimation are given. The accuracy of the obtained estimates, together with the ease of implementation, constitutes a strong argument for their use, in particular in adaptive control schemes.
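As a rough illustration of the idea (not the authors' exact observer), the sketch below assumes a single mass-balance equation dx/dt = phi(t) - D(t)x(t) for a measured concentration x with unknown reaction rate phi, and applies a forward Euler discretization of a model-free continuous-time estimator; the gain structure (g1, g2), the synthetic data, and all names are illustrative assumptions.

```python
import numpy as np

def euler_rate_observer(x_meas, D, T, g1=2.0, g2=1.0):
    """Forward-Euler discretization of a model-free rate observer.

    Assumed mass balance (illustrative): dx/dt = phi(t) - D(t)*x(t).
    The observer estimates phi without assuming any model for its dynamics.
    """
    n = len(x_meas)
    x_hat = np.zeros(n)
    phi_hat = np.zeros(n)
    x_hat[0] = x_meas[0]
    for k in range(n - 1):
        e = x_meas[k] - x_hat[k]                      # observation error
        x_hat[k + 1] = x_hat[k] + T * (phi_hat[k] - D[k] * x_meas[k] + g1 * e)
        phi_hat[k + 1] = phi_hat[k] + T * g2 * e      # integral-type rate update
    return phi_hat

# Synthetic check: a slowly varying "true" rate recovered from noisy measurements.
T, n = 0.1, 400
t = np.arange(n) * T
phi_true = 0.5 + 0.3 * np.sin(0.2 * t)
D = np.full(n, 0.1)
x = np.zeros(n)
for k in range(n - 1):                                # simulate the plant itself
    x[k + 1] = x[k] + T * (phi_true[k] - D[k] * x[k])
x_noisy = x + np.random.normal(0, 0.01, n)
print(euler_rate_observer(x_noisy, D, T)[-5:], phi_true[-5:])
```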

2.
G. Asteris  S. Sarkar 《Genetics》1996,142(1):313-326
Bayesian procedures are developed for estimating mutation rates from fluctuation experiments. Three Bayesian point estimators are compared with four traditional ones using the results of 10,000 simulated experiments. The Bayesian estimators were found to be at least as efficient as the best of the previously known estimators. The best Bayesian estimator is one that uses 1/m² as the prior probability density function and a quadratic loss function. The advantage of using these estimators is most pronounced when the number of fluctuation test tubes is small. Bayesian estimation allows the incorporation of prior knowledge about the estimated parameter, in which case the resulting estimators are the most efficient. It enables the straightforward construction of confidence intervals for the estimated parameter. The increase of efficiency with prior information and the narrowing of the confidence intervals with additional experimental results are investigated. The results of the simulations show that any potential inaccuracy of estimation arising from lumping together all cultures with more than n mutants (the jackpots) almost disappears at n = 70 (provided that the number of mutations in a culture is low). These methods are applied to a set of experimental data to illustrate their use.
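A minimal sketch of this kind of Bayesian point estimation, assuming the Lea-Coulson distribution (computed with the Ma-Sandri-Sarkar recursion) as the likelihood for the mutant counts, a prior density proportional to 1/m², quadratic loss (so the point estimate is the posterior mean), and a simple grid over m; the grid bounds, the example counts, and the jackpot handling are illustrative choices, not the authors' implementation.

```python
import numpy as np

def lea_coulson_pmf(m, n_max):
    """P(# mutants = r) for r = 0..n_max under the Lea-Coulson model,
    computed with the Ma-Sandri-Sarkar recursion."""
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for r in range(1, n_max + 1):
        p[r] = (m / r) * sum(p[i] / (r - i + 1) for i in range(r))
    return p

def posterior_mean_m(counts, m_grid, jackpot=70):
    """Posterior mean of m with prior density proportional to 1/m^2,
    lumping all cultures with more than `jackpot` mutants into one tail class."""
    counts = np.minimum(counts, jackpot)
    log_post = np.full(len(m_grid), -np.inf)
    for j, m in enumerate(m_grid):
        pmf = lea_coulson_pmf(m, jackpot)
        pmf[jackpot] = max(1.0 - pmf[:jackpot].sum(), 1e-300)   # tail probability
        log_post[j] = np.sum(np.log(pmf[counts])) - 2.0 * np.log(m)  # prior 1/m^2
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    return float(np.sum(w * m_grid))

# Illustrative data: mutant counts from a handful of parallel cultures.
counts = np.array([0, 1, 0, 3, 0, 2, 15, 0, 1, 0])
m_grid = np.linspace(0.05, 5.0, 200)
print("posterior mean of m:", posterior_mean_m(counts, m_grid))
```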

3.
Estimating the mutation rate, or equivalently effective population size, is a common task in population genetics. If recombination is low or high, optimal linear estimation methods are known and well understood. For intermediate recombination rates, the calculation of optimal estimators is more challenging. As an alternative to model-based estimation, neural networks and other machine learning tools could help to develop good estimators in these involved scenarios. However, if no benchmark is available it is difficult to assess how well suited these tools are for different applications in population genetics. Here we investigate feedforward neural networks for the estimation of the mutation rate based on the site frequency spectrum and compare their performance with model-based estimators. For this we use the model-based estimators introduced by Fu, Futschik et al., and Watterson that minimize the variance or mean squared error for no and free recombination. We find that neural networks reproduce these estimators if provided with the appropriate features and training sets. Remarkably, using the model-based estimators to adjust the weights of the training data, only one hidden layer is necessary to obtain a single estimator that performs almost as well as model-based estimators for low and high recombination rates, and at the same time provides a superior estimation method for intermediate recombination rates. We apply the method to simulated data based on the human chromosome 2 recombination map, highlighting its robustness in a realistic setting where local recombination rates vary and/or are unknown.
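For reference, the classical linear benchmarks mentioned above can be written down directly from the site frequency spectrum; the sketch below computes Watterson's estimate and shows the generic weighted form, while the optimal weights of the Fu/Futschik-type estimators (and the neural-network training itself) are not reproduced here.

```python
import numpy as np

def watterson_from_sfs(sfs):
    """Watterson's estimator of theta from an unfolded site frequency spectrum.

    sfs[i-1] = number of sites where the derived allele appears in i of n samples,
    for i = 1..n-1; theta_W = S / a_n with a_n = sum_{i=1}^{n-1} 1/i.
    """
    sfs = np.asarray(sfs, dtype=float)
    n = len(sfs) + 1                       # sample size implied by the SFS length
    a_n = np.sum(1.0 / np.arange(1, n))
    return sfs.sum() / a_n

def linear_sfs_estimator(sfs, weights):
    """General linear estimator theta_hat = sum_i c_i * xi_i; the optimal weights
    depend on the recombination rate (e.g. no vs. free recombination) and are
    assumed to be supplied by the user."""
    return float(np.dot(weights, sfs))

# Example: an SFS from a sample of n = 6 haplotypes.
sfs = [12, 5, 3, 2, 1]
print("theta_W =", watterson_from_sfs(sfs))
```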

4.
Multistate models can be successfully used for describing complex event history data, for example, the stages in the disease progression of a patient. The so‐called “illness‐death” model plays a central role in the theory and practice of these models. Many time‐to‐event datasets from medical studies with multiple end points can be reduced to this generic structure. In these models one important goal is the modeling of transition rates, but biomedical researchers are also interested in reporting interpretable results in a simple and summarized manner. These include estimates of predictive probabilities, such as the transition probabilities, occupation probabilities, cumulative incidence functions, and the sojourn time distributions. We give a review of some of the available methods for estimating such quantities in the progressive illness‐death model, conditionally (or not) on covariate measures. For some of these quantities, estimators based on subsampling are employed. Subsampling, also referred to as landmarking, leads to small sample sizes and usually to heavily censored data, yielding estimators with higher variability. To overcome this issue, estimators based on a preliminary estimation (presmoothing) of the censoring probability may be used. Among these, the presmoothed estimators for the cumulative incidences are new. We also introduce feasible estimation methods for the cumulative incidence function conditionally on covariate measures. The proposed methods are illustrated using real data. A comparative simulation study of several estimation approaches is performed, and existing software in the form of R packages is discussed.

5.
Huang J  Harrington D 《Biometrics》2002,58(4):781-791
The Cox proportional hazards model is often used for estimating the association between covariates and a potentially censored failure time, and the corresponding partial likelihood estimators are used for the estimation and prediction of relative risk of failure. However, partial likelihood estimators are unstable and have large variance when collinearity exists among the explanatory variables or when the number of failures is not much greater than the number of covariates of interest. A penalized (log) partial likelihood is proposed to give more accurate relative risk estimators. We show that asymptotically there always exists a penalty parameter for the penalized partial likelihood that reduces mean squared estimation error for log relative risk, and we propose a resampling method to choose the penalty parameter. Simulations and an example show that the bootstrap-selected penalized partial likelihood estimators can, in some instances, have smaller bias than the partial likelihood estimators and have smaller mean squared estimation and prediction errors of log relative risk. These methods are illustrated with a data set in multiple myeloma from the Eastern Cooperative Oncology Group.
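As a hedged illustration of a ridge-type penalty on the log partial likelihood (one common way to stabilize collinear Cox fits), the sketch below uses the lifelines package; the abstract's resampling-based choice of the penalty parameter is not implemented, and the fixed penalizer values, column names, and simulated data are assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
# Two deliberately collinear covariates plus exponential survival times.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)      # near-collinear with x1
time = rng.exponential(scale=np.exp(-0.5 * x1), size=n)
event = rng.uniform(size=n) < 0.8            # roughly 20% censoring
df = pd.DataFrame({"time": time, "event": event.astype(int), "x1": x1, "x2": x2})

# Unpenalized vs. L2-penalized partial likelihood fits.
for pen in (0.0, 0.5):
    cph = CoxPHFitter(penalizer=pen, l1_ratio=0.0)   # ridge penalty when l1_ratio=0
    cph.fit(df, duration_col="time", event_col="event")
    print(f"penalizer={pen}:", cph.params_.round(3).to_dict())
```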

6.
Simple nonlinear observers for the on-line estimation of the specific growth rate from presently attainable real-time measurements are presented. The proposed observers do not assume or require any model for the specific growth rate, and they are very successful in accurately estimating this parameter. Moreover, they are very easy to implement and to calibrate: due to the particular structure of their gain, their tuning reduces to the calibration of a single parameter. Simulation results obtained under different operating conditions are given in order to highlight the performance of the proposed estimators.

7.
We have developed a software package for the estimation of Michaelis-Menten parameters for enzymes that conform to different kinetic mechanisms. Data from different experimental schemes can be fitted, with appropriate weighting factors, to any of 6 mathematical models corresponding to 5 kinetic mechanisms: ordered bi-bi, Theorell-Chance, rapid equilibrium random bi-bi, rapid equilibrium ordered bi-bi and ping-pong bi-bi. The program also performs a significance test to discriminate between different candidate models. To illustrate the performance of the program, real data from kinetic experiments with glucose 6-phosphate from Leuconostoc mesenteroides have been fitted to different mathematical models, and the results are discussed. The program can be easily implemented for the fitting of kinetic data to any other model.
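A minimal sketch of fitting a two-substrate initial-rate law by nonlinear least squares, assuming a ping-pong bi-bi mechanism and unweighted residuals; the package's weighting factors and significance test for model discrimination are not reproduced, and the concentrations, parameter values, and names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def ping_pong_bibi(S, Vmax, Ka, Kb):
    """Initial-rate equation for a ping-pong bi-bi mechanism:
    v = Vmax*A*B / (Kb*A + Ka*B + A*B)."""
    A, B = S
    return Vmax * A * B / (Kb * A + Ka * B + A * B)

# Illustrative initial-rate data over a grid of substrate concentrations.
A = np.repeat([0.1, 0.25, 0.5, 1.0, 2.0], 4)
B = np.tile([0.2, 0.5, 1.0, 2.0], 5)
true = ping_pong_bibi((A, B), Vmax=10.0, Ka=0.4, Kb=0.8)
v = true * (1 + np.random.default_rng(1).normal(0, 0.03, true.size))  # 3% noise

popt, pcov = curve_fit(ping_pong_bibi, (A, B), v, p0=[5.0, 1.0, 1.0])
perr = np.sqrt(np.diag(pcov))                     # asymptotic standard errors
print("Vmax, Ka, Kb =", popt.round(3), "+/-", perr.round(3))
```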

8.
It is not uncommon to encounter a randomized clinical trial (RCT) in which there are confounders that need to be controlled for and patients who do not comply with their assigned treatments. In this paper, we concentrate on interval estimation of the proportion ratio (PR) of response probabilities between two treatments in a stratified noncompliance RCT. We develop and consider five asymptotic interval estimators for the PR: the interval estimator using the weighted least squares (WLS) estimator, the interval estimator using the Mantel-Haenszel type of weight, the interval estimator derived from Fieller's theorem with the corresponding WLS optimal weight, the interval estimator derived from Fieller's theorem with the randomization-based optimal weight, and the interval estimator based on a stratified two-sample proportion test with the optimal weight suggested elsewhere. To evaluate and compare the finite-sample performance of these estimators, we apply Monte Carlo simulation to calculate the coverage probability and average length in a variety of situations. We discuss the limitations and usefulness of each of these interval estimators, and include a general guideline on which estimators may be used in various situations.

9.
We are interested in the estimation of average treatment effects based on right-censored data of an observational study. We focus on causal inference of differences between t-year absolute event risks in a situation with competing risks. We derive doubly robust estimation equations and implement estimators for the nuisance parameters based on working regression models for the outcome, censoring, and treatment distribution conditional on auxiliary baseline covariates. We use the functional delta method to show that these estimators are regular asymptotically linear estimators and estimate their variances based on estimates of their influence functions. In empirical studies, we assess the robustness of the estimators and the coverage of confidence intervals. The methods are further illustrated using data from a Danish registry study.
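As a simplified illustration of the doubly robust idea only, the sketch below implements an augmented IPW estimator of a risk difference for a binary outcome while ignoring censoring and competing risks (which the paper handles through additional working models for the censoring distribution); the models, data, and names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aipw_risk_difference(Y, A, X):
    """Augmented IPW estimate of E[Y(1)] - E[Y(0)] for binary outcome Y,
    binary treatment A, and covariates X (censoring ignored for brevity)."""
    ps = LogisticRegression(max_iter=1000).fit(X, A).predict_proba(X)[:, 1]
    out = LogisticRegression(max_iter=1000).fit(np.column_stack([A, X]), Y)
    m1 = out.predict_proba(np.column_stack([np.ones_like(A), X]))[:, 1]
    m0 = out.predict_proba(np.column_stack([np.zeros_like(A), X]))[:, 1]
    # Doubly robust influence-function form: consistent if either model is right.
    psi = (m1 - m0
           + A * (Y - m1) / ps
           - (1 - A) * (Y - m0) / (1 - ps))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(Y))   # estimate and its SE

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 2))
A = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))             # confounded treatment
Y = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * A + X[:, 0] - 0.5 * X[:, 1]))))
est, se = aipw_risk_difference(Y, A, X)
print(f"risk difference estimate: {est:.3f} (SE {se:.3f})")
```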

10.
In population‐based case‐control studies, it is of great public‐health importance to estimate the disease incidence rates associated with different levels of risk factors. This estimation is complicated by the fact that in such studies the selection probabilities for the cases and controls are unequal. A further complication arises when the subjects who are selected into the study do not participate (i.e. become nonrespondents) and nonrespondents differ systematically from respondents. In this paper, we show how to account for unequal selection probabilities as well as differential nonresponses in the incidence estimation. We use two logistic models, one relating the disease incidence rate to the risk factors, and one modelling the predictors that affect the nonresponse probability. After estimating the regression parameters in the nonresponse model, we estimate the regression parameters in the disease incidence model by a weighted estimating function that weights a respondent's contribution to the likelihood score function by the inverse of the product of his/her selection probability and his/her model‐predicted response probability. The resulting estimators of the regression parameters and the corresponding estimators of the incidence rates are shown to be consistent and asymptotically normal with easily estimated variances. Simulation results demonstrate that the asymptotic approximations are adequate for practical use and that failure to adjust for nonresponses could result in severe biases. An illustration with data from a cardiovascular study that motivated this work is presented.
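A minimal sketch of the weighting scheme described above, under assumed data and models: a logistic nonresponse model is fitted among selected subjects, and each respondent's contribution to the incidence model is weighted by the inverse of the product of the (known) selection probability and the predicted response probability. All variable names and the data-generating mechanism are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
z = rng.normal(size=n)                        # predictor of response behaviour
x = rng.normal(size=n)                        # risk factor of interest
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.7 * x))))      # disease indicator

# Unequal, known selection probabilities (cases oversampled by design).
sel_prob = np.where(y == 1, 0.9, 0.2)
selected = rng.uniform(size=n) < sel_prob
respond = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 + 1.0 * z)))) & selected

# Step 1: model the response probability among selected subjects.
resp_fit = sm.GLM(respond[selected].astype(int), sm.add_constant(z[selected]),
                  family=sm.families.Binomial()).fit()
p_resp = resp_fit.predict(sm.add_constant(z))

# Step 2: fit the disease-incidence model among respondents, weighting each
# contribution by 1 / (selection probability * predicted response probability).
w = 1.0 / (sel_prob * p_resp)
m = respond
inc_fit = sm.GLM(y[m], sm.add_constant(x[m]), family=sm.families.Binomial(),
                 freq_weights=w[m]).fit()
print(inc_fit.params)            # intercept and log odds ratio of the risk factor
```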

11.
Markov models of codon substitution are powerful inferential tools for studying biological processes such as natural selection and preferences in amino acid substitution. The equilibrium character distributions of these models are almost always estimated using nucleotide frequencies observed in a sequence alignment, primarily as a matter of historical convention. In this note, we demonstrate that a popular class of such estimators is biased, and that this bias has an adverse effect on goodness of fit and estimates of substitution rates. We propose a “corrected” empirical estimator that begins with observed nucleotide counts but accounts for the nucleotide composition of stop codons. We show via simulation that the corrected estimates outperform the de facto standard estimates not just by providing better estimates of the frequencies themselves, but also by leading to improved estimation of other parameters in the evolutionary models. On a curated collection of sequence alignments, our estimators show a significant improvement in goodness of fit compared to the standard approach. Maximum likelihood estimation of the frequency parameters appears to be warranted in many cases, albeit at a greater computational cost. Our results demonstrate that there is little justification, either statistical or computational, for continued use of the conventional empirical estimators.

12.
Since Liang and Zeger (1986) proposed the ‘generalized estimating equations’ (GEE) approach for the estimation of regression parameters in models with correlated discrete responses, a lot of work has been devoted to the investigation of the properties of the corresponding GEE estimators. However, the effects of different kinds of covariates have often been overlooked. In this paper it is shown that the use of non-singular block-invariant matrices of covariates, such as a design matrix in an analysis-of-variance model, leads to GEE estimators that are identical regardless of the ‘working’ correlation matrix used. Moreover, they are efficient (McCullagh, 1983). If, on the other hand, only covariates that are invariant within blocks are used, the efficiency gain from choosing the ‘correct’ versus an ‘incorrect’ correlation structure is shown to be negligible. The results of a simple simulation study suggest that although different GEE estimators are not identical and are not as efficient as an ML estimator, the differences are still negligible if both types of invariant covariates are present.
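A small simulation sketch of the point about block-invariant covariates: with a cluster-constant design, the GEE point estimates under independence and exchangeable working correlations coincide up to numerical tolerance. The data-generating model and variable names are illustrative assumptions; statsmodels is used for the GEE fits.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n_blocks, m = 100, 4
block = np.repeat(np.arange(n_blocks), m)
trt = np.repeat(rng.integers(0, 2, n_blocks), m)      # block-invariant covariate
lin = -0.3 + 0.8 * trt
re = np.repeat(rng.normal(0, 0.7, n_blocks), m)       # shared block effect -> correlation
y = rng.binomial(1, 1 / (1 + np.exp(-(lin + re))))

X = sm.add_constant(pd.DataFrame({"trt": trt}))
for name, cs in [("independence", sm.cov_struct.Independence()),
                 ("exchangeable", sm.cov_struct.Exchangeable())]:
    fit = sm.GEE(y, X, groups=block, family=sm.families.Binomial(),
                 cov_struct=cs).fit()
    print(name, fit.params.round(4).to_dict())
```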

13.
This paper explores the use of the ranked set sampling (RSS) protocol as it pertains to the estimation of a population proportion. The maximum likelihood estimator (MLE) and the sample proportion, both based on the RSS data, are discussed and their corresponding asymptotic distributions are derived. Based on these results the MLE is found to be uniformly more efficient than the sample proportion. Moreover, both estimators are more efficient than the simple random sample proportion. The greatest gains in efficiency are obtained at the center of the parameter space. Finally, these results remain valid in the presence of judgment error.
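A simulation sketch of the setting, assuming units are ranked on a latent normal score (optionally with judgement error) and that the binary trait is a threshold of that score; it compares the variance of the RSS sample proportion with simple random sampling at the center of the parameter space. The latent-score construction is an illustrative assumption, not the paper's formulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

def rss_proportion(p, set_size, cycles, rank_noise=0.0):
    """Sample proportion from one ranked-set sample of size set_size*cycles.

    Each unit has a latent N(0,1) score; the binary trait equals 1 when the
    score exceeds the (1-p)-quantile, so P(trait = 1) = p. Ranking is done on
    the score plus optional judgement error."""
    q = norm.ppf(1 - p)
    vals = []
    for _ in range(cycles):
        for i in range(set_size):
            z = rng.standard_normal(set_size)              # one judgement set
            noisy = z + rng.normal(0, rank_noise, set_size)
            chosen = z[np.argsort(noisy)[i]]               # i-th ranked unit measured
            vals.append(float(chosen > q))
    return np.mean(vals)

p, k, c, reps = 0.5, 3, 10, 2000                           # n = k*c per sample
rss = [rss_proportion(p, k, c) for _ in range(reps)]
srs = [rng.binomial(k * c, p) / (k * c) for _ in range(reps)]
print("var RSS:", np.var(rss), " var SRS:", np.var(srs))   # RSS variance is smaller
```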

14.
When the sample size is not large or when the underlying disease is rare, to assure collection of an appropriate number of cases and to control the relative error of estimation, one may employ inverse sampling, in which one continues sampling subjects until one obtains exactly the desired number of cases. This paper focuses on interval estimation of the simple difference between two proportions under independent inverse sampling. Three asymptotic interval estimators are developed on the basis of the maximum likelihood estimator (MLE), the uniformly minimum variance unbiased estimator (UMVUE), and the asymptotic likelihood ratio test (ALRT). To compare the performance of these three estimators, the coverage probability and the expected length of the resulting confidence intervals are calculated on the basis of the exact distribution. We find that when the underlying proportions of cases in the two comparison populations are small or moderate (≤0.20), all three asymptotic interval estimators developed here perform reasonably well even for a pre-determined number of cases as small as 5. When the pre-determined number of cases is moderate or large (≥50), all three estimators are essentially equivalent in all the situations considered here. Because applying the two interval estimators derived from the MLE and the UMVUE does not involve the numerical iteration needed for the ALRT, for simplicity these two estimators may be used without loss of efficiency.
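A sketch of the inverse-sampling setting, assuming a Wald-type interval built from the MLEs p_hat = c/N with the delta-method variance p_hat^2 (1 - p_hat) / c; this illustrates how coverage can be checked by simulation and is not the paper's exact interval formulas (the UMVUE replaces c/N by (c-1)/(N-1)).

```python
import numpy as np

rng = np.random.default_rng(6)

def inverse_sample_size(p, c, size=None):
    """Total sample size N when sampling until exactly c cases are observed:
    N = c + (number of non-cases), which is negative-binomial."""
    return c + rng.negative_binomial(c, p, size=size)

def wald_ci_difference(c1, n1, c2, n2, z=1.96):
    """Wald-type interval for p1 - p2 using the MLEs p_hat = c/N; under inverse
    sampling Var(p_hat) is approximately p_hat^2 * (1 - p_hat) / c."""
    p1, p2 = c1 / n1, c2 / n2
    half = z * np.sqrt(p1**2 * (1 - p1) / c1 + p2**2 * (1 - p2) / c2)
    return (p1 - p2) - half, (p1 - p2) + half

# Coverage check for a small pre-determined number of cases in each group.
p1, p2, c, reps = 0.10, 0.05, 20, 20000
n1 = inverse_sample_size(p1, c, reps)
n2 = inverse_sample_size(p2, c, reps)
lo, hi = wald_ci_difference(c, n1, c, n2)
print("empirical coverage:", np.mean((lo <= p1 - p2) & (p1 - p2 <= hi)))
# The UMVUE of each proportion replaces c/N by (c - 1)/(N - 1).
```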

15.
FRYDMAN  HALINA 《Biometrika》1995,82(4):773-789
The nonparametric estimation of the cumulative transition intensity functions in a three-state time-nonhomogeneous Markov process with irreversible transitions, an ‘illness-death’ model, is considered when times of the intermediate transition, e.g. onset of a disease, are interval-censored. The times of ‘death’ are assumed to be known exactly or to be right-censored. In addition the observed process may be left-truncated. Data of this type arise when the process is sampled periodically. For example, when the patients are monitored through periodic examinations the observations on times of change in their disease status will be interval-censored. Under the sampling scheme considered here the Nelson–Aalen estimator (Aalen, 1978) for a cumulative transition intensity is not applicable. In the proposed method the maximum likelihood estimators of some of the transition intensities are derived from the estimators of the corresponding subdistribution functions. The maximum likelihood estimators are shown to have a self-consistency property. The self-consistency algorithm is developed for the computation of the estimators. This approach generalises the results from Turnbull (1976) and Frydman (1992). The methods are illustrated with diabetes survival data.

16.
The molecular machinery of life relies on complex multistep processes that involve numerous individual transitions, such as molecular association and dissociation steps, chemical reactions, and mechanical movements. The corresponding transition rates can be typically measured in vitro but not in vivo. Here, we develop a general method to deduce the in-vivo rates from their in-vitro values. The method has two basic components. First, we introduce the kinetic distance, a new concept by which we can quantitatively compare the kinetics of a multistep process in different environments. The kinetic distance depends logarithmically on the transition rates and can be interpreted in terms of the underlying free energy barriers. Second, we minimize the kinetic distance between the in-vitro and the in-vivo process, imposing the constraint that the deduced rates reproduce a known global property such as the overall in-vivo speed. In order to demonstrate the predictive power of our method, we apply it to protein synthesis by ribosomes, a key process of gene expression. We describe the latter process by a codon-specific Markov model with three reaction pathways, corresponding to the initial binding of cognate, near-cognate, and non-cognate tRNA, for which we determine all individual transition rates in vitro. We then predict the in-vivo rates by the constrained minimization procedure and validate these rates by three independent sets of in-vivo data, obtained for codon-dependent translation speeds, codon-specific translation dynamics, and missense error frequencies. In all cases, we find good agreement between theory and experiment without adjusting any fit parameter. The deduced in-vivo rates lead to smaller error frequencies than the known in-vitro rates, primarily by an improved initial selection of tRNA. The method introduced here is relatively simple from a computational point of view and can be applied to any biomolecular process, for which we have detailed information about the in-vitro kinetics.
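A heavily simplified sketch of the constrained-minimization step, assuming (i) the kinetic distance is the Euclidean distance between log-transition-rates, consistent with the logarithmic dependence described above, and (ii) the global constraint is the overall speed of a strictly sequential process; the rates, the constraint form, and all names are illustrative assumptions rather than the paper's ribosome model.

```python
import numpy as np
from scipy.optimize import minimize

# In-vitro rates for a hypothetical sequential multistep process (1/s).
k_vitro = np.array([50.0, 5.0, 200.0, 20.0])
v_vivo = 8.0       # assumed known overall in-vivo speed (completions per second)

def kinetic_distance(log_k):
    """Assumed form: Euclidean distance between log-rates (log of rate ratios)."""
    return np.sum((log_k - np.log(k_vitro)) ** 2)

def overall_speed_constraint(log_k):
    """For strictly sequential steps the mean completion time is sum(1/k_i);
    require the implied overall speed to equal the measured in-vivo value."""
    return 1.0 / np.sum(np.exp(-log_k)) - v_vivo

res = minimize(kinetic_distance, x0=np.log(k_vitro), method="SLSQP",
               constraints=[{"type": "eq", "fun": overall_speed_constraint}])
k_vivo = np.exp(res.x)
print("deduced in-vivo rates:", k_vivo.round(2))
print("implied overall speed:", round(1.0 / np.sum(1.0 / k_vivo), 3))
```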

17.
FST and kinship are key parameters often estimated in modern population genetics studies in order to quantitatively characterize structure and relatedness. Kinship matrices have also become a fundamental quantity used in genome-wide association studies and heritability estimation. The most frequently-used estimators of FST and kinship are method-of-moments estimators whose accuracies depend strongly on the existence of simple underlying forms of structure, such as the independent subpopulations model of non-overlapping, independently evolving subpopulations. However, modern data sets have revealed that these simple models of structure likely do not hold in many populations, including humans. In this work, we analyze the behavior of these estimators in the presence of arbitrarily-complex population structures, which results in an improved estimation framework specifically designed for arbitrary population structures. After generalizing the definition of FST to arbitrary population structures and establishing a framework for assessing bias and consistency of genome-wide estimators, we calculate the accuracy of existing FST and kinship estimators under arbitrary population structures, characterizing biases and estimation challenges unobserved under their originally-assumed models of structure. We then present our new approach, which consistently estimates kinship and FST when the minimum kinship value in the dataset is estimated consistently. We illustrate our results using simulated genotypes from an admixture model, constructing a one-dimensional geographic scenario that departs nontrivially from the independent subpopulations model. Our simulations reveal the potential for severe biases in estimates of existing approaches that are overcome by our new framework. This work may significantly improve future analyses that rely on accurate kinship and FST estimates.
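For context, a sketch of the standard method-of-moments kinship estimator of the kind the paper analyzes, in which allele frequencies are estimated from the sample itself; this illustrates the existing approach whose biases are characterized, not the new consistent estimator.

```python
import numpy as np

def kinship_mom(X):
    """Standard ratio-of-moments kinship estimate from a genotype matrix.

    X is (n_individuals, n_loci) with genotypes coded 0/1/2; allele frequencies
    are estimated from the sample itself, which is one source of the bias that
    arises under complex population structure."""
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0) / 2.0                          # sample allele frequencies
    keep = (p > 0) & (p < 1)                          # drop monomorphic loci
    Xc = X[:, keep] - 2.0 * p[keep]                   # centered genotypes
    denom = 4.0 * p[keep] * (1.0 - p[keep])
    return (Xc / denom) @ Xc.T / keep.sum()           # n x n kinship matrix

rng = np.random.default_rng(7)
n, m = 20, 5000
p_true = rng.uniform(0.1, 0.9, m)
X = rng.binomial(2, p_true, size=(n, m))              # unrelated, unstructured sample
K = kinship_mom(X)
print("mean off-diagonal kinship:", K[~np.eye(n, dtype=bool)].mean().round(4))
```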

18.
J O'Quigley 《Biometrics》1992,48(3):853-862
The problem of point and interval estimation following a Phase I trial, carried out according to the scheme outlined by O'Quigley, Pepe, and Fisher (1990, Biometrics 46, 33-48), is investigated. A reparametrization of the model suggested in this earlier work can be seen to be advantageous in some circumstances. Maximum likelihood estimators, Bayesian estimators, and one-step estimators are considered. The continual reassessment method imposes restrictions on the sample space such that it is not possible for confidence intervals to achieve exact coverage properties, however large a sample is taken. Nonetheless, our simulations, based on a small finite sample of 20, not atypical in studies of this type, indicate that the calculated intervals are useful in most practical cases and achieve coverage very close to nominal levels in a very wide range of situations. The relative merits of the different estimators and their associated confidence intervals, viewed from a frequentist perspective, are discussed.

19.
K H Pollock  M C Otto 《Biometrics》1983,39(4):1035-1049
In this paper the problem of finding robust estimators of population size in closed K-sample capture-recapture experiments is considered. Particular attention is paid to models where heterogeneity of capture probabilities is allowed. First, a general estimation procedure is given which does not depend on any assumptions about the form of the distribution of capture probabilities. This is followed by a detailed discussion of the usefulness of the generalized jackknife technique to reduce bias. Numerical comparisons of the bias and variance of various estimators are given. Finally, a general discussion is given with several recommendations on estimators to be used in practice.
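As an illustration of the jackknife idea for heterogeneous capture probabilities, the sketch below computes the first-order generalized jackknife estimate N_hat = S + f1*(k-1)/k from a capture-history matrix (a Burnham-Overton-type estimator); the simulated data and names are assumptions.

```python
import numpy as np

def jackknife_n1(capture_history):
    """First-order jackknife estimate of population size for a closed
    K-sample capture-recapture study (Burnham-Overton type).

    capture_history: (animals_caught, k) 0/1 matrix; each row is one of the
    S distinct animals ever captured."""
    H = np.asarray(capture_history)
    S = H.shape[0]                              # distinct animals caught
    k = H.shape[1]                              # number of sampling occasions
    f1 = np.sum(H.sum(axis=1) == 1)             # animals caught exactly once
    return S + f1 * (k - 1) / k

# Simulated check with heterogeneous capture probabilities.
rng = np.random.default_rng(8)
N, k = 300, 6
p_i = rng.beta(2, 8, N)                         # animal-specific capture probabilities
H_full = rng.uniform(size=(N, k)) < p_i[:, None]
H = H_full[H_full.any(axis=1)]                  # keep animals caught at least once
print("S =", H.shape[0], " N_hat =", round(jackknife_n1(H), 1), " true N =", N)
```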

20.
Analysis of linkage disequilibrium in an island model
Linkage disequilibria for two loci in a finite island model were parameterized. The total linkage disequilibrium was decomposed into three components, gametic, demic, and population, for which corresponding unbiased estimators were established. Other statistics encountered provided measures of differentiation corresponding to the hierarchical structure of the ecological model. Under the assumption of linkage equilibrium, the variances and covariances of these estimators and statistics were formulated in terms of descent measures, functions of gene frequencies, and the numbers of individuals, demes, and populations sampled. The functions of gene frequencies fall into two classes, one representing the differentiation of genes at each locus, and the other representing the association of genes between the loci. For a neutral model with extinction, migration, and linkage, transition equations were derived for the descent measures which also take into account deme size and numbers of demes within the population. With the addition of unequal mutation rates for a finite number of alleles at each locus, the transition equations were solved for the descent measures in the equilibrium state. This permitted the exact numerical evaluation of the effects of the sampling and ecological dimensions and of extinction, migration, and mutation rates in any parameter range. Some numerical results were presented for the effects of linkage, extinction, migration, and sampling on the variances of various measures of linkage disequilibrium and genetic differentiation. Also, some results were compared with the approximate numerical results of Ohta which agreed fairly well in the parameter ranges she considered, but not so well in other ranges.
