Similar Documents
20 similar documents found.
1.
The parameter estimation problem for dynamic system models designed for diagnostic use is often a difficult one, due to severe practical constraints imposed on the clinical procedure. This is particularly true in many endocrine and metabolic system studies, where blood sampling provides the basic data, and the number of samples or the observation interval must be minimized. We have applied some recent developments in optimal sampling schedule design, in two case studies for two different liver function tests, to determine minimum-size optimal schedules for these tests. We have assessed the effects of these schedules on parameter estimation accuracy and reliability, compared with nonoptimal schedules 2–5 times larger in number, using a Monte Carlo simulation approach. The results are encouraging, as they indicate this approach to be a practical and efficient alternative to more conventional ones.
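A minimal sketch (not the paper's code) of this kind of Monte Carlo comparison: noisy realizations of a simple mono-exponential clearance model are generated under a small "optimal" schedule and a larger uniform one, each realization is refit, and the spread of the estimates is compared. The model, noise level, and both schedules are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, k):
    return A * np.exp(-k * t)   # simple mono-exponential tracer clearance

A_true, k_true = 100.0, 0.05
schedules = {
    "optimal (4 samples)": np.array([1.0, 8.0, 25.0, 60.0]),
    "uniform (12 samples)": np.linspace(2.0, 60.0, 12),
}

rng = np.random.default_rng(0)
for name, t in schedules.items():
    k_hats = []
    for _ in range(500):        # Monte Carlo replicates
        y = model(t, A_true, k_true) * (1 + rng.normal(0.0, 0.05, t.size))
        popt, _ = curve_fit(model, t, y, p0=[80.0, 0.1])
        k_hats.append(popt[1])
    print(f"{name}: CV of k estimate = {np.std(k_hats) / np.mean(k_hats):.3f}")
```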

2.
A comparative analysis of thermal blood perfusion measurement techniques
The object of this study was to devise a unified method for comparing different thermal techniques for the estimation of blood perfusion rates and to perform a comparison for several common techniques. The approach used was to develop analytical models for the temperature response for all combinations of five power deposition geometries (spherical, one- and two-dimensional cylindrical, and one- and two-dimensional Gaussian) and three transient heating techniques (temperature pulse-decay, temperature step function, and constant-power heat-up) plus one steady-state heating technique. The transient models were used to determine the range of times (the time window) when a significant portion of the transient temperature response was due to blood perfusion. This time window was defined to begin when the difference between the conduction-only and the conduction-plus-blood flow transient temperature (or power) responses exceeded a specified value, and to end when the conduction-plus-blood flow transient temperature (or power) reached a specified fraction of its steady-state value. The results are summarized in dimensionless plots showing the size of the time windows for each of the transient perfusion estimation techniques. Several conclusions were drawn, in particular: (a) low perfusions are difficult to estimate because of the dominance of conduction, (b) large heated regions are better suited for estimation of low perfusions, (c) noninvasive heating techniques are superior because they have the potential to minimize conduction effects, and (d) none of the transient techniques appears to be clearly superior to the others.

3.
When we apply ecological models in environmental management, we must assess the accuracy of parameter estimation and its impact on model predictions. Parameters estimated by conventional techniques tend to be nonrobust and require excessive computational resources. However, optimization algorithms are highly robust and generally exhibit convergence of parameter estimation by inversion with nonlinear models; they can simultaneously generate a large number of parameter estimates using an entire data set. In this study, we tested four inversion algorithms (simulated annealing, shuffled complex evolution, particle swarm optimization, and the genetic algorithm) to optimize parameters in photosynthetic models with different temperature dependencies. We investigated whether parameter boundary values and control variables influenced the accuracy and efficiency of the various algorithms and models. We obtained optimal solutions with all of the inversion algorithms tested when the parameter bounds and control variables were constrained properly, although the efficiency of processing-time use varied with the control variables chosen. In addition, we investigated whether the formalization of temperature dependence affected the optimal parameter estimation process, and found that the model with a peaked temperature response provided the best fit to the data.
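A minimal sketch of bounded inversion with simulated annealing, one of the four algorithms compared; the peaked temperature-response model and the parameter bounds are illustrative assumptions, not the study's.

```python
import numpy as np
from scipy.optimize import dual_annealing

def model(T, p_opt, t_opt, w):
    # Illustrative peaked temperature response: P(T) = P_opt * exp(-((T - T_opt)/w)^2)
    return p_opt * np.exp(-((T - t_opt) / w) ** 2)

# Synthetic "observations" with noise
rng = np.random.default_rng(0)
T_obs = np.linspace(5, 40, 30)
P_obs = model(T_obs, 12.0, 25.0, 8.0) + rng.normal(0, 0.3, T_obs.size)

def cost(theta):
    # Sum of squared residuals between model and data
    return np.sum((model(T_obs, *theta) - P_obs) ** 2)

# Properly constrained parameter bounds, as the study emphasizes
bounds = [(0, 50), (0, 45), (1, 20)]
result = dual_annealing(cost, bounds, seed=1)
print(result.x)  # recovered (p_opt, t_opt, w)
```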

4.
Analytical solutions were developed based on the Green's function method to describe heat transfer in tissue, including the effects of blood perfusion. These one-dimensional transient solutions were used with a simple parameter estimation technique and experimental measurements of temperature and heat flux at the surface of simulated tissue. It was demonstrated how such surface measurements can be used during step changes in the surface thermal conditions to estimate the values of three important parameters: blood perfusion (w(b)), thermal contact resistance (R"), and core temperature of the tissue (T(core)). The new models were tested against finite-difference solutions of thermal events on the surface to show the validity of the analytical solution. Simulated data were used to demonstrate the response of the model in predicting optimal parameters from noisy temperature and heat flux measurements. Finally, the analytical model and simple parameter estimation routine were used with actual experimental data from perfusion in phantom tissue. The model was shown to provide a very good match with the data curves. This was the first time that all three of these important parameters (w(b), R", and T(core)) had been estimated simultaneously from a single set of thermal measurements at the surface of tissue.

5.
Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experimental validation. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
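A minimal sketch of D-optimal time-point selection, assuming a bi-exponential FLIM-FRET decay and a greedy forward search (the paper's exact procedure may differ); the lifetimes, quenched fraction, and gate times are illustrative.

```python
import numpy as np

def jacobian(t, f=0.4, tau_q=0.5, tau_u=2.5):
    # Decay model: y(t) = f*exp(-t/tau_q) + (1-f)*exp(-t/tau_u)
    # Columns: sensitivities w.r.t. (f, tau_q, tau_u)
    return np.column_stack([
        np.exp(-t / tau_q) - np.exp(-t / tau_u),
        f * t / tau_q**2 * np.exp(-t / tau_q),
        (1 - f) * t / tau_u**2 * np.exp(-t / tau_u),
    ])

gates = np.linspace(0.05, 9.0, 90)   # full 90-point acquisition (ns)
J_all = jacobian(gates)

chosen = [0, 44, 89]                 # seed early/middle/late so J has full column rank
while len(chosen) < 10:              # build the reduced 10-point design
    dets = [np.linalg.det(J_all[chosen + [i]].T @ J_all[chosen + [i]])
            if i not in chosen else -np.inf
            for i in range(len(gates))]
    chosen.append(int(np.argmax(dets)))   # D-optimality: maximize det(J^T J)
print(np.sort(gates[chosen]))
```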

6.
We examine the effects of changing plot size on parameter estimation efficiency in multivariate (community-level) ecological studies, where estimation efficiency is defined in terms relating to the statistical precision of estimates of all variables (e.g. species) in a data set. Three ‘efficiency criteria’ for multivariate estimation are developed, and the relationship between estimation efficiency and plot size examined using three field data sets (deciduous understory, coniferous understory, and mire vegetation) from central Canada. For all three communities, estimation efficiency was found to increase monotonically with increasing plot size. However, relative gains in efficiency at larger plot sizes were offset by substantial increases in sampling effort (enumeration time per plot). Our results indicate that the largest plot size possible, given the constraints of time, should be used for parameter estimation in plant communities. Also, plots that are larger than the mean patch size should be utilized when sampling heterogeneous vegetation.

7.
Taxon sampling and the accuracy of phylogenetic analyses
Appropriate and extensive taxon sampling is one of the most important determinants of accurate phylogenetic estimation. In addition, accuracy of inferences about evolutionary processes obtained from phylogenetic analyses is improved significantly by thorough taxon sampling efforts. Many recent efforts to improve phylogenetic estimates have focused instead on increasing sequence length or the number of overall characters in the analysis, and this often does have a beneficial effect on the accuracy of phylogenetic analyses. However, phylogenetic analyses of few taxa (but each represented by many characters) can be subject to strong systematic biases, which in turn produce high measures of repeatability (such as bootstrap proportions) in support of incorrect or misleading phylogenetic results. Thus, it is important for phylogeneticists to consider both the sampling of taxa, as well as the sampling of characters, in designing phylogenetic studies. Taxon sampling also improves estimates of evolutionary parameters derived from phylogenetic trees, and is thus important for improved applications of phylogenetic analyses. Analysis of sensitivity to taxon inclusion, the possible effects of long-branch attraction, and sensitivity of parameter estimation for model-based methods should be a part of any careful and thorough phylogenetic analysis. Furthermore, recent improvements in phylogenetic algorithms and in computational power have removed many constraints on analyzing large, thoroughly sampled data sets. Thorough taxon sampling is thus one of the most practical ways to improve the accuracy of phylogenetic estimates, as well as the accuracy of biological inferences that are based on these phylogenetic trees.

8.
The estimation of population differentiation with microsatellite markers
Microsatellite markers are routinely used to investigate the genetic structuring of natural populations. The knowledge of how genetic variation is partitioned among populations may have important implications not only in evolutionary biology and ecology, but also in conservation biology. Hence, reliable estimates of population differentiation are crucial to understand the connectivity among populations and represent important tools to develop conservation strategies. The estimation of differentiation is commonly derived from Wright's FST and/or Slatkin's RST, an FST-analogue assuming a stepwise mutation model. Both these statistics have their drawbacks. Furthermore, there is no clear consensus over their relative accuracy. In this review, we first discuss the consequences of different temporal and spatial sampling strategies on differentiation estimation. Then, we move to statistical problems directly associated with the estimation of population structuring itself, with particular emphasis on the effects of high mutation rates and mutation patterns of microsatellite loci. Finally, we discuss the biological interpretation of population structuring estimates.
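A minimal sketch of the logic behind FST-style differentiation estimates, using Nei's GST = (HT - HS)/HT for one locus in two populations; real studies use bias-corrected estimators (e.g. Weir and Cockerham's), and the allele frequencies below are made up for illustration.

```python
import numpy as np

# Allele frequencies: rows = populations, columns = alleles at one locus
p = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3]])

H_S = np.mean(1 - np.sum(p**2, axis=1))   # mean within-population heterozygosity
p_bar = p.mean(axis=0)                    # pooled allele frequencies
H_T = 1 - np.sum(p_bar**2)                # total heterozygosity
G_ST = (H_T - H_S) / H_T
print(f"G_ST = {G_ST:.3f}")
```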

9.
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
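For reference, a sketch of the four-parameter CTMI growth-rate curve as commonly written following Rosso et al. (1993); the cardinal temperature values below are illustrative, not the paper's estimates.

```python
import numpy as np

def ctmi(T, t_min, t_opt, t_max, mu_opt):
    """Cardinal Temperature Model with Inflection: growth rate vs temperature."""
    T = np.asarray(T, dtype=float)
    num = (T - t_max) * (T - t_min) ** 2
    den = (t_opt - t_min) * (
        (t_opt - t_min) * (T - t_opt)
        - (t_opt - t_max) * (t_opt + t_min - 2 * T)
    )
    mu = mu_opt * num / den
    # Growth rate is zero outside the cardinal range (t_min, t_max)
    return np.where((T > t_min) & (T < t_max), mu, 0.0)

print(ctmi([10, 25, 37, 45], t_min=4, t_opt=37, t_max=44, mu_opt=2.0))
```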

10.
The so-called minimal model (MM) of glucose kinetics is widely employed to estimate insulin sensitivity (S(I)) in both clinical and epidemiological studies. Usually, MM is numerically identified by resorting to Fisherian parameter estimation techniques, such as maximum likelihood (ML). However, unsatisfactory parameter estimates are sometimes obtained, e.g. S(I) estimates that are virtually zero or unrealistically high and affected by very large uncertainty, making the practical use of MM difficult. The first result of this paper is a mathematical demonstration that these estimation difficulties are inherent to the MM structure, which can expose S(I) estimation to the risk of numerical non-identifiability. The second result is based on simulation studies and shows that Bayesian parameter estimation techniques are less sensitive to these difficulties, in terms of both accuracy and precision, than the Fisherian ones. In conclusion, Bayesian parameter estimation can successfully deal with difficulties of MM identification inherently due to its structure.
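A minimal sketch of the Bergman minimal-model equations underlying this analysis, where S(I) = p3/p2 (which is why poorly identified p2 and p3 can make S(I) estimates degenerate); the parameter values and the insulin input are illustrative assumptions, not study estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

Gb, Ib = 90.0, 10.0            # basal glucose and insulin
p1, p2, p3 = 0.03, 0.02, 1e-5  # illustrative kinetic parameters

def insulin(t):
    return Ib + 80.0 * np.exp(-t / 20.0)  # crude post-IVGTT insulin curve

def mm(t, y):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb   # glucose kinetics
    dX = -p2 * X + p3 * (insulin(t) - Ib)  # remote insulin action
    return [dG, dX]

sol = solve_ivp(mm, (0, 180), [250.0, 0.0], t_eval=np.linspace(0, 180, 19))
print("S_I =", p3 / p2)
print(sol.y[0])  # glucose trajectory at the 19 sampling times
```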

11.
Haplotype analyses have become increasingly common in genetic studies of human disease because of their ability to identify unique chromosomal segments likely to harbor disease-predisposing genes. The study of haplotypes is also used to investigate many population processes, such as migration and immigration rates, linkage-disequilibrium strength, and the relatedness of populations. Unfortunately, many haplotype-analysis methods require phase information that can be difficult to obtain from samples of nonhaploid species. There are, however, strategies for estimating haplotype frequencies from unphased diploid genotype data collected on a sample of individuals that make use of the expectation-maximization (EM) algorithm to overcome the missing phase information. The accuracy of such strategies, compared with other phase-determination methods, must be assessed before their use can be advocated. In this study, we consider and explore sources of error between EM-derived haplotype frequency estimates and their population parameters, noting that much of this error is due to sampling error, which is inherent in all studies, even when phase can be determined. In light of this, we focus on the additional error between haplotype frequencies within a sample data set and EM-derived haplotype frequency estimates incurred by the estimation procedure. We assess the accuracy of haplotype frequency estimation as a function of a number of factors, including sample size, number of loci studied, allele frequencies, and locus-specific allelic departures from Hardy-Weinberg and linkage equilibrium. We point out the relative impacts of sampling error and estimation error, calling attention to the pronounced accuracy of EM estimates once sampling error has been accounted for. We also suggest that many factors that may influence accuracy can be assessed empirically within a data set, a fact that can be used to create "diagnostics" that a user can turn to for assessing potential inaccuracies in estimation.
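A minimal sketch of the EM strategy for the simplest case of two biallelic loci, where only double heterozygotes have ambiguous phase; the genotype counts are made up for illustration.

```python
import numpy as np

# Genotype counts n[i, j]: i = copies of allele A, j = copies of allele B
n = np.array([[10, 15, 4],
              [12, 40, 11],
              [3, 9, 6]])
N = n.sum()

# Per-individual haplotype contributions (AB, Ab, aB, ab) for unambiguous genotypes
UNAMBIG = {(0, 0): (0, 0, 0, 2), (0, 1): (0, 0, 1, 1), (0, 2): (0, 0, 2, 0),
           (1, 0): (0, 1, 0, 1), (1, 2): (1, 0, 1, 0),
           (2, 0): (0, 2, 0, 0), (2, 1): (1, 1, 0, 0), (2, 2): (2, 0, 0, 0)}

p = np.full(4, 0.25)  # haplotype frequencies, order: AB, Ab, aB, ab
for _ in range(200):
    c = np.zeros(4)
    for (i, j), h in UNAMBIG.items():
        c += n[i, j] * np.array(h)
    # E-step: split double heterozygotes between phases AB/ab and Ab/aB
    q = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
    c += n[1, 1] * np.array([q, 1 - q, 1 - q, q])
    p_new = c / (2 * N)   # M-step: renormalize expected haplotype counts
    if np.allclose(p, p_new, atol=1e-10):
        break
    p = p_new
print(dict(zip(["AB", "Ab", "aB", "ab"], p.round(4))))
```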

12.
Patrick C. Tobin, Ecography, 2004, 27(6): 767-775
The estimation of spatial autocorrelation in spatially- and temporally-referenced data is fundamental to understanding an organism's population biology. I used four sets of census field data, and developed an idealized space-time dynamic system, to study the behavior of spatial autocorrelation estimates when a practical method of sampling is employed. Estimates were made using both a classical geostatistical approach and a recently developed non-parametric approach. In field data, the estimate of the local spatial autocorrelation (i.e. autocorrelation as the distance between pairs of sampling points approaches 0), was greatly affected by sample size, while the range of spatial dependence (i.e. the distance at which the autocorrelation becomes negligible) was fairly stable. Similar patterns were seen in the theoretical system, as well as greater variability in local spatial autocorrelation during the invasion stage of colonization. When sampling for the purposes of quantifying spatial patterns, improved estimates of spatial autocorrelation may be obtained by increasing the number of pairs of points that are close in space at the expense of attempting to cover the entire region of interest with equidistant sampling points. Also, results from the theoretical space-time system suggested that greater resolution in sampling may be required in newly establishing populations relative to those already established.
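A minimal sketch of the classical geostatistical side of this comparison: an empirical semivariogram computed from a simulated field, from which the local autocorrelation (near lag zero) and the range of spatial dependence can be read; the field, sampling locations, and lag bins are illustrative. Note how the pair count per bin, largest at short lags only if many close pairs are sampled, governs the precision of the local estimate.

```python
import numpy as np

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(300, 2))                  # sampling locations
z = np.sin(xy[:, 0] / 15.0) + rng.normal(0, 0.3, 300)    # spatially structured values

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2                # semivariance per pair
iu = np.triu_indices(300, k=1)                           # count each pair once

bins = np.arange(0, 60, 5)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d[iu] >= lo) & (d[iu] < hi)
    print(f"lag {lo:2d}-{hi:2d}: gamma = {sq[iu][mask].mean():.3f} (n = {mask.sum()})")
```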

13.
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important implications in understanding and predicting the effects of temperature on the dynamics of populations.
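A minimal sketch of the "indirect" route described here, assuming Arrhenius scaling r(T) = r0 * exp(-E / (kB * T)): simulate logistic growth at several temperatures, fit the growth rate separately at each temperature, then recover the activation energy from an Arrhenius regression. All parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617e-5                       # Boltzmann constant, eV/K
E_true, r0, K = 0.65, 5e10, 1000.0

def logistic(t, r, n0=10.0):
    # Deterministic logistic solution; curve_fit fits r only (n0 fixed by default)
    return K * n0 * np.exp(r * t) / (K + n0 * (np.exp(r * t) - 1))

rng = np.random.default_rng(3)
temps = np.array([285.0, 290.0, 295.0, 300.0, 305.0])
t = np.linspace(0, 30, 16)

r_hat = []
for T in temps:
    r_T = r0 * np.exp(-E_true / (k_B * T))
    n = logistic(t, r_T) * rng.lognormal(0, 0.05, t.size)  # sampling noise
    popt, _ = curve_fit(logistic, t, n, p0=[0.5])
    r_hat.append(popt[0])

# Arrhenius plot: ln r = ln r0 - E / (k_B * T), so the slope gives -E
slope, _ = np.polyfit(1 / (k_B * temps), np.log(r_hat), 1)
print(f"estimated E = {-slope:.3f} eV (true: {E_true})")
```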

14.
Gao Meng, Acta Ecologica Sinica, 2016, 36(14): 4406-4414
Nearest-neighbor methods are an effective class of techniques for analyzing the spatial distribution patterns of plants; among the most commonly used is the probability distribution model of neighbor distances, which describes the statistical characteristics of the distances to neighboring individuals. However, for aggregated distribution patterns, the probability distribution model of (individual-to-individual) neighbor distances has a complex expression, and its parameter estimation is computationally expensive. Based on the properties of the model's expectation and variance, a simplified parameter estimation method is proposed, with a genetic algorithm used to perform the parameter optimization; the results show that the genetic algorithm can effectively estimate the model's two parameters. The model was also fitted to spatial distribution data for three cold-temperate tree species from southern Vancouver Island, Canada. The results show that the probability distribution model fits the neighbor-distance distributions of Douglas fir (P. menziesii) and western hemlock (T. heterophylla) well, but fits western redcedar (T. plicata) poorly because of its highly aggregated, clumped distribution. Douglas fir is approximately randomly distributed in the plot, and its spatial aggregation parameter depends only weakly on spatial scale, whereas the spatial aggregation parameters of western redcedar and western hemlock are scale-dependent, increasing with the order of the neighbor distance. Finally, the advantages and limitations of the model and the parameter estimation method are discussed.

15.
Vasco DA, Genetics, 2008, 179(2): 951-963
The estimation of ancestral and current effective population sizes in expanding populations is a fundamental problem in population genetics. Recently it has become possible to scan entire genomes of several individuals within a population. These genomic data sets can be used to estimate basic population parameters such as the effective population size and population growth rate. Full-data-likelihood methods potentially offer a powerful statistical framework for inferring population genetic parameters. However, for large data sets, computationally intensive methods based upon full-likelihood estimates may encounter difficulties. First, the computational method may be prohibitively slow or difficult to implement for large data. Second, estimation bias may markedly affect the accuracy and reliability of parameter estimates, as suggested from past work on coalescent methods. To address these problems, a fast and computationally efficient least-squares method for estimating population parameters from genomic data is presented here. Instead of modeling genomic data using a full likelihood, this new approach uses an analogous function, in which the full data are replaced with a vector of summary statistics. Furthermore, these least-squares estimators may show significantly less estimation bias for growth rate and genetic diversity than a corresponding maximum-likelihood estimator for the same coalescent process. The least-squares statistics also scale up to genome-sized data sets with many nucleotides and loci. These results demonstrate that least-squares statistics will likely prove useful for nonlinear parameter estimation when the underlying population genomic processes have complex evolutionary dynamics involving interactions between mutation, selection, demography, and recombination.
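A minimal sketch of the summary-statistic least-squares idea, using the standard neutral expectation E[xi_i] = theta / i for the site-frequency spectrum; the spectrum below is made up for illustration, and the actual method targets richer statistics and growth models.

```python
import numpy as np

# Observed site-frequency spectrum: counts of sites at derived frequency i/n
xi = np.array([42, 19, 15, 9, 8, 6, 5, 4, 3])
i = np.arange(1, xi.size + 1)

# Least squares for theta in xi_i ~ theta * (1/i): closed-form solution
x = 1.0 / i
theta_hat = (x @ xi) / (x @ x)
print(f"theta_hat = {theta_hat:.2f}")
```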

16.
A literature review showed that numerous studies have dealt with the estimation of fish daily ration in the field. Comparisons of results from different studies are often difficult due to the use of different approaches and methods for parameter estimations. The objective of the present study was to compare the most commonly used approaches to estimate fish daily ration and to propose a standardized procedure for their estimation in the field. Comparisons were based on a field experiment specifically designed to investigate these questions and on data and theoretical considerations found in the literature. The results showed that (1) the gut fullness computed with entire digestive tract content is preferable to the stomach content only, supporting recent research done on other fish species; (2) it is important to consider the data distribution before estimating parameters; (3) estimates of experimental evacuation rates should be used rather than maximum evacuation rate for species showing no feeding periodicity; (4) it is necessary to exclude parasites from gut content in the computation of daily ration as they may significantly decrease daily ration estimates (by an average of 29.3% in this study); and (5) the Eggers (1977) model is as appropriate as, and less complex than, the Elliott & Persson (1978) model for estimating fish daily ration in the field, again supporting recent experiments done on other fish species.
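A minimal sketch of the Eggers (1977) estimator favored here: daily ration = 24 x (mean gut content over the diel cycle) x (exponential evacuation rate). The gut-content samples and evacuation rate below are made up; per point (4) above, parasites would be excluded from gut content before this computation.

```python
import numpy as np

# Mean gut content (g prey per g fish) at sampling times spread over 24 h
gut_content = np.array([0.8, 1.4, 2.1, 2.6, 2.2, 1.5, 1.0, 0.7])
R = 0.12  # experimentally estimated exponential evacuation rate, per hour

daily_ration = 24.0 * gut_content.mean() * R
print(f"daily ration = {daily_ration:.2f} g prey / g fish / day")
```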

17.
The inverse problem in electrocardiography is studied analytically using a concentric spheres model with no symmetry assumptions on the potential distribution. The mathematical formulation is presented, and the existence and uniqueness of the solution are briefly discussed. The solution to the inverse problem is inherently very unstable, and the magnitude of this instability is demonstrated using the derived analytical inverse solution for the spherical model. Regularization methods used to date are based on a regularization parameter that does not relate to any measurable physiological parameters. This paper presents a regularization method that is based on a parameter in the form of an a priori bound on the L2 norm of the inverse solution. Such a bound can be obtained from theoretical estimates based on the measured values of the body surface potentials together with experimental knowledge about the magnitudes of the epicardial potentials. Based on the presented regularization, an exact form of the regularized solution and estimates of its accuracy are derived.

18.
In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position are consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
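A minimal sketch of zero-order Tikhonov regularization via SVD filter factors, the machinery that parameter-choice methods such as CRESO operate on; the transfer matrix and data below are random stand-ins, not the tank geometry, and the lambda values are arbitrary scan points rather than a CRESO selection.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 30))             # body-surface x epicardial transfer matrix
x_true = np.sin(np.linspace(0, 3 * np.pi, 30))
b = A @ x_true + rng.normal(0, 0.05, 60)  # noisy torso potentials

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def tikhonov(lam):
    # Filter factors f_i = s_i^2 / (s_i^2 + lam^2) damp small singular values
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f / s * (U.T @ b))

for lam in (1e-4, 1e-2, 1e-1):
    x = tikhonov(lam)
    err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:g}: relative error = {err:.3f}")
```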

19.
It is often difficult to determine optimal sampling design for non-invasive genetic sampling, especially when dealing with rare or elusive species depleted of genetic diversity. To address this problem, we ran a hair-snag pilot study on the remnant Apennine brown bear population. We used occupancy models to estimate the performance of an improved field protocol, a meta-analysis approach to indirectly model capture probability, and simulations to evaluate the effect of genotyping errors on the accuracy of capture-recapture population estimates. In spring 2007 we collected 70 bear hair samples in 15 cells of 5 × 5 km, using five 10-day trapping sessions. Bear detectability was higher in 2007 than in a previous attempt on the same population in 2004, reflecting improved field protocols and sampling design. However, individual capture probability was 0.136 (95% CI = 0.120–0.152), still below the minimum requirements of capture-mark-recapture closed population models. We genotyped hair samples (n = 63) at 9 microsatellite loci, obtaining 94% polymerase chain reaction success and 13 bear genotypes. Estimated PIDsib was 0.00594, and the per-genotype error rate was 0.13, corresponding to a 99% probability of correct individual identification. Simulation studies showed that the effect of non-corrected or filtered genetic errors on the accuracy of population estimates was negligible only when individual capture probability was >0.2. Our results underline how the interaction among field protocols, sampling strategies and genotyping errors may affect the accuracy of DNA-based estimates of small and genetically depleted populations, and raise concerns about the feasibility of a survey using only traditional hair-snag sampling. In this and similar cases, indications from pilot studies can provide cost-effective means to evaluate the efficiency of designed sampling and modelling procedures.
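A minimal sketch of the sibling probability-of-identity statistic (the PIDsib reported above), computed with the standard per-locus formula and multiplied across loci; the allele frequencies below are made up for illustration.

```python
import numpy as np

def pi_sibs_locus(p):
    # Per-locus PI among full siblings from allele frequencies p
    p = np.asarray(p, dtype=float)
    s2, s4 = np.sum(p**2), np.sum(p**4)
    return 0.25 + 0.5 * s2 + 0.5 * s2**2 - 0.25 * s4

# One allele-frequency vector per microsatellite locus (9 loci in the study)
loci = [np.array([0.4, 0.3, 0.2, 0.1])] * 9
pid_sib = np.prod([pi_sibs_locus(p) for p in loci])
print(f"PI_sibs = {pid_sib:.2e}")
```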

20.
We employ an optimal solution to both the shape-from-motion problem and the related problem of estimating self-movement on a purely optical basis to deduce practical rules of thumb for the limits of the optic-flow information content in the presence of perturbation of the motion parallax field. The results are illustrated and verified by means of a computer simulation. The results allow estimates of the accuracy of depth and egomotion estimates as a function of the accuracy of data sampling and the width of the field of view, as well as estimates of the interaction between rotational and translational components of the movement.
