Similar Documents
20 similar documents found.
1.
The aim of dose-finding studies is sometimes to estimate parameters in a fitted model, and the precision of the parameter estimates should be as high as possible. This can be achieved by increasing the number of subjects in the study, N, by choosing a good and efficient estimation approach, and by designing the dose-finding study in an optimal way. Increasing the number of subjects is not always feasible because of cost, time limitations, etc. In this paper, we assume fixed N and consider estimation approaches and study designs for multiresponse dose-finding studies. We work with diabetes dose-response data and compare a system estimation approach, which fits a multiresponse Emax model to the data, with equation-by-equation estimation, which fits uniresponse Emax models to the data. We then derive optimal designs for estimating the parameters in the multi- and uniresponse Emax models and study the efficiency of these designs.
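To make the estimation step concrete, here is a minimal sketch of equation-by-equation fitting for a single-response Emax model; the dose grid, parameter values, and noise level are illustrative assumptions, not the diabetes data analysed in the paper (the system approach would fit all responses jointly instead).

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, e_max, ed50):
    """Uniresponse Emax model: E(d) = E0 + Emax * d / (ED50 + d)."""
    return e0 + e_max * dose / (ed50 + dose)

rng = np.random.default_rng(0)
dose = np.array([0.0, 5.0, 25.0, 50.0, 100.0, 150.0])    # illustrative design
true = (1.0, 8.0, 30.0)                                   # assumed E0, Emax, ED50
y = emax(dose, *true) + rng.normal(0.0, 0.3, dose.size)   # simulated response

# Equation-by-equation estimation: fit one response variable at a time.
popt, pcov = curve_fit(emax, dose, y, p0=(0.5, 5.0, 20.0))
print("estimates:", popt)
print("standard errors:", np.sqrt(np.diag(pcov)))
```

The precision of these estimates (the diagonal of `pcov`) is exactly what an optimal choice of the dose grid is meant to improve for fixed N.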

2.
Models of optimal carbon allocation schedules have influenced the way plant ecologists think about life history evolution, particularly for annual plants. The present study asks (1) how, within the framework of these models, predictions are affected by within-season variation in mortality and carbon assimilation rates, and (2) what the consequences of these prediction changes are for empirical tests of the models. A companion paper examines the basic assumptions of the models themselves. I conducted a series of numerical experiments with a simple carbon allocation model. Results suggest that both qualitative and quantitative predictions can sometimes be sensitive to the parameter values for net assimilation rate and mortality: for some parameter values, the time and size at onset of reproduction, as well as the number of reproductive intervals, vary considerably as a result of small variations in these parameters. For other parameter values, small variations in the parameters result in only small changes in predicted phenotype, but these have very large fitness consequences. Satisfactory empirical tests are thus likely to require high accuracy in parameter estimates. The effort required for parameter estimation imposes a practical constraint on empirical tests, making large multipopulation comparisons impractical. It may be most practical to compare the predicted and observed fitness consequences of variation in the timing of onset of reproduction.
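A toy version of such a numerical experiment, assuming the classical bang-bang allocation result (all carbon to growth before a switching time, all to reproduction after it); the assimilation rate `a`, mortality rate `m`, and season length `T` are invented values for illustration only, not parameters from the study.

```python
import numpy as np

def fitness(t_switch, a=0.1, m=0.02, T=100.0, dt=0.1):
    """Survival-weighted reproductive output of a bang-bang allocation
    schedule: all production goes to vegetative growth before t_switch,
    and to reproduction afterwards."""
    v, r = 1.0, 0.0
    for t in np.arange(0.0, T, dt):
        prod = a * v                         # net assimilation
        if t < t_switch:
            v += prod * dt                   # grow the vegetative pool
        else:
            r += prod * np.exp(-m * t) * dt  # discount seeds by survival
    return r

switches = np.linspace(0.0, 100.0, 101)
best = switches[np.argmax([fitness(s) for s in switches])]
print("predicted onset of reproduction:", best)
# Re-running with slightly perturbed a or m shows how sensitive the
# predicted phenotype (and its fitness) is to these parameters.
```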

3.
Optimal experimental designs were evaluated for the precise estimation of the parameters of the Hill model. The most effective designs were obtained using the criterion of D-optimality. For the Hill model, optimal designs replicate three sampling points. These points were shown to be quite sensitive to the behavior of the experimental error. Since an investigator is often uncertain about error conditions in biological studies, a practical approach is to use the sampling scheme calculated for an intermediate error condition. Thus, if the behavior of the error variances is not known, precise parameters of the Hill model are obtained by choosing concentrations that yield fractional responses (responses divided by their asymptotic maximum value) of 0.086, 0.581 and 1.0. When experimental constraints limit the maximum attainable concentration and response, all design points are lowered; appropriate designs can be constructed from the design that is optimal when constraints result in a maximum attainable fractional response of 0.5. The optimal designs were found to be robust when the parameter values assumed by the investigator did not equal their true values. The estimating efficiencies of two frequently applied designs were also assessed. Uniformly spaced concentrations yielded imprecise parameters. Six-point, geometrically spaced designs gave generally good results, but their estimating efficiency was generally exceeded by the recommended sampling schemes, even in the presence of uncertainty about error conditions. The method exemplified in this paper can be used for other models.
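Translating the recommended fractional responses into concentrations is a one-line inversion of the Hill equation; a sketch, assuming the parameterization y/ymax = c^n / (K^n + c^n) with illustrative prior guesses for K and n (a fractional response of 1.0 is only reached asymptotically, so it maps to the highest attainable concentration):

```python
def design_concentration(f, K, n, c_max):
    """Concentration producing fractional response f under the Hill model
    y/ymax = c**n / (K**n + c**n); f = 1 maps to the maximum attainable
    concentration c_max, since ymax is only approached asymptotically."""
    if f >= 1.0:
        return c_max
    return K * (f / (1.0 - f)) ** (1.0 / n)

K, n, c_max = 10.0, 2.0, 1000.0       # assumed prior guesses, illustrative
for f in (0.086, 0.581, 1.0):         # recommended D-optimal design points
    print(f, "->", design_concentration(f, K, n, c_max))
```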

4.
The fast Fourier transform has been the gold standard for transforming data from the time to the frequency domain in many spectroscopic methods, including NMR. While reliable, it has the drawback of requiring a grid of uniformly sampled data points. Sampling all indirect dimensions of multidimensional experiments uniformly demands very long measurement times and does not even allow reaching the optimal evolution times that would match the resolution power of modern high-field instruments. Thus, many alternative sampling and transformation schemes have been proposed. Their common challenges are suppressing the artifacts caused by the non-uniformity of the sampling schedules, preserving the relative signal amplitudes, and keeping the computing time needed for spectrum reconstruction manageable. Here we present a fast implementation of the iterative soft thresholding approach (istHMS) that can reconstruct high-resolution non-uniformly sampled NMR data in up to four dimensions within a few hours, making routine reconstruction of high-resolution NUS 3D and 4D spectra convenient. We include a graphical user interface for generating sampling schedules with the Poisson-gap method and for estimating optimal evolution times based on molecular properties. The performance of the approach is demonstrated by reconstructing non-uniformly sampled medium- and high-resolution 3D and 4D protein spectra acquired with sampling densities as low as 0.8%. The method presented here facilitates the acquisition, reconstruction and use of multidimensional NMR spectra at spectral resolutions in the indirect dimensions that are otherwise unreachable.
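The core iteration of soft thresholding is compact enough to sketch; this is a 1D toy with a synthetic two-peak FID and a 25% sampling density, not the istHMS implementation itself (which handles up to four dimensions, Poisson-gap schedules, and far lower densities):

```python
import numpy as np

def ist_reconstruct(fid, mask, n_iter=200, thresh_frac=0.9):
    """1D iterative soft thresholding: alternately shrink the spectrum
    towards sparsity and re-impose the measured time-domain points."""
    x = fid * mask
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        lam = thresh_frac * np.abs(spec).max()       # per-iteration threshold
        mag = np.maximum(np.abs(spec) - lam, 0.0)    # soft thresholding
        x = np.fft.ifft(mag * np.exp(1j * np.angle(spec)))
        x[mask] = fid[mask]                          # data consistency
    return np.fft.fft(x)

n = 512
t = np.arange(n)
fid = np.exp(2j*np.pi*0.11*t - t/150) + 0.5*np.exp(2j*np.pi*0.31*t - t/150)
rng = np.random.default_rng(1)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 4, replace=False)] = True    # 25% sampling density
spectrum = ist_reconstruct(fid, mask)
```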

5.
The traditional variance-components approach to quantitative trait locus (QTL) linkage analysis is sensitive to violations of normality and fails for selected sampling schemes. Recently, a number of new methods have been developed for QTL mapping in humans. Most of the new methods are based on score statistics or regression-based statistics and are expected to be relatively robust to non-normality of the trait distribution and to selected sampling, at least in terms of type I error. Whereas the theoretical development of these statistics is more or less complete, some practical issues concerning their implementation still need to be addressed. Here we study several of these issues: the choice of denominator variance estimates, the weighting of pedigrees, the effect of parameter misspecification, the effect of non-normality of the trait distribution, and the effect of incorporating dominance. We present a comprehensive discussion of the theoretical properties of various denominator variance estimates and of the weighting issue, and then perform simulation studies for nuclear families to compare the methods in terms of power and robustness. Based on our analytical and simulation results, we provide general guidelines regarding the choice of appropriate QTL mapping statistics in practical situations.

6.
Fluorescence lifetime imaging (FLIM), when paired with Förster resonance energy transfer (FLIM-FRET), enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches depends on multiple factors, such as the signal-to-noise ratio and the number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition, yet it is critical that the temporal data sets carry sufficient information content for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based on sensitivity analysis is presented to identify the time points that provide the best quantitative parameter estimates for a given number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimation of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared with a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), validated in silico and in vivo. This nearly order-of-magnitude reduction in the number of required time points enables the use of FLIM-FRET for high-throughput applications that would be infeasible if the full set of time sampling points were used.
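A sketch of the kind of sensitivity-driven selection the abstract describes: greedy D-optimal selection of 10 points from a 90-point window for a bi-exponential donor decay. The decay model, the parameter values, and the greedy (rather than exhaustive) search are all simplifying assumptions made here for illustration.

```python
import numpy as np

def decay(t, f, tau_q, tau_u):
    """Donor decay: quenched fraction f (lifetime tau_q) plus
    unquenched fraction 1 - f (lifetime tau_u)."""
    return f * np.exp(-t / tau_q) + (1 - f) * np.exp(-t / tau_u)

def jacobian(t, theta, eps=1e-6):
    """Numerical sensitivities dI/dtheta at the time points t."""
    cols = []
    for i in range(len(theta)):
        hi = np.array(theta); hi[i] += eps
        lo = np.array(theta); lo[i] -= eps
        cols.append((decay(t, *hi) - decay(t, *lo)) / (2 * eps))
    return np.column_stack(cols)

theta = (0.4, 0.5, 2.5)                 # assumed f, tau_q, tau_u (ns)
grid = np.linspace(0.05, 10.0, 90)      # the full 90-point acquisition
chosen = []
for _ in range(10):                     # greedily grow a 10-point design
    dets = []
    for c in grid:
        J = jacobian(np.append(chosen, c), theta)
        # small ridge keeps det defined before the design has >= 3 points
        dets.append(np.linalg.det(J.T @ J + 1e-12 * np.eye(len(theta))))
    chosen.append(grid[np.argmax(dets)])
print(np.sort(chosen))
```

Replication of a time point is deliberately allowed here; as with the Hill model designs above, D-optimal designs often replicate a small number of support points.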

7.
In a recent paper, I presented a sampling formula for species abundances in multiple samples under the prevailing neutral model of biodiversity, but its practical implementation for parameter estimation was only possible when the samples came from local communities assumed to be equally dispersal limited. Here I show how the same sampling formula can also be used for maximum-likelihood estimation of the model parameters when the samples have different degrees of dispersal limitation. Moreover, it performs better than other, approximate, parameter estimation approaches. I also show how to calculate errors in the parameter estimates, something that has so far been largely ignored in the development of, and debate on, neutral theory.

8.
To obtain accurate estimates of activity budget parameters, samples must be unbiased and precise. Many researchers have considered how biased data may affect their ability to draw conclusions and have examined ways to decrease bias in sampling efforts, but few have addressed the implications of ignoring estimate precision. We propose a method to assess whether the number of instantaneous samples collected is sufficient to obtain precise activity budget parameter estimates. We draw on sampling theory to determine the number of observations per animal required to reach a desired bound on the error of estimation, based on a stratified random sample with individual animals acting as strata. We also discuss the optimal balance between the number of individuals sampled and the number of observations per individual for a variety of sampling conditions. We present an empirical dataset on pronghorn (Antilocapra americana) as an example of the utility of the method. The required number of observations for precise estimates varied between common and rare behaviors, but precise estimates of common behaviors were achieved with fewer than 255 observations per individual. The two most apparent factors affecting the required number of observations were the number of individuals sampled and the complexity of the activity budget. The technique accounts for variation in individual activity budgets as well as population-level variation in activity budget parameter estimates, and helps ensure that estimates are precise. The method can also be used for planning future sampling efforts.
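The bound-on-error calculation for a single behavioural proportion is short; a sketch using the classical sampling-theory formula for estimating a proportion p to within bound B (roughly two standard errors). The exact stratified formula used in the paper may differ, and the numbers below are illustrative:

```python
def observations_needed(p, B, N=None):
    """Instantaneous samples needed to estimate a behaviour occupying
    proportion p of the activity budget to within a bound on the error
    of estimation B. N is the finite population of possible sampling
    instants; leave it as None for the infinite-population
    approximation n = 4 p (1 - p) / B**2."""
    q, D = 1.0 - p, B**2 / 4.0
    if N is None:
        return p * q / D
    return N * p * q / ((N - 1) * D + p * q)

print(round(observations_needed(0.50, 0.05)))  # common behaviour -> 400
print(round(observations_needed(0.05, 0.05)))  # rare behaviour   -> 76
```

A rare behaviour needs fewer samples for the same absolute bound but far more for the same relative precision, which is why required sample sizes differ between common and rare behaviors.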

9.
The optimal schedules for breast cancer screening, in terms of examination frequency and the ages at which examinations are given, are of practical interest. A decision-theoretic approach is explored to search for optimal screening programs that achieve maximum survival benefit while balancing the associated cost to the health care system. We propose a class of utility functions that account for the costs of screening examinations and the value of survival benefit under a non-stable disease model. We consider two optimization criteria: optimizing the number of screening examinations with equal intervals between exams and no prefixed total cost, and optimizing the ages at which screening should be given for a fixed total cost. We show that an optimal solution exists under each framework. The proposed methods can accommodate women at different levels of risk for breast cancer, so that the optimal screening strategies are tailored to a woman's risk of developing the disease. Results of a numerical study are presented, and the proposed models are illustrated with various data inputs. We also use data inputs from the Health Insurance Plan of New York (HIP) and the Canadian National Breast Screening Study (CNBSS) to illustrate the proposed models and to compare the utility values of the optimal schedules with those of the actual schedules in the HIP and CNBSS trials. Here, utility is defined as the difference in cure rates between cases found at screening examinations and cases found between screening examinations, accounting for the cost of examinations under a given screening schedule.

10.
Strategies for genetic mapping of categorical traits
Shaoqi Rao, Xia Li. Genetica, 2000, 109(3): 183-197
The search for efficient and powerful statistical methods and optimal mapping strategies for categorical traits under various experimental designs continues to be one of the main tasks in genetic mapping studies. Methodologies for genetic mapping of categorical traits can generally be classified into two groups: linear and non-linear models. We develop a method based on a threshold model, termed the mixture threshold model, to handle ordinal (or binary) data from multiple families. Monte Carlo simulations are conducted to compare the statistical efficiency and properties of the proposed non-linear model with those of a linear model for genetic mapping of categorical traits using multiple families. The mixture threshold model has notably higher statistical power than the linear models. There may be an optimal sampling strategy (family size versus number of families) at which genetic mapping reaches its maximal power and minimal estimation error. A single large-sibship family does not necessarily yield maximal power for detection of quantitative trait loci (QTL), owing to genetic sampling of QTL alleles. The QTL allelic model has a marked impact on the efficiency of genetic mapping of categorical traits, in terms of both statistical power and QTL parameter estimation. Compared with a fixed number of QTL alleles (two or four), a model with an infinite number of QTL alleles and normally distributed allelic effects results in a loss of statistical power. The results imply that inbred designs (e.g. F2 or four-way crosses) with a few segregating QTL alleles, or reducing the number of QTL alleles in outbred populations (e.g. by selection), are desirable in genetic mapping of categorical traits using data from multiple families.

11.
Taxon sampling and the accuracy of phylogenetic analyses
Appropriate and extensive taxon sampling is one of the most important determinants of accurate phylogenetic estimation. In addition, the accuracy of inferences about evolutionary processes obtained from phylogenetic analyses is improved significantly by thorough taxon sampling. Many recent efforts to improve phylogenetic estimates have focused instead on increasing sequence length or the overall number of characters in the analysis, and this often does have a beneficial effect on accuracy. However, phylogenetic analyses of few taxa (each represented by many characters) can be subject to strong systematic biases, which in turn produce high measures of repeatability (such as bootstrap proportions) in support of incorrect or misleading phylogenetic results. Thus, it is important for phylogeneticists to consider the sampling of taxa as well as the sampling of characters in designing phylogenetic studies. Taxon sampling also improves estimates of evolutionary parameters derived from phylogenetic trees and is thus important for improved applications of phylogenetic analyses. Analysis of sensitivity to taxon inclusion, of the possible effects of long-branch attraction, and of the sensitivity of parameter estimation for model-based methods should be part of any careful and thorough phylogenetic analysis. Furthermore, recent improvements in phylogenetic algorithms and in computational power have removed many constraints on analyzing large, thoroughly sampled data sets. Thorough taxon sampling is thus one of the most practical ways to improve the accuracy of phylogenetic estimates, as well as the accuracy of the biological inferences based on the resulting trees.

12.
In a population intended for breeding and selection, the questions of interest for a specific segregating QTL are the variance it generates in the population and the number and effects of its alleles. One approach to these questions is to extract several inbreds from the population and use them to generate multiple mapping families. Given random sampling of parents, the sampling strategy may be an important factor determining the power of the analysis and its accuracy in estimating QTL variance and allelic number. We describe appropriate multiple-family QTL mapping methodology and apply it to simulated data sets to determine optimal sampling strategies in terms of family number versus family size. Genomes were simulated with seven chromosomes, on which 107 markers and six QTL were distributed; the total heritability was 0.60, and two to ten alleles segregated at each QTL. Sampling strategies ranged from sampling two inbreds to generate a single family of 600 progeny, to sampling 40 inbreds to generate 40 families of 15 progeny each. Strategies involving only one to five families were subject to variation due to the sampling of inbred parents; for QTL at which more than two alleles were segregating, these strategies did not sample QTL alleles representative of the original population. Conversely, strategies involving 30 or more parents were subject to variation due to the sampling of QTL genotypes within the resulting small families. Given these constraints, the greatest QTL detection power was obtained with five to ten mapping families, whereas the variance generated by the QTL was estimated most accurately with 20 or more families, and strategies with an intermediate number of families best estimated the number of QTL alleles. We conclude that no single optimal sampling strategy exists; the strategy adopted must depend on the objective.

13.
In this paper the problem of reliable and accurate parameter estimation for unstructured models is considered, and it is illustrated how a theoretically optimal design can be translated into a practically feasible, robust, and informative experiment. The well-known problem of estimating Monod kinetic parameters is used as a vehicle to illustrate our approach. As has long been known, noisy batch measurements do not allow unique and accurate estimation of the kinetic parameters of the Monod model. Techniques of optimal experiment design are therefore exploited to design informative experiments and to improve parameter estimation accuracy. During the design process, practical feasibility has to be kept in mind: the designed experiments are easy to implement in practice and require no additional monitoring equipment. Both the design and the experimental validation of informative fed-batch experiments are illustrated with a case study, namely the growth of the nitrogen-fixing bacterium Azospirillum brasilense.
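The identifiability problem the abstract refers to is easy to reproduce; a minimal batch simulation under Monod kinetics, with invented parameter values (refitting mu_max and K_s to the noisy biomass curve alone produces a long, flat likelihood ridge, which is precisely what designed fed-batch feeding profiles are meant to break):

```python
import numpy as np
from scipy.integrate import solve_ivp

def monod_batch(t, y, mu_max, K_s, Y):
    """Batch growth under Monod kinetics: biomass X, substrate S."""
    X, S = y
    mu = mu_max * S / (K_s + S)      # specific growth rate
    return [mu * X, -mu * X / Y]     # dX/dt, dS/dt

mu_max, K_s, Y = 0.3, 0.5, 0.6       # assumed 'true' values, illustrative
sol = solve_ivp(monod_batch, (0.0, 30.0), [0.05, 5.0],
                args=(mu_max, K_s, Y), dense_output=True)

rng = np.random.default_rng(2)
t_obs = np.linspace(0.5, 30.0, 15)               # batch sampling times
X_obs = sol.sol(t_obs)[0] * (1 + 0.05 * rng.normal(size=t_obs.size))
# Refitting (mu_max, K_s) to X_obs shows many pairs along a ridge fit
# almost equally well: batch data alone cannot pin both parameters down.
```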

14.
Parameter estimation is a critical problem in modeling biological pathways. It is difficult because of the large number of parameters to be estimated and the limited experimental data available. In this paper, we propose a decompositional approach to parameter estimation: it exploits the structure of a large pathway model to break it into smaller components whose parameters can then be estimated independently, leading to significant improvements in computational efficiency. We present our approach in the context of Hybrid Functional Petri Net modeling and evolutionary search for parameter estimation; however, the approach extends easily to other modeling frameworks and is independent of the search method used. We have tested our approach on a detailed model of the Akt and MAPK pathways with two known and one hypothesized crosstalk mechanism. The entire model contains 84 unknown parameters. Our simulation results exhibit good correlation with experimental data and yield positive evidence in support of the hypothesized crosstalk between the two pathways.

15.
Barabesi L, Pisani C. Biometrics, 2002, 58(3): 586-592
In practical ecological sampling studies, a given design (such as plot sampling or line-intercept sampling) is usually replicated more than once. For each replication, Horvitz-Thompson estimation of the target parameter is carried out, and an overall estimator is obtained by averaging the single Horvitz-Thompson estimators. Because the design replications are drawn independently and under the same conditions, the overall estimator is simply the sample mean of the Horvitz-Thompson estimators under simple random sampling. This procedure can be improved by using ranked set sampling. Hence, we propose the replicated protocol under ranked set sampling, which yields more accurate estimation than the replicated protocol under simple random sampling.

16.
The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems as a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known, the optimal sampling strategy can switch abruptly between sampling only from the vector population and sampling only from the host population. We also construct time-independent optimal sampling strategies for periodic sampling, which can involve sampling the host and vector populations simultaneously. Both the time-dependent and time-independent solutions can be useful for sampling design, depending on whether or not the time of introduction of the disease is known. We illustrate the approach with West Nile virus, a globally spreading zoonotic arbovirus. Although our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared with nonlinear simulation models. Our results suggest some simple rules that practitioners can use when developing surveillance programs; these rules require knowledge of the transition rates between epidemiological compartments, of which population was initially infected, and of the cost per sample for serological tests.

17.
Rapidly developing sequencing technologies and declining costs have made it possible to collect genome-scale data from population-level samples in nonmodel systems. Inferential tools for historical demography given these data sets are, at present, underdeveloped. In particular, approximate Bayesian computation (ABC) has yet to be widely embraced by researchers generating these data. Here, we demonstrate the promise of ABC for analysis of the large data sets that are now attainable from nonmodel taxa through current genomic sequencing technologies. We develop and test an ABC framework for model selection and parameter estimation, given histories of three-population divergence with admixture. We then explore different sampling regimes to illustrate how sampling more loci, longer loci, or more individuals affects the quality of model selection and parameter estimation in this framework. Our results show that inferences improved substantially with increases in the number and/or length of sequenced loci, while less was gained by sampling large numbers of individuals. Optimal sampling strategies under our inferential models included at least 2000 loci, each approximately 2 kb in length, sampled from five diploid individuals per population, although specific strategies are model and question dependent. We tested our ABC approach through simulation-based cross-validations and illustrate its application using previously analysed data from the oak gall wasp, Biorhiza pallida.
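The ABC rejection scheme at the heart of such a framework fits in a dozen lines; this toy replaces the coalescent simulator and multi-locus summary statistics of a real divergence-with-admixture analysis with a one-parameter normal model, purely to show the accept/reject logic:

```python
import numpy as np

rng = np.random.default_rng(3)

def abc_rejection(obs_stat, simulate, prior_draw, n_sims=20_000, q=0.002):
    """ABC rejection: draw parameters from the prior, simulate a summary
    statistic for each, and keep the draws whose statistic lands closest
    to the observed one -- an approximate posterior sample."""
    thetas = np.array([prior_draw() for _ in range(n_sims)])
    stats = np.array([simulate(th) for th in thetas])
    dist = np.abs(stats - obs_stat)
    return thetas[dist <= np.quantile(dist, q)]

true_theta = 2.0
obs = rng.normal(true_theta, 1.0, size=50).mean()   # 'observed' summary
posterior = abc_rejection(
    obs,
    simulate=lambda th: rng.normal(th, 1.0, size=50).mean(),
    prior_draw=lambda: rng.uniform(0.0, 5.0),
)
print(posterior.mean(), posterior.std())
```

In a real application, `simulate` would call a coalescent simulator under each candidate divergence/admixture model, and model selection would compare posterior support across models.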

18.
We consider the problem of reconstructing near-perfect phylogenetic trees using binary character states (referred to as BNPP). A perfect phylogeny assumes that every character mutates at most once in the evolutionary tree, yielding an algorithm for binary character states that is computationally efficient but not robust to imperfections in real data. A near-perfect phylogeny relaxes this assumption by allowing at most a constant number of additional mutations. We develop two algorithms for constructing optimal near-perfect phylogenies and provide empirical evidence of their performance. The first, simple algorithm is fixed-parameter tractable when both the number of additional mutations and the number of characters that share four gametes with some other character are constants. The second, more involved algorithm is fixed-parameter tractable when only the number of additional mutations is fixed. We have implemented both algorithms and shown them to be extremely efficient in practice on biologically significant data sets. This work proves the BNPP problem fixed-parameter tractable and provides the first practical phylogenetic tree reconstruction algorithms that find guaranteed optimal solutions while being easy to implement and computationally feasible for data sets of biologically meaningful size and complexity.

19.
Radiolabeled low-density lipoprotein (LDL) is commonly used to study the turnover of LDL apolipoprotein B (apoB), the major protein component of LDL. Following an intravenous injection of radioiodinated LDL, typical sampling schedules have included 20-25 samples over a 14-day period, with frequent sampling during the first 12 hr and daily samples thereafter. This is a burdensome task for subjects and investigators. To improve acceptance of the procedure, we have examined the effects of reduced sampling schedules on the estimation of the fractional catabolic rate (FCR) of LDL apoB. Data from 36 sets of LDL decay curves, obtained from investigations of subjects with a variety of lipoprotein phenotypes, were used to test these schedules. Our results indicate that only 10 samples, chosen at specific intervals over a 14-day period, are sufficient to determine the fractional catabolic rate of LDL in plasma accurately. This reduced sampling schedule should facilitate the study of LDL turnover in large groups of subjects as outpatients.
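A sketch of how an FCR can be computed from such a reduced schedule, assuming the usual approach of fitting a bi-exponential to the normalized plasma decay curve and taking FCR = 1/AUC (the Matthews relation); the schedule, parameter values, and noise level here are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, b1, b2):
    """Normalized plasma decay: a1*exp(-b1*t) + (1 - a1)*exp(-b2*t)."""
    return a1 * np.exp(-b1 * t) + (1 - a1) * np.exp(-b2 * t)

days = np.array([0.02, 0.25, 0.5, 1, 2, 4, 6, 8, 11, 14])  # 10-sample design
true = (0.7, 1.5, 0.25)
rng = np.random.default_rng(4)
y = biexp(days, *true) * (1 + 0.03 * rng.normal(size=days.size))

(a1, b1, b2), _ = curve_fit(biexp, days, y, p0=(0.5, 1.0, 0.2))
auc = a1 / b1 + (1 - a1) / b2     # area under the normalized decay curve
print("FCR =", round(1.0 / auc, 3), "pools/day")
```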

20.
Robust and efficient design of experiments for the Monod model
In this paper the problem of designing experiments for the Monod model, which is frequently used in microbiology, is studied. The model is defined implicitly by a differential equation and has numerous applications in microbial growth kinetics, environmental research, pharmacokinetics, and plant physiology. The designs presented so far in the literature are locally optimal designs, which depend sensitively on a preliminary guess of the unknown parameters and are therefore in many cases not robust to misspecification. Uniform designs and maximin optimal designs are considered as strategies for obtaining robust and efficient designs for parameter estimation. In particular, standardized maximin D- and E-optimal designs are determined and compared with the uniform designs usually applied in these microbiological models. It is demonstrated that maximin optimal designs are substantially more efficient than uniform designs: parameter variances can be halved simply by sampling at optimal times during the experiment. Moreover, the maximin optimal designs usually give the experimenter the possibility of checking the model assumptions, because they have more support points than the Monod model has parameters.
