Similar Articles
20 similar articles found.
1.
Two classes of action-threshold models for the chemical control of wheat scab (Fusarium head blight) are derived. Under the constraints of preserving grain quality and keeping returns above costs, the first class adds the criteria of a favorable output-input ratio and maximum net return, while the second class adds the criterion of an optimal output-input ratio. Sensitivity and rationality analyses confirm the validity of the models, which can serve as decision tools for fungicide application against wheat scab. BASIC programs for both classes of models are also provided.
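As a rough illustration of the kind of decision rule such threshold models support (the paper's derived formulas and BASIC programs are not reproduced here; all parameter names and values below are hypothetical), a benefit-cost sketch in Python might look like:

```python
# Hypothetical illustration of an economic-threshold decision for fungicide use.
# Parameter names and values are assumptions, not the paper's derived models.

def spray_decision(expected_loss_rate, attainable_yield, grain_price,
                   control_efficacy, spray_cost, min_benefit_cost_ratio=2.0):
    """Return (spray?, output-input ratio, net return per unit area)."""
    # Value of the yield saved if the fungicide is applied
    yield_saved = expected_loss_rate * attainable_yield * control_efficacy
    gain = yield_saved * grain_price
    ratio = gain / spray_cost          # output-input (benefit-cost) ratio
    net_return = gain - spray_cost     # net economic return
    return (ratio >= min_benefit_cost_ratio and net_return > 0), ratio, net_return

# Example: yield in kg/ha, price in yuan/kg, cost in yuan/ha (illustrative numbers)
print(spray_decision(expected_loss_rate=0.08, attainable_yield=6000,
                     grain_price=2.4, control_efficacy=0.7, spray_cost=450))
```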

2.
Building on previous work, this paper proposes a new model of ion uptake kinetics (a generalized inhibition model) and derives its rate equation. The model unifies the existing ion-uptake interaction models (competitive, uncompetitive, and noncompetitive inhibition models) as special cases. The authors use it to satisfactorily explain seven published experiments on ion-uptake interactions.
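The abstract does not reproduce the derived rate equation, but the textbook mixed (general) inhibition form shows how competitive, uncompetitive, and noncompetitive inhibition arise as special cases of a single equation; the sketch below assumes this standard form rather than the paper's own:

```python
import math

def uptake_rate(S, I, Vmax, Km, Kic=math.inf, Kiu=math.inf):
    """Mixed (general) inhibition rate equation, textbook form:
       v = Vmax*S / (Km*(1 + I/Kic) + S*(1 + I/Kiu)).
    Kiu -> inf  : competitive inhibition
    Kic -> inf  : uncompetitive inhibition
    Kic == Kiu  : noncompetitive inhibition"""
    return Vmax * S / (Km * (1 + I / Kic) + S * (1 + I / Kiu))

# Example: competitive vs. noncompetitive inhibition at the same S and I
print(uptake_rate(S=0.5, I=0.2, Vmax=10, Km=0.1, Kic=0.05))            # competitive
print(uptake_rate(S=0.5, I=0.2, Vmax=10, Km=0.1, Kic=0.05, Kiu=0.05))  # noncompetitive
```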

3.
Ecological applications of the Miami model   (cited 16 times: 0 self-citations, 16 by others)
刘洪杰 《生态科学》1997,16(1):52-55
The Miami model was the first mathematical model to estimate global net primary productivity (NPP) from environmental variables, but it is often criticized as lacking a theoretical basis. Analysis shows that its basic structure is in fact consistent with ecological principles. Its limited accuracy stems mainly from poor choice of variables and from estimating each single-factor equation independently. These inherent defects can be remedied, and accuracy improved, by selecting better environmental variables and building a composite model. The improved model remains useful for simulating the broad-scale spatial distribution of productivity.
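For reference, the commonly cited single-factor equations of the Miami model (temperature- and precipitation-limited NPP combined by Liebig's law of the minimum) can be written as a short sketch; the improved composite model proposed in the abstract is not reproduced here:

```python
import math

def miami_npp(T, P):
    """Miami model, commonly cited form.
    T: annual mean temperature (deg C); P: annual precipitation (mm).
    Returns NPP in g dry matter per m^2 per year, taken as the minimum of the
    temperature-limited and precipitation-limited estimates."""
    npp_t = 3000.0 / (1.0 + math.exp(1.315 - 0.119 * T))
    npp_p = 3000.0 * (1.0 - math.exp(-0.000664 * P))
    return min(npp_t, npp_p)

print(miami_npp(T=15.0, P=800.0))   # example for a temperate site
```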

4.
Starting from a mechanistic analysis, this paper studies mathematical models of inhaled anesthetics during the induction phase. Using compartmental analysis, a pharmacokinetic model of blood drug concentration and a model of the cumulative inhaled dose are established, and their exact solutions are obtained. Numerical calculations based on cumulative dosing records of sevoflurane yield correlation coefficients of 0.9865 and 0.8874 between the pharmacokinetic model and the inhaled-dose model, respectively, and the measured data. Fitted model curves are also presented.
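The paper's compartmental structure and fitted constants are not given in the abstract; a minimal one-compartment induction sketch with assumed rate constants illustrates the general idea of coupling a blood-concentration model with a cumulative-inhaled-dose model:

```python
import numpy as np

# Minimal one-compartment sketch of induction kinetics (assumed structure and
# parameters; the paper's compartmental model and fitted constants are not given).
k_uptake, k_elim = 0.8, 0.3       # 1/min, hypothetical rate constants
C_insp = 2.0                      # inspired concentration (vol%), held constant
dt, T = 0.01, 15.0                # time step and duration (min)

t = np.arange(0.0, T, dt)
C = np.zeros_like(t)              # blood (compartment) concentration
inhaled = np.zeros_like(t)        # cumulative amount taken up
for i in range(1, len(t)):
    dC = k_uptake * C_insp - k_elim * C[i - 1]       # uptake minus elimination
    C[i] = C[i - 1] + dC * dt
    inhaled[i] = inhaled[i - 1] + k_uptake * C_insp * dt

print(C[-1], inhaled[-1])         # concentration approaches k_uptake*C_insp/k_elim
```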

5.
Based on the cellular automaton approach, a simple model of the tumor-immune system is established to simulate solid tumor growth. Using a two-dimensional stochastic, discrete cellular automaton at the level of individual cells, the model accounts for the macroscopic properties of the immune response, describes the cellular interactions between cancer cells and the host immune system, and represents how immune activity develops around the tumor. Starting from conventional cellular automaton models, stochastic evolution rules for solid tumor growth are designed, a discrete cellular-automaton tumor-immune model is constructed, and numerical simulation experiments are carried out. By adjusting the model parameters, the simulations show that the model reproduces the basic properties of the tumor-immune system.
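A minimal two-dimensional stochastic cellular-automaton sketch (states and update probabilities are illustrative assumptions, not the paper's evolution rules) conveys the flavor of such tumor-immune simulations:

```python
import numpy as np

EMPTY, TUMOR, IMMUNE = 0, 1, 2
rng = np.random.default_rng(0)
N = 50
grid = np.zeros((N, N), dtype=int)
grid[rng.integers(0, N, 100), rng.integers(0, N, 100)] = IMMUNE   # scattered immune cells
grid[N // 2, N // 2] = TUMOR                                       # seed tumor cell
p_div, p_kill = 0.3, 0.2                                           # assumed probabilities

def neighbors(i, j):
    return [((i + di) % N, (j + dj) % N)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))]

for step in range(200):
    for i, j in zip(*np.where(grid == TUMOR)):
        nbrs = neighbors(i, j)
        empties = [p for p in nbrs if grid[p] == EMPTY]
        if empties and rng.random() < p_div:                       # divide into an empty neighbor
            grid[empties[rng.integers(len(empties))]] = TUMOR
        if any(grid[p] == IMMUNE for p in nbrs) and rng.random() < p_kill:
            grid[i, j] = EMPTY                                     # killed by an adjacent immune cell

print("tumor cells after 200 steps:", int((grid == TUMOR).sum()))
```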

6.
Nonlinear reproductive dispersion random-effects models include nonlinear random-effects models and exponential-family nonlinear random-effects models as special cases. By treating the random effects as hypothetical missing data and applying the Metropolis-Hastings (MH) algorithm, a stochastic approximation algorithm for maximum likelihood estimation of the model parameters is proposed. Simulation studies and a real-data example demonstrate the feasibility of the algorithm.
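The core Metropolis-Hastings step used when random effects are treated as missing data can be sketched generically as follows; the full stochastic-approximation maximum likelihood scheme of the paper is not reproduced:

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_iter=5000, proposal_sd=0.5, seed=0):
    """Generic random-walk Metropolis-Hastings sampler for a scalar quantity
    (e.g., a random effect treated as missing data)."""
    rng = np.random.default_rng(seed)
    x, lp, samples = x0, log_target(x0), []
    for _ in range(n_iter):
        x_new = x + rng.normal(scale=proposal_sd)
        lp_new = log_target(x_new)
        if np.log(rng.random()) < lp_new - lp:   # accept with prob min(1, ratio)
            x, lp = x_new, lp_new
        samples.append(x)
    return np.array(samples)

# Example: sample a random effect b ~ N(0, 1) given its log-density
draws = metropolis_hastings(lambda b: -0.5 * b**2, x0=0.0)
print(draws.mean(), draws.std())
```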

7.
A computational model of feature binding   (cited 2 times: 0 self-citations, 2 by others)
Feature binding is a long-standing problem in cognitive science and neuroscience. By combining the idea of noisy-neuron models, Bayesian methods, and spiking neural network models, and by introducing a competition mechanism, this paper proposes a computational model of feature binding, the Bayesian Linking Field model, and uses it to study feature binding in visual perception. Experiments show that the model performs the visual feature-binding task well and offers a new perspective for research on perception.

8.
Modeling approaches in terrestrial carbon cycle research   (cited 23 times: 3 self-citations, 20 by others)
The terrestrial carbon cycle is a central topic in global change research, and carbon cycle models have become an essential tool for studying it. A key modeling problem is how changes in the structure, function, composition, and distribution of terrestrial ecosystems, driven by climate change, rising atmospheric CO2, and human-induced land use and land cover change, together with their feedbacks, affect the terrestrial carbon cycle. Biogeography models and biogeochemical models are the two main types of carbon cycle models; they differ in modeling approach, model properties, characteristics, and scope of application. The future direction is to combine the strengths of the two types and build global dynamic carbon cycle models.

9.
Persistence of a class of microbial food-chain models   (cited 1 time: 0 self-citations, 1 by others)
The persistence of a class of n-dimensional microbial food-chain models is discussed, assuming that the microbial growth rates take the Monod form with intrinsic metabolism. It is shown that the model possesses a solution plane, that all solutions have this plane as their ω-limit set, and hence that all solutions are bounded; furthermore, under certain conditions the n-dimensional food-chain model is shown to be persistent.
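A low-dimensional chemostat food chain with Monod growth (an illustrative special case, not the paper's n-dimensional model) can be simulated to check boundedness and persistence numerically:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Chemostat food-chain sketch with Monod growth: S = nutrient, x1 = prey microbe,
# x2 = predator. Parameter values are illustrative assumptions.
D, S0 = 0.3, 10.0                          # dilution rate, inflow nutrient concentration
m1, a1, y1 = 1.2, 1.0, 0.5                 # Monod parameters and yield, prey
m2, a2, y2 = 1.0, 2.0, 0.5                 # Monod parameters and yield, predator
mu = lambda u, m, a: m * u / (a + u)       # Monod growth function

def rhs(t, z):
    S, x1, x2 = z
    dS  = D * (S0 - S) - mu(S, m1, a1) * x1 / y1
    dx1 = (mu(S, m1, a1) - D) * x1 - mu(x1, m2, a2) * x2 / y2
    dx2 = (mu(x1, m2, a2) - D) * x2
    return [dS, dx1, dx2]

sol = solve_ivp(rhs, (0, 200), [S0, 0.1, 0.1])
print(sol.y[:, -1])                        # long-run state; positivity suggests persistence
```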

10.
Predicting agricultural pest population dynamics with a matter-element evaluation and recognition model   (cited 1 time: 0 self-citations, 1 by others)
The paper describes how to build a matter-element evaluation and recognition model and explores its application to predicting agricultural pest population dynamics. The model fits the historical data with an accuracy of 92.31%, and a trial forecast using 1995 as an independent sample agrees with the observed occurrence. The matter-element evaluation model is therefore a good model for predicting agricultural pest population dynamics.

11.
In this paper, error analysis of three-dimensional marker coordinates reconstructed from noisy two-dimensional measurement in RSA was performed. Mathematical models to predict error propagation of focus position and object points were derived and computer simulations were performed to validate these models. Two clinical calibration cages were compared by testing the error propagation at each RSA step. The results revealed that errors of reconstructed object points were related to the focus position error, two-dimensional measurement error, position of focus and positions of object points, while errors of reconstructed focus position were determined by the two-dimensional measurement error, number of control points and location of the focus. The maximum difference between the mathematical model and the simulation for the assessment of errors of focus position was 14 µm and the maximum difference of object point positions was 1.1 µm. These differences were small and judged irrelevant, hence the simulations indicated that our models were accurate.
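The general recipe of validating an analytical error-propagation model against Monte Carlo simulation can be sketched on a toy mapping; the RSA reconstruction geometry itself is not reproduced, and the function and noise levels below are assumptions:

```python
import numpy as np

# Compare a first-order (delta-method) error-propagation prediction with direct
# Monte Carlo simulation, in the spirit of validating analytical models.
rng = np.random.default_rng(3)
f = lambda p: np.array([p[0] * p[2], p[1] * p[2]])        # toy 2D measurement -> 3D-style mapping
p0 = np.array([1.0, 2.0, 100.0])                          # nominal inputs
sigma = np.array([0.01, 0.01, 0.0])                       # noise on the 2D measurements only

# Delta method: Cov(f) ≈ J Cov(p) J^T with Jacobian J evaluated at p0
J = np.array([[p0[2], 0.0, p0[0]],
              [0.0, p0[2], p0[1]]])
cov_pred = J @ np.diag(sigma**2) @ J.T

# Monte Carlo: propagate noisy inputs through f directly
samples = np.array([f(p0 + rng.normal(scale=sigma)) for _ in range(20000)])
print(np.sqrt(np.diag(cov_pred)), samples.std(axis=0))    # predicted vs. simulated SDs
```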

12.
Huang X  Tebbs JM 《Biometrics》2009,65(3):710-718
Summary. We consider structural measurement error models for a binary response. We show that likelihood-based estimators obtained from fitting structural measurement error models with pooled binary responses can be far more robust to covariate measurement error in the presence of latent-variable model misspecification than the corresponding estimators from individual responses. Furthermore, despite the loss in information, pooling can provide improved parameter estimators in terms of mean-squared error. Based on these and other findings, we create a new diagnostic method to detect latent-variable model misspecification in structural measurement error models with individual binary response. We use simulation and data from the Framingham Heart Study to illustrate our methods.

13.
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even unfeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
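For a scalar integral, the first-order Laplace approximation that underlies the method can be sketched and checked against a case with a known answer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_log_integral(h, d2h, bounds=(-10, 10)):
    """First-order Laplace approximation to log ∫ exp(h(u)) du for scalar u:
    log I ≈ h(u_hat) + 0.5*log(2*pi) - 0.5*log(-h''(u_hat)), with u_hat the mode."""
    res = minimize_scalar(lambda u: -h(u), bounds=bounds, method="bounded")
    u_hat = res.x
    return h(u_hat) + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(-d2h(u_hat))

# Check on a Gaussian integrand, where the exact answer is known:
# ∫ exp(-u^2/(2*s^2)) du = sqrt(2*pi)*s  (here s = 2, so log I ≈ 1.612)
s = 2.0
h = lambda u: -u**2 / (2 * s**2)
d2h = lambda u: -1.0 / s**2
print(laplace_log_integral(h, d2h), np.log(np.sqrt(2 * np.pi) * s))
```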

14.
Summary. Missing data, measurement error, and misclassification are three important problems in many research fields, such as epidemiological studies. It is well known that missing data and measurement error in covariates may lead to biased estimation. Misclassification may be considered as a special type of measurement error, for categorical data. Nevertheless, we treat misclassification as a different problem from measurement error because statistical models for them are different. Indeed, in the literature, methods for these three problems were generally proposed separately given that statistical modeling for them is very different. The problem is more challenging in a longitudinal study with nonignorable missing data. In this article, we consider estimation in generalized linear models under these three incomplete data models. We propose a general approach based on expected estimating equations (EEEs) to solve these three incomplete data problems in a unified fashion. This EEE approach can be easily implemented and its asymptotic covariance can be obtained by sandwich estimation. Intensive simulation studies are performed under various incomplete data settings. The proposed method is applied to a longitudinal study of oral bone density in relation to body bone density.

15.
Motivated by an important biomarker study in nutritional epidemiology, we consider the combination of the linear mixed measurement error model and the linear seemingly unrelated regression model, hence Seemingly Unrelated Measurement Error Models. In our context, we have data on protein intake and energy (caloric) intake from both a food frequency questionnaire (FFQ) and a biomarker, and wish to understand the measurement error properties of the FFQ for each nutrient. Our idea is to develop separate marginal mixed measurement error models for each nutrient, and then combine them into a larger multivariate measurement error model: the two measurement error models are seemingly unrelated because they concern different nutrients, but aspects of each model are highly correlated. As in any seemingly unrelated regression context, the hope is to achieve gains in statistical efficiency compared to fitting each model separately. We show that if we employ a "full" model (fully parameterized), the combination of the two measurement error models leads to no gain over considering each model separately. However, there is also a scientifically motivated "reduced" model that sets certain parameters in the "full" model equal to zero, and for which the combination of the two measurement error models leads to considerable gain over considering each model separately, e.g., 40% decrease in standard errors. We use the Akaike information criterion to distinguish between the two possibilities, and show that the resulting estimates achieve major gains in efficiency. We also describe theoretical and serious practical problems with the Bayes information criterion in this context.

16.
Laboratory mice provide a versatile experimental model for studies of skeletal biomechanics. In order to determine the strength of the mouse skeleton, mechanical testing has been performed on a variety of bones using several procedures. Because of differences in testing methods, the data from previous studies are not comparable. The purpose of this study was to determine which long bone provides the values closest to the published material properties of bone, while also providing reliable and reproducible results. To do this, the femur, humerus, third metatarsal, radius, and tibia of both the low bone mass C57BL/6H (B6) and high bone mass C3H/HeJ (C3H) mice were mechanically tested under three-point bending. The biomechanical tests showed significant differences between the bones and between mouse strains for the five bones tested (p < 0.05). Computational models of the femur, metatarsal, and radius were developed to visualize the types of measurement error inherent in the three-point bending tests. The models demonstrated that measurement error arose from local deformation at the loading point, shear deformation and ring-type deformation of the cylindrical cross-section. Increasing the aspect ratio (bone length/width) improved the measurement of Young's modulus of the bone for both mouse strains (p < 0.01). Bones with the highest aspect ratio and largest cortical thickness to radius ratio were better for bending tests since less measurement error was observed in the computational models. Of the bones tested, the radius was preferred for mechanical testing because of its high aspect ratio, minimal measurement error, and low variability.
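The Euler-Bernoulli formula usually used to extract Young's modulus from a three-point bending test on an idealized hollow circular cross-section (which ignores exactly the shear and local-deformation errors discussed above) can be sketched as follows; the numbers are illustrative, not measured mouse-bone data:

```python
import math

def youngs_modulus_3pt(slope, span, outer_d, inner_d):
    """Euler-Bernoulli estimate of Young's modulus from three-point bending:
        E = (F/delta) * L^3 / (48 * I),  I = pi*(do^4 - di^4)/64.
    slope: stiffness F/delta from the linear region (N/mm); span: support span L (mm);
    outer_d, inner_d: outer/inner diameters of the hollow circular section (mm).
    Shear and local deformation at the loading point are neglected."""
    I = math.pi * (outer_d**4 - inner_d**4) / 64.0
    return slope * span**3 / (48.0 * I)     # MPa, given N and mm

# Illustrative numbers only
print(youngs_modulus_3pt(slope=200.0, span=6.0, outer_d=1.2, inner_d=0.6))
```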

17.
It has been well known that ignoring measurement error may result in substantially biased estimates in many contexts including linear and nonlinear regressions. For survival data with measurement error in covariates, there has been extensive discussion in the literature with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation‐extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error‐prone covariates are often not observable, any modeling assumption on such covariates has the risk of model misspecification, hence yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality for resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
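A minimal SIMEX sketch for a simple linear regression with known additive error variance (not the survival-model setting of the paper) shows the simulation and extrapolation steps:

```python
import numpy as np

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
    """Minimal SIMEX sketch for the slope of a simple linear regression when the
    covariate w = x + u carries classical additive error with known sd sigma_u.
    Remeasurement: add extra error of variance lambda*sigma_u^2, average the naive
    slope over B replicates, then extrapolate quadratically back to lambda = -1."""
    rng = np.random.default_rng(seed)
    lam = np.concatenate(([0.0], lambdas))
    slopes = []
    for l in lam:
        reps = []
        for _ in range(B if l > 0 else 1):
            w_b = w + rng.normal(scale=np.sqrt(l) * sigma_u, size=w.shape)
            reps.append(np.polyfit(w_b, y, 1)[0])        # naive OLS slope
        slopes.append(np.mean(reps))
    coef = np.polyfit(lam, slopes, 2)                    # quadratic in lambda
    return np.polyval(coef, -1.0)                        # extrapolate to lambda = -1

# Toy check: true slope 2; heavy measurement error attenuates the naive estimate
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = 2.0 * x + rng.normal(scale=0.5, size=2000)
w = x + rng.normal(scale=0.8, size=2000)
print(np.polyfit(w, y, 1)[0], simex_slope(w, y, sigma_u=0.8))
```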

18.
Abstract: Although previous research and theory have suggested that wild turkey (Meleagris gallopavo) populations may be subject to some form of density dependence, there has been no effort to estimate and incorporate a density-dependence parameter into wild turkey population models. To estimate a functional relationship for density dependence in wild turkey, we analyzed a set of harvest-index time series from 11 state wildlife agencies. We tested for lagged correlations between annual harvest indices using partial autocorrelation analysis. We assessed the ability of the density-dependent theta-Ricker model to explain harvest indices over time relative to exponential or random walk growth models. We tested the homogeneity of the density-dependence parameter estimates (θ) from 3 different harvest indices (spring harvest no., reported harvest/effort, survey harvest/effort) and calculated a weighted average based on each estimate's variance and its estimated covariance with the other indices. To estimate the potential bias in parameter estimates from measurement error, we conducted a simulation study using the theta-Ricker with known values and lognormally distributed measurement error. Partial autocorrelation function analysis indicated that harvest indices were significantly correlated only with their value at the previous time step. The theta-Ricker model performed better than the exponential growth or random walk models for all 3 indices. Simulation of known parameters and measurement error indicated a strong upward bias in the density-dependent parameter estimate with increasing measurement error. The average density-dependence estimate, corrected for measurement error, ranged over 0.25 ≤ θC ≤ 0.49, depending on the amount of measurement error and assumed spring harvest rate. We infer that density dependence is nonlinear in wild turkey, where growth rates are maximized at 39-42% of carrying capacity. The annual yield produced by density-dependent population growth will tend to be less than that caused by extrinsic environmental factors. This study indicates that both density-dependent and density-independent processes are important to wild turkey population growth, and we make initial suggestions on incorporating both into harvest management strategies.
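A short sketch of the theta-Ricker map (with illustrative parameters, not the paper's estimates) shows how nonlinear density dependence shifts the density at which annual production peaks:

```python
import numpy as np

# Theta-Ricker map: N[t+1] = N[t]*exp(r*(1 - (N[t]/K)**theta)).
# Parameter values here are illustrative, not the paper's estimates.
r, K, theta = 0.4, 1000.0, 0.35

def theta_ricker_step(N):
    return N * np.exp(r * (1.0 - (N / K) ** theta))

# Density at which annual production (N[t+1] - N[t]) is maximized
N = np.linspace(1.0, K, 2000)
production = theta_ricker_step(N) - N
N_opt = N[np.argmax(production)]
print(f"production is maximized near {100 * N_opt / K:.0f}% of carrying capacity")
```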

19.
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error.
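The simplest i.i.d. case illustrates why classical error attenuates a slope while Berkson error does not, and how a method-of-moments correction works; the mixed-model, autocorrelated setting of the paper is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, sigma_u = 5000, 1.5, 0.7
x = rng.normal(size=n)                        # true exposure
y = beta * x + rng.normal(scale=1.0, size=n)

# Classical error: w = x + u attenuates the slope by lambda = var(x)/(var(x)+var(u))
w = x + rng.normal(scale=sigma_u, size=n)
naive = np.polyfit(w, y, 1)[0]
lam = (np.var(w) - sigma_u**2) / np.var(w)    # method-of-moments reliability estimate
corrected = naive / lam

# Berkson error: x = w_fixed + u leaves the slope approximately unbiased
w_fixed = rng.normal(size=n)
x_berkson = w_fixed + rng.normal(scale=sigma_u, size=n)
y_berkson = beta * x_berkson + rng.normal(scale=1.0, size=n)
berkson_slope = np.polyfit(w_fixed, y_berkson, 1)[0]

print(naive, corrected, berkson_slope)        # attenuated, ~1.5, ~1.5
```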

20.
To present basic information on the interobserver precision and accuracy of 32 selected anthropometric measurement items, six observers measured each of 37 subjects once on each of two days. The data were analyzed using ANOVA, and mean absolute bias, standard deviation of bias, and mean absolute bias in standard deviation units were used as measures of bias. By comparing the results of the two days, the effects of practice on measurement errors were also investigated. Variance was overestimated by more than 10% in five measurements. Interobserver error variance and random error variance were highly correlated with each other. Measures of the bias were significantly correlated with interobserver and especially with random error variances. The interobserver errors were drastically reduced on the second day for the measurement items in which the causes of the interobserver errors could be specified. It was speculated that even when the definitions of the landmarks and measurement items were clear, the ambiguity in the practical procedures for locating landmarks, applying instruments, and so on, permitted each observer to develop his or her own measurement technique, which in turn caused interobserver errors. To minimize interobserver and random errors, the standardization of measurement technique should be extended to the details of the practical procedures.
