Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
刘玮  辛美丽  吕芳  刘梦侠  丁刚  吴海一 《生态学报》2018,38(6):2031-2040
Sargassum thunbergii is a major constructor of intertidal seaweed beds, but which statistical model best suits studies of its abundance distribution has remained unclear. Fifteen 25 m² plots at Neizhe Island, Rongcheng, Shandong were surveyed and counted, the accuracies of the arithmetic-mean model, the inverse-distance-weighting (IDW) model and the ordinary-kriging model were compared, and the effects of population density, clumping index and coverage on model accuracy were analysed. The IDW model was the most stable and had the lowest mean errors (mean absolute error 39.1 individuals, root-mean-square error 53.3 individuals, deviation rate 13.0%), whereas the arithmetic-mean model fluctuated the most and had the highest mean errors (mean absolute error 53.8 individuals, root-mean-square error 65.3 individuals, deviation rate 14.6%). Population density and coverage had no significant effect on model accuracy (P > 0.05), but the clumping index significantly affected the mean absolute error and root-mean-square error of all three models (P < 0.05). Overall, the accuracy differences among the three models were not pronounced, and some accuracy indicators were affected by the clumping index. The IDW and ordinary-kriging models were more stable, had smaller mean errors, and both reflected the spatial distribution of S. thunbergii populations, giving them an advantage for computing its abundance distribution.
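The error metrics used above (mean absolute error and root-mean-square error of an interpolated surface against observed counts) can be sketched in a few lines. The grid coordinates and quadrat counts below are hypothetical, not the Rongcheng survey data:

```python
import math

def idw_estimate(known, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y) from (xi, yi, value) samples."""
    num = den = 0.0
    for xi, yi, v in known:
        d = math.hypot(x - xi, y - yi)
        if d == 0.0:
            return v  # exact hit: return the observed value
        w = d ** -power
        num += w * v
        den += w
    return num / den

def mae_rmse(observed, predicted):
    """Mean absolute error and root-mean-square error of predictions."""
    errs = [o - p for o, p in zip(observed, predicted)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse

# Hypothetical quadrat counts (individuals) at the corners of a 5 m grid cell
known = [(0, 0, 40), (5, 0, 60), (0, 5, 55), (5, 5, 45)]
est = idw_estimate(known, 2.5, 2.5)  # centre point, equidistant from all samples
```

In practice such models are typically scored by leave-one-out cross-validation over the surveyed quadrats, predicting each held-out count from the rest.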

2.
Surface dead fuel moisture content is a key indicator in fire-weather and fire-behaviour forecasting. Based on time-lag equilibrium moisture content methods (the Nelson and Simard methods) and meteorological regression, hourly measurements of surface dead fuel moisture content were made from September to October 2010 in Pangu Forest Farm, Daxing'anling, Heilongjiang Province, in aspen-white birch mixed stands of different canopy densities, a pure Picea koraiensis stand, and a cut-over site (formerly a 1:1 Mongolian pine-white birch mixed stand). Prediction models were built, their prediction errors obtained, and each model was then used to extrapolate the surface dead fuel moisture content of the other stands to assess extrapolation accuracy. The mean absolute error, mean relative error and root-mean-square error of the Nelson equilibrium moisture content model (0.0154, 0.104 and 0.0226) were lower than those of the Simard method (0.0185, 0.117 and 0.0256) and of meteorological regression (0.0222, 0.150 and 0.0331). For extrapolation, the errors of meteorological regression (0.0410, 0.0300 and 0.0740) were lower than those of the Simard method (0.610, 0.492 and 0.846), but both were higher than those of the Nelson method (0.034, 0.021 and 0.0660), indicating that the hourly-step time-lag equilibrium moisture content approach, especially the Nelson method, suits the measured stands in Daxing'anling. Although extrapolation cannot reduce error, it helps extend the prediction accuracy and applicability of existing models to different stand conditions or larger spatial scales. Model-building and extrapolation errors were related to tree species and canopy density, so an appropriate equilibrium moisture content model should be chosen for each stand and site.
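Both the Nelson and Simard approaches are time-lag models in which fuel moisture relaxes exponentially toward an equilibrium moisture content. A minimal hourly-step sketch, with made-up values for the equilibrium moisture E and the time lag tau rather than the paper's fitted parameters:

```python
import math

def step_moisture(m, E, tau_hours, dt_hours=1.0):
    """One hourly step of a time-lag model: fuel moisture m relaxes
    exponentially toward the equilibrium moisture content E with time lag tau."""
    return E + (m - E) * math.exp(-dt_hours / tau_hours)

m = 0.30        # initial moisture content (fraction of dry weight, invented)
E = 0.10        # equilibrium moisture content from weather (invented)
tau = 10.0      # time lag in hours, e.g. a 10-hour fuel class
series = []
for _ in range(24):   # one day of hourly steps under constant weather
    m = step_moisture(m, E, tau)
    series.append(m)
```

In the full methods, E itself is recomputed each hour from temperature and humidity, which is where the Nelson and Simard variants differ.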

3.
For patients whose characteristics fall outside standard anthropometric values, patient-specific musculoskeletal modelling becomes crucial for clinical diagnosis and follow-up. However, patient-specific modelling using imaging techniques and motion capture systems is subject to experimental errors. The aim of this study was to quantify these experimental errors when building a patient-specific musculoskeletal model. CT scan data were used to personalise the geometrical model and its inertial properties for a subject with post-polio residual paralysis. After a gait-based experimental protocol, kinematic data were measured using a VICON motion capture system with six infrared cameras. The musculoskeletal model was computed using a direct/inverse algorithm (LifeMod software). A first source of error was identified in the segmentation procedure, in relation to the calculation of personalised inertial parameters. The second source of error was subject-related, as it depended on the reproducibility of performing the same type of gait. The impact on the kinematics, kinetics and muscle forces resulting from the musculoskeletal modelling was quantified using relative errors and the absolute root-mean-square error. Concerning the segmentation procedure, the kinematic results were not sensitive to the errors (relative error < 1%), but a strong influence was noted on the kinetic results (deviation up to 71%). Furthermore, the reproducibility error showed a significant influence (relative mean error varying from 5 to 30%). The paper demonstrates that, in patient-specific musculoskeletal modelling, variations due to experimental errors derived from imaging techniques and motion capture need to be both identified and quantified; it can therefore be used as a guideline.

4.
In traditional localization methods for synthetic aperture radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target location; however, DCE error strongly degrades localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of BRS estimation error on localization accuracy is analysed. First, using the information of each transmitter/receiver (T/R) pair and the target in the SAR image, model functions of the T/R pairs are constructed; each model function attains its maximum on the ellipse that is the iso-range contour for its T/R pair. Second, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Third, the target function is optimized with a gradient-descent method to obtain the position of the target. During the iterations, principal component analysis is applied to guarantee the accuracy of the method and improve computational efficiency. The proposed method uses only the BRSs of a target in several focused images from multistatic SAR and therefore greatly improves localization accuracy compared with traditional SAR localization methods. The effectiveness of the approach is validated by a simulation experiment.
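A toy version of the no-DCE idea can be sketched by minimising the squared mismatch between predicted and measured bistatic range sums with gradient descent (numerical gradient). The T/R geometry and target are invented, and this plain least-squares cost stands in for the paper's summed model functions:

```python
import math

def range_sum(p, t, r):
    """Bistatic range sum: transmitter->target + target->receiver distance."""
    return math.dist(p, t) + math.dist(p, r)

def locate(pairs, brs, p0, lr=0.01, iters=5000, h=1e-6):
    """Gradient descent (central-difference gradient) on the squared mismatch
    between predicted and measured bistatic range sums."""
    p = list(p0)
    def cost(q):
        return sum((range_sum(q, t, r) - b) ** 2 for (t, r), b in zip(pairs, brs))
    for _ in range(iters):
        grad = []
        for k in range(2):
            q_hi = p[:]; q_hi[k] += h
            q_lo = p[:]; q_lo[k] -= h
            grad.append((cost(q_hi) - cost(q_lo)) / (2 * h))
        p = [pk - lr * gk for pk, gk in zip(p, grad)]
    return p

# Invented geometry: three transmitter/receiver pairs, true target at (3, 4)
target = (3.0, 4.0)
pairs = [((0, 0), (10, 0)), ((0, 10), (10, 10)), ((-5, 5), (15, 5))]
brs = [range_sum(target, t, r) for t, r in pairs]   # noise-free measurements
est = locate(pairs, brs, p0=(1.0, 1.0))
```

With noise-free range sums and well-spread pairs, the minimiser coincides with the target; BRS noise shifts it, which is the error the paper analyses.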


6.
Rice husk, a lignocellulosic by-product of the agroindustry, was treated with alkali and used as a low-cost adsorbent for the removal of safranin from aqueous solution in a batch adsorption procedure. To estimate the equilibrium parameters, the equilibrium adsorption data were analyzed using the following two-parameter isotherms: Freundlich, Langmuir and Temkin. Linear and nonlinear regression methods for selecting the optimum adsorption isotherm were compared on the experimental data. Six linearized isotherm models (including four linearized Langmuir models) and three nonlinear isotherm models are thus discussed in this paper. To determine the best-fit isotherm predicted by each method, seven error functions were used, namely the coefficient of determination (r²), the sum of the squares of the errors (SSE), the sum of the absolute errors (SAE), the average relative error (ARE), the hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD) and the chi-square test (χ²). It was concluded that the nonlinear method is a better way to obtain the isotherm parameters, and the data were in good agreement with the Langmuir isotherm model.
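The Langmuir isotherm and one of its linearizations can be sketched as follows. With noise-free synthetic data the linearized fit recovers the parameters exactly; with real data the transformation distorts the error weighting, which is why the abstract favours the nonlinear fit (values are illustrative, not the rice-husk data):

```python
def langmuir(C, qm, KL):
    """Langmuir isotherm: q = qm*KL*C / (1 + KL*C)."""
    return qm * KL * C / (1 + KL * C)

def ols(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def fit_langmuir_linearised(Cs, qs):
    """Type-I linearisation: C/q = C/qm + 1/(qm*KL)."""
    slope, icept = ols(Cs, [c / q for c, q in zip(Cs, qs)])
    qm = 1 / slope
    KL = slope / icept
    return qm, KL

# Noise-free synthetic data (illustrative units: C in mg/L, q in mg/g)
qm_true, KL_true = 20.0, 0.5
Cs = [5.0, 10.0, 20.0, 40.0, 80.0]
qs = [langmuir(c, qm_true, KL_true) for c in Cs]
qm_hat, KL_hat = fit_langmuir_linearised(Cs, qs)
sse = sum((q - langmuir(c, qm_hat, KL_hat)) ** 2 for c, q in zip(Cs, qs))
```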

7.
Purpose: To employ magnetic fluid hyperthermia (MFH) simulations in the precise computation of specific absorption rate functions, SAR(T), and in the evaluation of the predictive capacity of different SAR calculation methods.
Methods: Magnetic fluid hyperthermia experiments were carried out using magnetite-based nanofluids. The respective SAR values were estimated with four calculation methods: the initial-slope method, the Box-Lucas method, the corrected-slope method and the incremental analysis method (INCAM). A novel numerical model combining the heat-transfer equations and the Navier-Stokes equations was developed to reproduce the experimental heating process. To address variations in heating efficiency with temperature, the power dissipation was expressed as a Gaussian function of temperature, and the Levenberg-Marquardt optimization algorithm was employed to compute the function parameters and determine the function's effective branch within each measurement's temperature range. The power dissipation function was then reduced to the respective SAR function.
Results: The INCAM exhibited the lowest relative errors, ranging between 0.62% and 15.03% with respect to the simulations. SAR(T) functions exhibited significant variations, up to 45%, within the MFH-relevant temperature range.
Conclusions: The examined calculation methods are not suitable for accurately quantifying the heating efficiency of a magnetic fluid. Numerical models can be exploited to compute SAR(T) effectively and contribute to the development of robust hyperthermia treatment planning applications.
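The initial-slope SAR estimate can be sketched against a synthetic Box-Lucas heating curve T(t) = T0 + A(1 - exp(-B*t)). Because the curve already bends within the fitting window, the finite-window slope underestimates the true initial slope A*B, one reason such methods disagree (all parameter values here are invented):

```python
import math

def box_lucas_T(t, A, B, T0=25.0):
    """Box-Lucas heating curve: T(t) = T0 + A*(1 - exp(-B*t))."""
    return T0 + A * (1 - math.exp(-B * t))

def initial_slope(times, temps, n=3):
    """Initial-slope method: least-squares slope over the first n samples."""
    xs, ys = times[:n], temps[:n]
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def sar_from_slope(slope, c_p, m_fluid, m_np):
    """SAR = c_p * (m_fluid/m_np) * dT/dt, assuming an adiabatic sample."""
    return c_p * (m_fluid / m_np) * slope

A, B = 20.0, 0.01                                # invented curve parameters
times = [float(t) for t in range(0, 300, 10)]    # seconds
temps = [box_lucas_T(t, A, B) for t in times]
slope0 = initial_slope(times, temps)   # underestimates the true slope A*B = 0.2 K/s
sar = sar_from_slope(slope0, c_p=4186.0, m_fluid=1e-3, m_np=5e-6)
```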

8.
The use of model-fitting in the interpretation of 'dual' uptake isotherms   (Cited: 2; self-citations: 0; citations by others: 2)
Abstract. Published data on the concentration dependence of the uptake rate (uptake isotherms) of K⁺, Na⁺, Cl⁻, SO₄²⁻ and L-lysine in barley roots, and of glucose and 3-O-methylglucose in potato tuber tissue, were re-examined. Inasmuch as these isotherms yield non-linear, concave-upward Eadie-Hofstee plots, they might have been termed 'dual' isotherms. In addition, all these isotherms have been considered to display discontinuous transitions in gradient. The following models, which yield continuous isotherms, were fitted to the isotherms: (1) the sum of a Michaelis-Menten term and a linear term; (2) the sum of two Michaelis-Menten terms; (3) the sum of two Michaelis-Menten terms and a linear one. Goodness of fit was judged from: (i) the weighted mean square of deviates; (ii) the standard errors of the kinetic parameters; (iii) the algebraic significance of the terms; (iv) a Rankits plot of the residuals; (v) a runs test on the residuals. For the precise and detailed isotherms of SO₄²⁻ uptake, only model (3) gave a fit that was satisfactory in all respects; there appeared to be no reason to consider these isotherms multiphasic. The same conclusion was reached for the L-lysine uptake isotherms. For the other isotherms the results were less conclusive. The K⁺ and Na⁺ isotherms could, at any rate, be described satisfactorily by a continuous model, the best fit being obtained with model (2). The uptake isotherms of Cl⁻ and 3-O-methylglucose were best described by model (2), and that of glucose by model (3), only the result of the runs test being unsatisfactory. It is concluded that there is hardly any evidence that the presumed 'jumps', discontinuities or inflections in the gradient of uptake isotherms are not due to experimental error in the data. It is suggested that many uptake isotherms may be described by model (3), although the reason for this is still incompletely understood.
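Model (3) and the runs test (criterion v) can be sketched as follows; the residual vectors are contrived to show the two extremes the test distinguishes, well-scattered versus systematically blocked residuals:

```python
import math

def model3(S, V1, K1, V2, K2, k):
    """Model (3): sum of two Michaelis-Menten terms plus a linear term."""
    return V1 * S / (K1 + S) + V2 * S / (K2 + S) + k * S

def runs_test(residuals):
    """Wald-Wolfowitz runs test on residual signs: returns (runs, z).
    Too few runs (z well below 0) suggests systematic lack of fit."""
    signs = [r >= 0 for r in residuals if r != 0]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    mu = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / \
          ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return runs, (runs - mu) / math.sqrt(var)

r_scattered = [1, -1, 1, -1, 1, -1, 1, -1]   # residuals alternating in sign
r_blocked = [1, 1, 1, 1, -1, -1, -1, -1]     # residuals grouped: misfit pattern
```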

9.
Purpose: To examine whether it is essential to apply correction factors for ion recombination (kS) to percentage depth dose (PDD) measurements and for the volume-averaging effect (kvol) to ensure accurate absolute dose calibration of flattening-filter-free (FFF) beams for the most commonly used ionization chambers.
Methods: We surveyed medical physicists worldwide (n = 159) to identify the five most common ionization chamber combinations used for absolute and relative reference dosimetry of FFF beams. We then assessed the overall absolute dose calibration error for FFF beams of the Siemens Artiste and Varian TrueBeam linear accelerators resulting from failing to apply the correction factors kS in the PDD(10) and kvol to those chamber combinations.
Results: All the chamber combinations examined (the Farmer PTW 30013 ionization chamber used for absolute dosimetry, and the PTW 31010, PTW 30013, IBA CC04, IBA CC13 and PTW 31021 ionization chambers used for PDD curve measurements) showed non-negligible errors (≥0.5%). The largest error (1.6%) was found for the combination of the Farmer PTW 30013 chamber with the IBA CC13 chamber, the most widely used combination in our survey.
Conclusions: Based on our findings, we strongly recommend assessing the impact of failing to apply the correction factors kS in the PDD(10) and kvol before using any chamber type for FFF beam reference dosimetry.

10.
A general Akaike-type criterion for model selection in robust regression   (Cited: 2; self-citations: 0; citations by others: 2)
Burman, P.; Nolan, D. Biometrika 1995, 82(4): 877-886
Akaike's procedure (1970) for selecting a model minimises an estimate of the expected squared error in predicting new, independent observations. This selection criterion was designed for models fitted by least squares. A different model-fitting technique, such as least absolute deviation regression, requires an appropriate model selection procedure. This paper presents a general Akaike-type criterion applicable to a wide variety of loss functions for model fitting. It requires only that the function be convex with a unique minimum, and twice differentiable in expectation. Simulations show that the estimators proposed here approximate their respective prediction errors well.

12.
Abstract. Mac Nally (1996), in describing the application of 'hierarchical partitioning' in regression modelling of the species richness of breeding passerine birds, with the species count as response variable, rejects the use of Poisson regression in favour of normal-errors regression on an incorrect basis. Mac Nally uses a function of the residual sum of squares, the root-mean-square prediction error (RMSPE), calculated from the predictions of each regression, and rejects the Poisson regression because its RMSPE was 20% larger. This note points out that the RMSPE will always be larger for the Poisson regression, provided the same link function and linear predictor are used, even if the response is truly Poisson. References to appropriate methods for determining the most suitable response distribution and link function in the context of generalized linear models are given.

13.
Introduction: The acromion marker cluster (AMC) is a non-invasive scapular motion tracking method, but it lacks testing in clinical populations, where unique challenges may arise. This investigation assessed the utility of the AMC approach in a compromised clinical population.
Methods: The upper bodies of breast cancer survivors (BCS) and controls were tracked via motion capture, and scapular landmarks were palpated and recorded using a digitizer at static postures from neutral to maximum elevation. The AMC tracked the scapula during dynamic maximum arm abduction. Both single-calibration (SC) and double-calibration (DC) methods were applied to calculate scapular angles. The influences of calibration method, elevation and group on mean and absolute error were assessed with two-way fixed ANOVAs with interactions (p < 0.05). Root-mean-square errors (RMSE) were calculated and compared.
Results: DC improved AMC estimation of palpated scapular orientation over SC, especially at higher arm elevations; RMSE averaged 11° higher for SC than for DC at maximum elevation, but the methods differed by only 2.2° at 90° elevation. DC of the AMC yielded mean error values of ∼5-10°, approximating the errors reported for the AMC in young, lean adults.
Conclusions: The AMC with DC is a non-invasive method with acceptable error for measuring scapular motion of BCS and age-matched controls.

14.
The performance of four parameter-estimation procedures for the adjustable parameters of the Michaelis-Menten model, the maximum initial rate Vmax and the Michaelis-Menten constant Km, is compared: the Lineweaver-Burk transformation (L-B), the Eadie-Hofstee transformation (E-H), the Eisenthal-Cornish-Bowden transformation (ECB), and Hsu-Tseng random search (H-T). Analysis of simulated data reveals the following: (i) Vmax can be estimated more precisely than Km. (ii) The sum of squared errors, from smallest to largest, follows the sequence H-T, E-H, ECB, L-B. (iii) Considering the sum of squared errors, relative error and computing time, the overall performance, from best to worst, follows the sequence H-T, L-B, E-H, ECB. (iv) E-H and ECB perform at the same level. (v) L-B and E-H are appropriate for precisely measured data; H-T should be adopted for data with high error levels. (vi) Increasing the number of data points has a positive effect on the performance of H-T, and a negative effect on the performance of L-B, E-H and ECB.
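The two linearizing transformations can be sketched as ordinary least squares on transformed coordinates. With noise-free data both recover Vmax and Km exactly; the procedures differ only in how they propagate measurement error (the kinetic values below are illustrative):

```python
def mm_rate(S, Vmax, Km):
    """Michaelis-Menten initial rate."""
    return Vmax * S / (Km + S)

def ols(xs, ys):
    """Ordinary least squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def lineweaver_burk(Ss, vs):
    """L-B: 1/v = (Km/Vmax)*(1/S) + 1/Vmax."""
    slope, icept = ols([1 / s for s in Ss], [1 / v for v in vs])
    Vmax = 1 / icept
    return Vmax, slope * Vmax

def eadie_hofstee(Ss, vs):
    """E-H: v = Vmax - Km*(v/S)."""
    slope, icept = ols([v / s for s, v in zip(Ss, vs)], vs)
    return icept, -slope   # (Vmax, Km)

Vmax_true, Km_true = 10.0, 2.0
Ss = [0.5, 1.0, 2.0, 4.0, 8.0]
vs = [mm_rate(s, Vmax_true, Km_true) for s in Ss]
```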

15.
Parameter optimisation of a lake water-quality model based on a real-coded genetic algorithm   (Cited: 1; self-citations: 0; citations by others: 1)
郭静  陈求稳  张晓晴  李伟峰 《生态学报》2012,32(24):7940-7947
Reasonable parameter values determine a model's simulation performance, so once the model structure for a study area is fixed, its parameters must be optimised. The lake water-quality model SALMO (Simulation by means of an Analytical Lake Model) describes nutrient cycling and food-chain dynamics in lakes with ordinary differential equations; it covers multiple ecological processes and contains 104 parameters. With so many parameters, traditional parameter-optimisation methods are unsuitable. Using 2005 data from Meiliang Bay, Lake Taihu, the relatively sensitive parameters of SALMO were optimised with a real-coded genetic algorithm, and the optimised model was then used to simulate water quality in Meiliang Bay in 2006. Comparing model performance before and after optimisation showed that the genetic algorithm calibrates SALMO efficiently: simulation accuracy improved significantly, and the optimised model better captured water-quality dynamics in Meiliang Bay.
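A minimal real-coded genetic algorithm of the general kind used here can be sketched as below. This is a generic illustration (tournament selection, BLX-0.5 crossover, Gaussian mutation, elitism) on a toy two-parameter problem, not the SALMO calibration itself:

```python
import random

def real_coded_ga(cost, bounds, pop_size=30, gens=100, pc=0.9, pm=0.1, seed=1):
    """Minimal real-coded GA: tournament selection, BLX-0.5 crossover,
    Gaussian mutation, with elitism. A generic sketch, not the SALMO code."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, k):
        lo, hi = bounds[k]
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=cost)
    for _ in range(gens):
        new = [best[:]]                                  # keep the elite
        while len(new) < pop_size:
            a = min(rng.sample(pop, 3), key=cost)        # tournament of 3
            b = min(rng.sample(pop, 3), key=cost)
            child = a[:]
            if rng.random() < pc:                        # BLX-0.5 blend crossover
                child = [clip(ai + rng.uniform(-0.5, 1.5) * (bi - ai), k)
                         for k, (ai, bi) in enumerate(zip(a, b))]
            if rng.random() < pm:                        # mutate one gene
                k = rng.randrange(dim)
                child[k] = clip(child[k] + rng.gauss(0, 0.1), k)
            new.append(child)
        pop = new
        best = min(pop, key=cost)
    return best

# Toy "calibration": recover two parameters by minimising squared error
target = (0.3, 1.7)
def cost(p):
    return (p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2

best = real_coded_ga(cost, bounds=[(0.0, 1.0), (0.0, 3.0)])
```

In a real calibration the cost function would run the water-quality model and score its output against observations, which is far more expensive than this toy objective.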

16.
Motion capture of all degrees of freedom of the hand during daily living activities remains challenging. Instrumented gloves are an attractive option because of their ease of use, but subject-specific glove calibration is lengthy and has limitations for individuals with disabilities. Here, a calibration procedure is presented that requires recording only a single simple hand position, allowing the kinematics of 16 hand joints to be captured during daily life activities even for severely injured hands. 'Across-subject gains' were obtained by averaging the gains from a detailed subject-specific calibration involving 44 registrations, repeated three times on multiple days with 6 subjects. In an additional 4 subjects, the joint angles resulting from the 'across-subject calibration' and from the subject-specific calibration were compared. Global errors of the 'across-subject calibration' relative to the detailed subject-specific protocol were small (bias: 0.49°; precision: 4.45°) and comparable to those obtained by repeating the detailed protocol with the same subject on multiple days (0.36°; 3.50°). Furthermore, in one subject, the 'across-subject calibration' was compared directly with another fast calibration method, both expressed relative to a videogrammetric protocol as gold standard, and yielded better results.
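The idea of averaging per-subject calibration gains into 'across-subject gains' can be sketched with a through-the-origin linear sensor model (all readings below are invented):

```python
def subject_gain(raws, refs):
    """Least-squares gain g for ref ≈ g * raw (line through the origin)."""
    return sum(r * f for r, f in zip(raws, refs)) / sum(r * r for r in raws)

def across_subject_gain(per_subject_gains):
    """Average the per-subject gains into one reusable gain."""
    return sum(per_subject_gains) / len(per_subject_gains)

# Invented calibration data for one joint: raw sensor readings vs reference angles
raws = [[10, 20, 30], [12, 24, 36], [8, 16, 24]]     # three subjects
refs = [[15, 30, 45], [15, 30, 45], [15, 30, 45]]    # reference angles (degrees)
gains = [subject_gain(r, f) for r, f in zip(raws, refs)]
g_avg = across_subject_gain(gains)
angle = g_avg * 20   # apply the across-subject gain to a new subject's raw reading
```

The residual spread of the per-subject gains around g_avg is what shows up as the bias and precision figures quoted in the abstract.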

17.
Abstract. Genotyping error, often associated with low-quantity/quality DNA samples, is an important issue when using genetic tags to estimate abundance by capture-mark-recapture (CMR). dropout, an MS-Windows program, identifies both loci and samples likely to contain errors affecting CMR estimates. dropout uses a 'bimodal test', which enumerates the number of loci differing between each pair of samples, and a 'difference in capture history' (DCH) test to determine the loci producing the most errors. Importantly, the DCH test allows one to determine that a data set is error-free. dropout was evaluated in McKelvey & Schwartz (2004) and is now available online.
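The core of the 'bimodal test', counting the loci at which each pair of samples differs, can be sketched as follows (the genotypes are invented, and dropout's actual implementation and statistics are richer):

```python
from itertools import combinations

def loci_differences(s1, s2):
    """Number of loci at which two multilocus genotypes differ
    (None marks a failed amplification and is skipped)."""
    return sum(1 for a, b in zip(s1, s2)
               if a is not None and b is not None and a != b)

def pairwise_difference_counts(samples):
    """Histogram of per-pair locus differences. Genotyping error tends to add
    a secondary mode at 1-2 mismatches next to the zero-mismatch recaptures."""
    counts = {}
    for s1, s2 in combinations(samples, 2):
        d = loci_differences(s1, s2)
        counts[d] = counts.get(d, 0) + 1
    return counts

samples = [
    ("AA", "BB", "CC"),
    ("AA", "BB", "CC"),   # recapture of the same individual
    ("AA", "BB", "CD"),   # same individual with one (assumed) allelic error
    ("AB", "BC", "DD"),   # a different individual
]
hist = pairwise_difference_counts(samples)
```

Pairs differing at only one or two loci are suspicious: a true recapture corrupted by a genotyping error would otherwise be counted as a new individual, inflating the CMR abundance estimate.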

19.
Many scientific instruments use multiple-element detectors, e.g. CCDs or photodiode arrays, to monitor the change in position of an optical pattern. For example, instruments for affinity biosensing based on surface plasmon resonance (SPR) or a resonant mirror are equipped with such detectors. An important and desired property of these bioanalytical instruments is that the calculated movement or change in shape follows the true change. This is often not the case, and the result may be linearity errors and sensitivity errors. The sensitivity is normally defined as the slope of the calibration curve. A new parameter is introduced to account for the linearity errors: the sensitivity deviation, defined as the deviation from the undistorted slope of the calibration curve. The linearity error and the sensitivity deviation are intimately related, and the sensitivity deviation may lead to misinterpretation of kinetic data, mass-transport limitations and concentration analyses. Because the linearity errors are small (e.g. 10 pg/mm² of biomolecules on the sensor surface) relative to the dynamic range (e.g. 30,000 pg/mm²), they can be difficult to discover. However, they are often not negligible relative to a typical response (e.g. 0-100 pg/mm²) and may therefore cause serious problems. A method for detecting linearity errors is outlined. Furthermore, this paper demonstrates how integral linearity errors of less than 1% can result in a sensitivity deviation of 10%, a value that in our opinion cannot be ignored in biospecific interaction analysis (BIA). It should also be stressed that this phenomenon occurs in other instruments using array detectors as well.
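The gap between a small integral linearity error and a much larger sensitivity deviation can be reproduced numerically: a calibration curve with a sub-1% periodic distortion already shows local slope deviations approaching 10% (the response values are synthetic):

```python
import math

def local_slopes(xs, ys):
    """Slope of each calibration-curve segment."""
    return [(y2 - y1) / (x2 - x1)
            for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:]))]

def sensitivity_deviation(xs, ys, true_slope):
    """Per-segment deviation (%) of the measured slope from the undistorted slope."""
    return [100 * (s - true_slope) / true_slope for s in local_slopes(xs, ys)]

# Synthetic response with a small periodic linearity error (units arbitrary)
xs = [float(x) for x in range(0, 101, 10)]
ys = [x + 0.5 * math.sin(x / 5.0) for x in xs]   # distortion stays within 0.5 units
dev = sensitivity_deviation(xs, ys, true_slope=1.0)
```

Here the curve never departs from the ideal line by more than 0.5% of the 100-unit span, yet individual segment slopes deviate by several percent, which is exactly the mismatch the abstract warns about.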

20.
The aim of this paper was to model the hand trajectory during grasping by extending to 3D the 2D beta-elliptic model of handwriting. The interest of this model is that it takes both geometric and velocity information into account. The method relies on decomposing the task-space trajectories into elementary bricks, each characterized by a velocity profile modelled with beta functions and a geometry modelled with elliptic shapes. A database of grasping movements was constructed, and the reconstruction errors (distance and curvature) were assessed for two variants of the beta-elliptic model (the 'quarter ellipse' and 'two tangent points' methods). The results showed that the two-tangent-points method outperforms the quarter-ellipse method, with average and maximum relative errors of 2.73% and 8.62%, respectively, and a maximum curvature error of 9.26%. This modelling approach can find interesting application in characterizing the improvement due to rehabilitation or teaching by quantitative measurement of hand trajectory parameters.
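A common form of the beta velocity profile, zero at the stroke's start and end with a bell-shaped peak in between, can be sketched as follows (the exponents and normalisation are illustrative; the paper's exact parameterisation may differ):

```python
def beta_velocity(t, t0, t1, p, q, K=1.0):
    """Beta-function velocity profile: zero at t0 and t1, bell-shaped between,
    with its peak at t_c = (p*t1 + q*t0)/(p + q)."""
    if not (t0 < t < t1):
        return 0.0
    tc = (p * t1 + q * t0) / (p + q)
    return K * ((t - t0) / (tc - t0)) ** p * ((t1 - t) / (t1 - tc)) ** q

# Sample one stroke's profile on [0, 1] with illustrative exponents p=2, q=3
ts = [i / 100 for i in range(101)]
vs = [beta_velocity(t, 0.0, 1.0, 2.0, 3.0) for t in ts]
```

Choosing p < q puts the velocity peak early in the stroke, the asymmetric bell typical of reaching movements; the elliptic part of the model then supplies the spatial path along which this speed profile is traced.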


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号