Similar Literature
20 similar documents found.
1.
A method to quantify the error probability of the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange is introduced. The types of errors due to statistical inaccuracies in noise voltage measurements are classified and the error probability is calculated. The most interesting finding is that the error probability decays exponentially with the duration of the single-bit exchange time window. The results indicate that the error probability of the exchanged bits can be made small enough that error-correction algorithms are not required. The results are demonstrated with practical considerations.
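The exponential decay of the bit-error probability with averaging time can be illustrated with a toy Monte Carlo, not the paper's actual KLJN model: deciding a bit amounts to classifying a noise variance level from a finite record, and the misclassification rate of the sample variance shrinks rapidly as the window grows. The two noise levels and the threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def error_probability(n_samples, trials=2000, v_low=1.0, v_high=2.0):
    """Fraction of trials in which the sample variance of the 'low' noise
    level is misclassified as 'high' (threshold at the geometric mean).
    This mimics deciding a KLJN bit from a finite noise-voltage record."""
    threshold = np.sqrt(v_low * v_high)
    x = rng.normal(0.0, np.sqrt(v_low), size=(trials, n_samples))
    return float(np.mean(x.var(axis=1) > threshold))

errs = [error_probability(n) for n in (10, 40, 160)]
print(errs)  # the error probability shrinks rapidly as the window grows
```

Quadrupling the window length cuts the error probability by far more than a factor of four, consistent with the roughly exponential decay reported.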

2.
Question: Predictive vegetation modelling relies on environmental variables that are usually derived from a base data set with some level of error, and this error is propagated to any subsequently derived environmental variables. The question for this study is: what is the level of error and uncertainty in environmental variables arising from error propagated from a Digital Elevation Model (DEM), and how does it vary between direct and indirect variables? Location: Kioloa region, New South Wales, Australia. Methods: The level of error in a DEM is assessed and used to develop an error model for analysing error propagation to derived environmental variables. We tested both indirect (elevation, slope, aspect, topographic position) and direct (average air temperature, net solar radiation, and topographic wetness index) variables for their robustness to error propagated from the DEM. Results: The direct environmental variable net solar radiation is less affected by error in the DEM than the indirect variables aspect and slope, although regional conditions such as slope steepness and cloudiness can influence this outcome. However, the indirect environmental variable topographic position was less affected by error in the DEM than the topographic wetness index. Interestingly, the results disagreed with the current assumption that indirect variables are necessarily less sensitive to propagated error because they are less derived. Conclusions: The results indicate that variables exhibit both systematic bias and instability under uncertainty. There is a clear need to consider the sensitivity of variables to error in their base data sets, in addition to the question of whether to use direct or indirect variables.
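The kind of Monte Carlo error propagation the study performs can be sketched as follows, using a synthetic surface and a hypothetical ±1 m (1 sigma) elevation error rather than the Kioloa DEM; the grid size, cell size, and error magnitude are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 50 x 50 DEM (metres): a smooth tilted ridge, 10 m cells.
y, x = np.mgrid[0:50, 0:50]
dem = 0.5 * x + 10 * np.sin(y / 10.0)

def slope_deg(z, cell=10.0):
    """Slope angle (degrees) from finite-difference gradients."""
    dzdy, dzdx = np.gradient(z, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Propagate the elevation error through the slope calculation by
# perturbing the DEM many times and watching the derived variable spread.
realisations = np.stack([slope_deg(dem + rng.normal(0, 1.0, dem.shape))
                         for _ in range(100)])
uncertainty = realisations.std(axis=0).mean()  # mean per-cell spread, degrees
print(round(float(uncertainty), 2))
```

The same perturbation loop can be wrapped around any derived variable (aspect, wetness index, radiation) to compare their sensitivities, which is the comparison the abstract reports.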

3.
Summary. A digital registration system used with temperature- and humidity-controlled cuvettes for net photosynthesis and transpiration measurements in the field is described, and the errors associated with the measured parameters and calculated data are estimated. The digitalization is based on an analogue registration, which is of primary importance in the control of experimental conditions in the cuvettes; the digital system is connected to the analogue registration in series. The error associated with digitalization is 0.1% across 70% of the scale, increasing to 0.2% between 3 and 30% of the scale due to a minor lack of linearity. The reproducibility of the digitalization is ±0.024%. The error associated with data transfer in the digitalization and the errors of the analogue registration are estimated for temperature and humidity measurements (error of air and leaf temperature: ±0.1 °C; error of the dew point temperature: ±1.1 °C dew point). The effect of these errors on the calculation of relative humidity and the water vapour difference between the leaf and the air is determined using the law of error propagation. At 30 °C and 50% relative humidity, the error in relative humidity is ±7.4% and the error in the water vapour difference is ±6.6%; the dependence of these errors on temperature and humidity is shown. The instrument error of the net photosynthesis measurement is calculated to be ±4.2%. Transpiration measurements have an average inaccuracy of ±8.3%. The total diffusion resistance, which is calculated from values of transpiration and the water vapour difference, has an average error of ±10.9%. The sizeable influence of errors in humidity and temperature measurements on the calculated diffusion resistance is demonstrated, and the additional influence of biological errors associated with field measurements is discussed.

4.
Ratio estimation with measurement error in the auxiliary variate
Gregoire TG, Salas C. Biometrics. 2009;65(2):590-598.
Summary. With auxiliary information that is well correlated with the primary variable of interest, ratio estimation of the finite population total may be much more efficient than alternative estimators that do not make use of the auxiliary variate. The well-known properties of ratio estimators are perturbed when the auxiliary variate is measured with error. In this contribution we examine the effect of measurement error in the auxiliary variate on the design-based statistical properties of three common ratio estimators, considering both systematic measurement error and measurement error that varies according to a fixed distribution. Aside from presenting expressions for the bias and variance of these estimators when they are contaminated with measurement error, we provide numerical results based on a specific population. Under systematic measurement error, the biasing effect is asymmetric around zero, and precision may be improved or degraded depending on the magnitude of the error. Under variable measurement error, the bias of the conventional ratio-of-means estimator increased slightly with increasing error dispersion, but far less than the bias of the conventional mean-of-ratios estimator. Similarly, the mean-of-ratios estimator incurs a greater loss of precision with increasing error dispersion than the other estimators we examine. Overall, the ratio-of-means estimator appears to be remarkably resistant to the effects of measurement error in the auxiliary variate.
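The contrast between the two conventional estimators under variable measurement error can be sketched with a small simulation. The population, regression slope, sample size, and error dispersion below are invented for illustration; the paper's numerical results are based on its own population.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic finite population: auxiliary x well correlated with y.
N, n = 1000, 50
x = rng.uniform(20, 60, N)
y = 2.0 * x + rng.normal(0, 4, N)
true_total = y.sum()

def ratio_biases(err_sd, reps=500):
    """Average bias of two ratio estimators of the total of y when the
    sampled auxiliary values carry N(0, err_sd^2) measurement error.
    The population auxiliary total x.sum() is assumed known exactly."""
    rom, mor = [], []
    for _ in range(reps):
        idx = rng.choice(N, n, replace=False)
        xs = x[idx] + rng.normal(0, err_sd, n)       # error in x only
        ys = y[idx]
        rom.append(ys.mean() / xs.mean() * x.sum())  # ratio of means
        mor.append(np.mean(ys / xs) * x.sum())       # mean of ratios
    return np.mean(rom) - true_total, np.mean(mor) - true_total

bias_rom, bias_mor = ratio_biases(err_sd=3.0)
print(round(bias_rom, 1), round(bias_mor, 1))
# the mean-of-ratios estimator picks up a much larger bias
```

The asymmetry arises because each ratio y/x is convex in the error added to x (Jensen's inequality), so averaging many noisy ratios inflates the estimate, while the ratio of means averages the noise away in the denominator first.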

5.
The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements of mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements of fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results than using incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error.
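The qualitative difference between the two error types mentioned above can be shown in a few lines: classical error (a noisy device reading of the true exposure) attenuates a regression slope by the reliability ratio, whereas Berkson error (a fixed monitor value assigned to individuals whose true exposures scatter around it) leaves the slope approximately unbiased. All parameter values here are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 20000, 1.0

def ols_slope(x_obs, y):
    return np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)

# Classical error: observed = true + noise -> slope attenuated by the
# reliability ratio var(x) / (var(x) + var(u)) = 0.5 here.
x = rng.normal(0, 1, n)
y = beta * x + rng.normal(0, 1, n)
slope_classical = ols_slope(x + rng.normal(0, 1, n), y)

# Berkson error: a monitor value w is assigned; true exposure = w + noise
# -> the slope on w stays (approximately) unbiased.
w = rng.normal(0, 1, n)
x_b = w + rng.normal(0, 1, n)
y_b = beta * x_b + rng.normal(0, 1, n)
slope_berkson = ols_slope(w, y_b)

print(round(slope_classical, 2), round(slope_berkson, 2))  # ~0.5 vs ~1.0
```

With a mixture of the two types, as in the panel data above, the attenuation lies between these extremes, which is why the error structure has to be modelled explicitly.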

6.
A common question in movement studies is how results should be interpreted with respect to systematic and random errors. In this study, simulations are made in order to see how a rigid body's orientation in space (i.e. the helical angle between two orientations) is affected by (1) a systematic error added to a single marker, and (2) a combination of this systematic error and Gaussian white noise. The orientation was estimated after adding a systematic error to one marker within the rigid body; this procedure was repeated with Gaussian noise added to each marker. In conclusion, the results show that the systematic error's effect on the estimated orientation depends on the number of markers in the rigid body and on the direction in which the systematic error is added. The systematic error has no effect if it is added along the radial axis (i.e. the line connecting the centre of mass and the affected marker).
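The radial-axis result can be checked directly with a least-squares (Kabsch/SVD) orientation fit, a standard way to estimate a rigid body's orientation from markers; the four-marker layout and the 0.1-unit systematic error below are invented for illustration.

```python
import numpy as np

def rotation_angle_deg(P, Q):
    """Helical rotation angle between marker sets P and Q (Kabsch fit)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    cos_t = np.clip((np.trace(R) - 1) / 2, -1, 1)
    return np.degrees(np.arccos(cos_t))

# Four non-coplanar markers of a rigid body, centre of mass at the origin.
P = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, -1, -1]], float)

radial = P.copy()
radial[0] += 0.1 * P[0] / np.linalg.norm(P[0])    # error along the radial axis

tangential = P.copy()
tangential[0] += 0.1 * np.array([0.0, 1.0, 0.0])  # error perpendicular to it

print(rotation_angle_deg(P, radial), rotation_angle_deg(P, tangential))
# the radial error leaves the estimated orientation untouched
```

The radial case leaves the cross-covariance matrix symmetric, so the best-fit rotation is the identity; a tangential error of the same size produces a clearly nonzero helical angle.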

7.
Optimal design of experiments as well as proper analysis of data are dependent on knowledge of the experimental error. A detailed analysis of the error structure of kinetic data obtained with acetylcholinesterase showed conclusively that the classical assumptions of constant absolute or constant relative error are inadequate for the dependent variable (velocity). The best mathematical models for the experimental error involved the substrate and inhibitor concentrations and reflected the rate law for the initial velocity. Data obtained with other enzymes displayed similar relationships between experimental error and the independent variables. The new empirical error functions were shown superior to previously used models when utilized in weighted non-linear-regression analysis of kinetic data. The results suggest that, in the spectrophotometric assays used in the present study, the observed experimental variance is primarily due to errors in determination of the concentrations of substrate and inhibitor and not to error in measuring the velocity.
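Weighting a nonlinear kinetic fit by an error function is straightforward in practice. The sketch below uses simulated Michaelis-Menten data with an assumed error model in which the standard deviation is proportional to the velocity (i.e. it follows the rate law); the substrate range, parameter values, and 5% relative error are invented, not the paper's acetylcholinesterase measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Simulated initial velocities with sd proportional to the true velocity.
S = np.array([2, 5, 10, 20, 50, 100, 200], float)
v_true = michaelis_menten(S, 100.0, 25.0)
v_obs = v_true * (1 + rng.normal(0, 0.05, S.size))

# Weighted nonlinear regression: sigma encodes the empirical error function,
# so low-velocity points are not drowned out by high-velocity ones.
popt, _ = curve_fit(michaelis_menten, S, v_obs, p0=(80, 20),
                    sigma=0.05 * v_obs, absolute_sigma=True)
print(popt)  # estimates close to (Vmax, Km) = (100, 25)
```

Swapping in a different `sigma` expression is all it takes to compare candidate error models, which is essentially the model-comparison exercise the abstract describes.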


9.
Species occurrences inherently include positional error. Such error can be problematic for species distribution models (SDMs), especially those based on fine-resolution environmental data. It has been suggested that there could be a link between the influence of positional error and the width of the species' ecological niche, but although positional errors in species occurrence data may impose serious limitations, especially for modelling species with narrow ecological niches, this link has never been thoroughly explored. We used a virtual species approach to assess the effects of positional error on fine-scale SDMs for species with environmental niches of different widths. We simulated three virtual species with varying niche breadth, from specialist to generalist. The true distribution of these virtual species was then altered by introducing different levels of positional error (from 5 to 500 m). We built generalized linear models and MaxEnt models using the distribution of the three virtual species (unaltered and altered) and a combination of environmental data at 5 m resolution. The models' performance and niche overlap were compared to assess the effect of positional error with varying niche breadth in geographical and environmental space. The positional error negatively impacted performance and niche overlap metrics, and the magnitude of its influence depended on the species' niche, with models for specialist species being more affected than those for generalist species. The positional error had the same effect on both modelling techniques. Finally, increasing sample size did not mitigate its negative influence. We showed that fine-scale SDMs are considerably affected by positional error, even when such error is low. Therefore, where new surveys are undertaken, we recommend paying attention to data collection techniques to minimize positional error in occurrence data and thus avoid its negative effect on SDMs, especially when studying specialist species.

10.
Human observation error is an unavoidable problem in vegetation measurement. We quantified four components of between-observer error associated with long-term monitoring of tallgrass prairie vegetation: overlooking error, misidentification error, caution error, and estimation error. Because observers generate error, we also assessed the relationship between plot size and pseudoturnover, and compared pseudo-change in species composition and abundance with actual vegetation change over a four-year period. The study was conducted at the Tallgrass Prairie National Preserve in Kansas, USA. Monitoring sites comprised 10 plots, each consisting of a series of four nested frames (0.01, 0.1, 1 and 10 m²). All herbaceous species were recorded in each nested frame, and foliar cover was visually estimated within seven cover classes at the 10 m² scale. A total of 300 plots (30 sites) were surveyed, and 28 randomly selected plots were resurveyed to assess observer error. All surveys were completed by four observers working in two teams. The results show that, at the 10 m² scale, pseudoturnover due to overlooking error averaged 18.6%, while pseudoturnover due to misidentification error and caution error averaged 1.4% and 0.6%, respectively. Although error caused by plot relocation may also play a role, pseudoturnover due to overlooking error increased as plot area decreased. Change in species composition over the four-year period (excluding potential misidentification and caution errors) was 30.7%, which includes both pseudoturnover due to overlooking error and actual change; given the 18.6% overlooking error, actual change over the four years was only about 12.1%. For estimation error, 26.2% of records would be assigned a different cover class on re-measurement. Over the four-year period, 46.9% of records showed a different cover class, which suggests that 56% of the apparent cover change between the two periods was due to observer error.

11.
Hao K, Li C, Rosenow C, Hung Wong W. Genomics. 2004;84(4):623-630.
Currently, most analytical methods assume all observed genotypes are correct; however, it is clear that errors may reduce statistical power or bias inference in genetic studies. We propose procedures for estimating error rate in genetic analysis and apply them to study the GeneChip Mapping 10K array, a technology that has recently become available and allows researchers to survey over 10,000 SNPs in a single assay. We employed a strategy to estimate the genotype error rate in pedigree data. First, the "dose-response" reference curve between error rate and the observable error number was derived by simulation, conditional on given pedigree structures and genotypes. Second, the error rate was estimated by calibrating the number of observed errors in real data to the reference curve. We evaluated the performance of this method by simulation study and applied it to a data set of 30 pedigrees genotyped using the GeneChip Mapping 10K array. The method performed favorably in all scenarios we surveyed: the dose-response reference curve was monotone and almost linear with a large slope, and the method was able to estimate the error rate accurately under various pedigree structures and error models and under heterogeneous error rates. Using this method, we found that the average genotyping error rate of the GeneChip Mapping 10K array was about 0.1%. Our method provides a quick and unbiased solution to address the genotype error rate in pedigree data. It behaves well in a wide range of settings and can easily be applied in other genetic projects. The robust estimation of genotyping error rate allows us to estimate power and sample size and conduct unbiased genetic tests. The GeneChip Mapping 10K array has a low overall error rate, which is consistent with the results obtained from alternative genotyping assays.

12.
Andreas Lindén, Jonas Knape. Oikos. 2009;118(5):675-680.
Within the paradigm of population dynamics a central task is to identify environmental factors affecting population change and to estimate the strength of these effects. Here we investigate the impact of observation errors in measurements of population densities on estimates of environmental effects. Adding observation errors may change the autocorrelation of a population time series, with potential consequences for estimates of effects of autocorrelated environmental covariates. Using Monte Carlo simulations, we compare the performance of maximum likelihood estimates from three stochastic versions of the Gompertz model (log-linear first-order autoregressive model), assuming 1) process error only, 2) observation error only, and 3) both process and observation error (the linear state-space model on log scale). We also simulated population dynamics using the Ricker model and evaluated the corresponding maximum likelihood estimates for process error models. When there is observation error in the data and the considered environmental variable is strongly autocorrelated, its estimated effect is likely to be biased when using process error models. The environmental effect is overestimated when the signs of the autocorrelations of the intrinsic dynamics and the environment are the same, and underestimated when the signs differ. With non-autocorrelated environmental covariates, process error models produce fairly exact point estimates as well as reliable confidence intervals for environmental effects. In all scenarios, observation error models produce unbiased estimates with reasonable precision, but confidence intervals derived from the likelihood profiles are far too optimistic if process error is present. The safest approach is to use state-space models in the presence of observation error. These factors are worth considering when interpreting earlier empirical results on population time series, and in future studies we recommend choosing the modelling approach carefully with respect to intrinsic population dynamics and covariate autocorrelation.
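The overestimation under same-sign autocorrelation can be reproduced with a small simulation: a Gompertz (log-linear AR(1)) population driven by a positively autocorrelated covariate, fitted naively with a process-error-only regression. All parameter values (b = 0.7, c = 0.5, covariate autocorrelation 0.7, noise levels) are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_covariate_estimate(obs_sd, T=200, reps=300, b=0.7, c=0.5, phi=0.7):
    """Average OLS estimate of the covariate effect c in a Gompertz model
    x[t] = b*x[t-1] + c*z[t] + process noise, observed as y = x + obs error,
    fitted as if there were process error only."""
    chats = []
    for _ in range(reps):
        z, x = np.zeros(T), np.zeros(T)
        for t in range(T - 1):
            z[t + 1] = phi * z[t] + rng.normal(0, 1)       # autocorrelated env.
            x[t + 1] = b * x[t] + c * z[t + 1] + rng.normal(0, 0.5)
        y = x + rng.normal(0, obs_sd, T)                    # observation error
        X = np.column_stack([np.ones(T - 1), y[:-1], z[1:]])
        chats.append(np.linalg.lstsq(X, y[1:], rcond=None)[0][2])
    return float(np.mean(chats))

c_exact = mean_covariate_estimate(obs_sd=0.0)
c_noisy = mean_covariate_estimate(obs_sd=0.5)
print(round(c_exact, 3), round(c_noisy, 3))
# with observation error, the process-error fit overestimates c (true 0.5)
```

Both the intrinsic dynamics (b > 0) and the covariate (phi > 0) are positively autocorrelated here, so the bias is upward, matching the sign rule stated in the abstract.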

13.
14.
OBJECTIVE: In affected sib pair studies without genotyped parents the effect of genotyping error is generally to reduce the type I error rate and power of tests for linkage. The effect of genotyping error when parents have been genotyped is unknown. We investigated the type I error rate of the single-point Mean test for studies in which genotypes of both parents are available. METHODS: Datasets were simulated assuming no linkage and one of five models for genotyping error. In each dataset, Mendelian-inconsistent families were either excluded or regenotyped, and then the Mean test applied. RESULTS: We found that genotyping errors lead to an inflated type I error rate when inconsistent families are excluded. Depending on the genotyping-error model assumed, regenotyping inconsistent families has one of several effects. It may produce the same type I error rate as if inconsistent families are excluded; it may reduce the type I error, but still leave an anti-conservative test; or it may give a conservative test. Departures of the type I error rate from its nominal level increase with both the genotyping error rate and sample size. CONCLUSION: We recommend that markers with high error rates either be excluded from the analysis or be regenotyped in all families.

15.
Hubisz MJ, Lin MF, Kellis M, Siepel A. PLoS ONE. 2011;6(2):e17034.
The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ~2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1-4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download.

16.
Most experiments are intended for the estimation of the size of effects rather than for the testing of a hypothesis of whether or not an effect occurs. Hypothesis testing is often inapplicable, is over-used and is likely to lead to misinterpretations of results. The two types of error possible in hypothesis testing are discussed. Whereas Type I error is usually examined as a matter of course, Type II error is almost always ignored. Investigations in which zero differences are important should recognise the possibility of Type II error in their interpretation. A nonsignificant result should not be interpreted as evidence of a lack of effect. Statistical significance is not synonymous with economic or scientific importance. The importance of choosing the most appropriate design is emphasised and some suggestions are made as to how important sources of error can be avoided.
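The point about ignored Type II error is easy to make concrete with a power calculation. The sketch below uses a two-sample comparison under a normal approximation; the effect size (0.5 SD) and group sizes are illustrative.

```python
import numpy as np
from scipy import stats

def power(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a difference
    of effect_size standard deviations between group means."""
    se = np.sqrt(2.0 / n_per_group)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z = effect_size / se
    return 1 - stats.norm.cdf(z_crit - z) + stats.norm.cdf(-z_crit - z)

for n in (10, 64, 200):
    print(n, round(power(n, 0.5), 2))
# at n = 10 per group, the Type II error rate exceeds 0.7: a nonsignificant
# result in such a study says almost nothing about the absence of an effect
```

This is exactly the situation the abstract warns about: a "zero difference" conclusion from a small experiment is usually a statement about low power, not about the effect.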

17.
The purpose of this work is to quantify the effects that errors in genotyping have on power and the sample size necessary to maintain constant asymptotic Type I and Type II error rates (SSN) for case-control genetic association studies between a disease phenotype and a di-allelic marker locus, for example a single nucleotide polymorphism (SNP) locus. We consider the effects of three published models of genotyping errors on the chi-square test for independence in the 2 x 3 table. After specifying genotype frequencies for the marker locus conditional on disease status and error model in both a genetic model-based and a genetic model-free framework, we compute the asymptotic power to detect association through specification of the test's non-centrality parameter. This parameter determines the functional dependence of SSN on the genotyping error rates. Additionally, we study the dependence of SSN on linkage disequilibrium (LD), marker allele frequencies, and genotyping error rates for a dominant disease model. Increased genotyping error rate requires a larger SSN. Every 1% increase in sum of genotyping error rates requires that both case and control SSN be increased by 2-8%, with the extent of increase dependent upon the error model. For the dominant disease model, SSN is a nonlinear function of LD and genotyping error rate, with greater SSN for lower LD and higher genotyping error rate. The combination of lower LD and higher genotyping error rates requires a larger SSN than the sum of the SSN for the lower LD and for the higher genotyping error rate.

18.
Free energy calculated in atomic-level simulations (Monte Carlo or molecular dynamics) has a systematic error if the water shell surrounding a globular protein is finite. This error ("cluster error") equals the difference between the free energies obtained in simulations with an infinite and a finite water shell. In this work a continuum dielectric model was used to estimate the cluster error, and a multipole expansion of the estimate was performed for a water shell with a spherical outer boundary. The expansion has a very simple form: each term is a product of two functions, one depending only on the charge configuration and the other only on the dielectric properties of the system. The expansion has two practical uses. First, it may be used to estimate the cluster error in a simulation already made; second, it may be used to plan a simulation in such a way that the cluster error is minimal. Numerical values of the largest terms in the multipole expansion for a typical system in simulations of globular proteins are given.

19.
Pedigrees used in the analysis of genetic or medical data are usually ascertained from sources subject to a variety of errors including misidentification of individuals, faults in historical documents or record linkage, nonpaternity, and unidentified adoption. Genetic markers can be used to verify putative family and pedigree data through the search for inconsistencies, or genetic exclusions, between putative parents and offspring. The probability of observing an exclusion given the occurrence of an error depends upon the gene frequencies at the loci under study and the forms of error. In addition, inconsistencies can arise from laboratory errors in marker determination. Together, these problems make the proper statistical analysis of such data desirable. Here we give a model that specifies the combined effects of various kinds of pedigree error along with genetic marker error. This model allows the maximum-likelihood estimation of the rates of various forms of pedigree error and laboratory error from genetic marker data collected on putative families. The method is illustrated by applying it to data obtained from a South Pacific island population, Tokelau. From the observed distribution of genetic marker inconsistencies between the parents and offspring of putative families, derived from the extensive genealogy of this population, we are able to estimate that the error of a paternal link is 4%, the error of a maternal link is zero, and the overall system typing error is 1%.

20.
Summary. Doubling time has been widely used to represent the growth pattern of cells. A traditional method for finding the doubling time is to fit gray-scaled cell measurements on a logarithmically transformed scale. As an alternative statistical method, the log-linear model was recently proposed, in which actual cell numbers are used instead of transformed gray-scaled values. In this paper, I extend the log-linear model and propose the extended log-linear model, designed for extra-Poisson variation, where the log-linear model produces a less appropriate estimate of the doubling time. I also compare the statistical properties of the gray-scaled method, the log-linear model, and the extended log-linear model by means of a Monte Carlo simulation study with three data-generating models: an additive error model, a multiplicative error model, and an overdispersed Poisson model. The gray-scaled method depends strongly on the normality assumption for the gray-scaled cells; it is therefore appropriate when the error model is multiplicative with log-normally distributed errors, but less efficient for other error distributions, especially when the error model is additive or the errors follow a Poisson distribution, in which case its estimated standard error for the doubling time is inaccurate. The log-linear model was found to be efficient when the errors follow a Poisson or nearly Poisson distribution, but its efficiency decreased as overdispersion increased, compared to the extended log-linear model. When the error model is additive, or multiplicative with Gamma-distributed errors, the log-linear model is more efficient than the gray-scaled method. The extended log-linear model performs well overall for all three data-generating models; it loses efficiency only when the error model is multiplicative with log-normally distributed errors, where the gray-scaled method is appropriate, and even then it remains more efficient than the log-linear model.
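The two basic approaches compared above can be sketched in a few lines on simulated Poisson growth data: an OLS fit on log-transformed counts (the log-transform analogue of the gray-scaled method) versus a log-linear Poisson model fitted by Newton scoring. The sampling interval, true doubling time, and starting count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical growth curve: cell counts every 6 h, true doubling time 24 h.
t = np.arange(0, 72, 6.0)
counts = rng.poisson(100 * 2 ** (t / 24.0))
X = np.column_stack([np.ones_like(t), t])

# Log-transform approach: OLS on log counts, doubling time = ln 2 / slope.
beta = np.linalg.lstsq(X, np.log(counts), rcond=None)[0]
dt_log = np.log(2) / beta[1]

# Log-linear (Poisson) model: log E[N] = b0 + b1*t, Newton scoring steps
# (score X'(y - mu), information X' diag(mu) X), warm-started at the OLS fit.
for _ in range(10):
    mu = np.exp(X @ beta)
    beta += np.linalg.solve((X.T * mu) @ X, X.T @ (counts - mu))
dt_pois = np.log(2) / beta[1]

print(round(dt_log, 1), round(dt_pois, 1))  # both close to the true 24 h
```

With well-behaved Poisson data the two estimates nearly coincide; the differences the paper studies emerge under additive errors or overdispersion, where the weighting implied by each model matters.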
