Similar Documents
 20 similar documents found (search time: 515 ms)
1.
Journal of Plant Growth Regulation - The original version of this article unfortunately contained an error in Dr. Andrzej Skoczowski’s affiliation. The author would like to correct the error...

2.
Journal of Plant Growth Regulation - The original version of this article unfortunately contained an error in the Acknowledgements. The authors would like to correct the error with this erratum. The...

3.
In motor tasks, errors between planned and actual movements generally result in adaptive changes which reduce the occurrence of similar errors in the future. It has commonly been assumed that the motor adaptation arising from an error occurring on a particular movement is specifically associated with the motion that was planned. Here we show that this is not the case. Instead, we demonstrate the binding of the adaptation arising from an error on a particular trial to the motion experienced on that same trial. The formation of this association means that future movements planned to resemble the motion experienced on a given trial benefit maximally from the adaptation arising from it. This reflects the idea that actual rather than planned motions are assigned 'credit' for motor errors because, in a computational sense, the maximal adaptive response would be associated with the condition credited with the error. We studied this process by examining the patterns of generalization associated with motor adaptation to novel dynamic environments during reaching arm movements in humans. We found that these patterns consistently matched those predicted by adaptation associated with the actual rather than the planned motion, with maximal generalization observed where actual motions were clustered. We followed up these findings by showing that a novel training procedure designed to leverage this newfound understanding of the binding of learning to action can improve adaptation rates by more than 50%. Our results provide a mechanistic framework for understanding the effects of partial assistance and error augmentation during neurologic rehabilitation, and they suggest ways to optimize their use.
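The credit-assignment idea can be made concrete with a toy trial-by-trial model. The sketch below is only an illustration, not the authors' model: it assumes a unit force-field perturbation, a Gaussian generalization kernel over movement direction, and invented values for the learning rate, kernel width and perturbation gain. It compares crediting each trial's error to the actual motion versus the planned motion.

```python
import numpy as np

def gaussian_kernel(theta, center, width_deg=30.0):
    """Generalization of adaptation around a movement direction (degrees)."""
    d = (theta - center + 180.0) % 360.0 - 180.0        # wrapped angular difference
    return np.exp(-0.5 * (d / width_deg) ** 2)

def simulate_adaptation(n_trials=200, learning_rate=0.2, credit="actual"):
    """Trial-by-trial adaptation with error credited to the planned or the actual motion."""
    rng = np.random.default_rng(0)
    probe_dirs = np.linspace(-90, 90, 37)               # directions at which generalization is read out
    adaptation = np.zeros_like(probe_dirs)              # compensation (fraction of the perturbation)
    for _ in range(n_trials):
        planned = 0.0                                   # always plan a straight-ahead reach
        comp = np.interp(planned, probe_dirs, adaptation)
        error = 1.0 - comp                              # residual error under a unit perturbation
        actual = planned + 20.0 * error + rng.normal(0, 2.0)   # perturbation deflects the hand
        center = actual if credit == "actual" else planned
        adaptation += learning_rate * error * gaussian_kernel(probe_dirs, center)
    return probe_dirs, adaptation

dirs, adapt_actual = simulate_adaptation(credit="actual")
_, adapt_planned = simulate_adaptation(credit="planned")
print("peak of generalization (actual-credit):  %+.0f deg" % dirs[np.argmax(adapt_actual)])
print("peak of generalization (planned-credit): %+.0f deg" % dirs[np.argmax(adapt_planned)])
```

In this toy setting, crediting the actual motion shifts the peak of generalization toward the directions where the hand actually moved, mirroring the pattern described above.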

4.
Foot and toe clearance (TC) are used regularly to describe locomotor control for both clinical and basic research. However, accuracy of TC during obstacle crossing can be compromised by typical sample frequencies, which do not capture the frame when the foot is over the obstacle due to high limb velocities. The purpose of this study was to decrease the error of TC measures by increasing the spatial resolution of the toe trajectory with interpolation. Five young subjects stepped over an obstacle in the middle of an 8 m walkway. Position data were captured at 600 Hz as a gold standard signal (GS-600-Hz). The GS-600-Hz signal was downsampled to 60 Hz (DS-60-Hz). The DS-60-Hz was then interpolated by either upsampling or an algorithm. Error was calculated as the absolute difference in TC between GS-600-Hz and each of the remaining signals, for both the leading limb and the trailing limb. All interpolation methods reduced the TC error to a similar extent. Interpolation reduced the median error of trail TC from 5.4 to 1.1 mm; the maximum error was reduced from 23.4 to 4.2 mm (16.6-3.8%). The median lead TC error improved from 1.6 to 0.5 mm, and the maximum error improved from 9.1 to 1.8 mm (5.3-0.9%). Therefore, interpolating a 60 Hz signal is a valid technique to decrease the error of TC during obstacle crossing.
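A minimal sketch of the interpolation idea, using a synthetic toe trajectory rather than the study's motion-capture data: a 600 Hz "gold standard" is downsampled to 60 Hz and then upsampled with a cubic spline, and the clearance at the obstacle is compared for each signal. The trajectory shape, walking speed and obstacle geometry below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def clearance_at_obstacle(t, x, z, x_obstacle, obstacle_height):
    """Toe height above the obstacle at the first available frame past the obstacle."""
    i = np.searchsorted(x, x_obstacle)              # nearest frame at/after the obstacle position
    return z[i] - obstacle_height

# Synthetic swing-phase toe trajectory sampled at 600 Hz (stand-in for the gold standard).
fs_gold, fs_low, up = 600, 60, 10
t = np.arange(0.0, 0.6, 1.0 / fs_gold)              # ~0.6 s swing phase
x = 1.43 * t                                        # forward toe displacement (m), ~1.43 m/s
z = 0.02 + 0.13 * np.sin(np.pi * t / 0.6) ** 2      # toe height (m), peaking mid-swing
x_obs, h_obs = 0.20, 0.05                           # obstacle position and height (m)
gold = clearance_at_obstacle(t, x, z, x_obs, h_obs)

# Downsample to 60 Hz, then recover spatial resolution by cubic-spline upsampling.
t60, x60, z60 = t[::up], x[::up], z[::up]
low = clearance_at_obstacle(t60, x60, z60, x_obs, h_obs)

t_up = np.arange(t60[0], t60[-1], 1.0 / (fs_low * up))
x_up, z_up = CubicSpline(t60, x60)(t_up), CubicSpline(t60, z60)(t_up)
interp = clearance_at_obstacle(t_up, x_up, z_up, x_obs, h_obs)

print("TC error at 60 Hz:         %.1f mm" % (1000 * abs(low - gold)))
print("TC error after upsampling: %.1f mm" % (1000 * abs(interp - gold)))
```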

5.
The standard (STD) 5 × 5 hybrid median filter (HMF) was previously described as a nonparametric local background estimator of spatially arrayed microtiter plate (MTP) data. As such, the HMF is a useful tool for mitigating global and sporadic systematic error in MTP data arrays. Presented here is the first known HMF correction of a primary screen suffering from systematic error best described as gradient vectors. Application of the STD 5 × 5 HMF to the primary screen raw data reduced background signal deviation, thereby improving the assay dynamic range and hit confirmation rate. While this HMF can correct gradient vectors, it does not properly correct periodic patterns that may be present in other screening campaigns. To address this issue, a 1 × 7 median filter and a row/column 5 × 5 hybrid median filter kernel (1 × 7 MF and RC 5 × 5 HMF) were designed ad hoc to better fit periodic error patterns. The correction data show that periodic error in simulated MTP data arrays is reduced by these alternative filter designs and that multiple corrective filters can be combined in serial operations for progressive reduction of complex error patterns in an MTP data array.
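The filtering step can be sketched in a few lines. The version below uses the common definition of a 5 × 5 hybrid median filter (the median of the cross median, the diagonal median and the center value) applied to a simulated 384-well plate carrying a left-to-right gradient; edge handling and the exact kernel construction of the published filter may differ.

```python
import numpy as np

def hybrid_median_5x5(plate):
    """5x5 hybrid median filter: median of (cross median, diagonal median, center value)."""
    padded = np.pad(plate, 2, mode="edge")
    out = np.empty_like(plate, dtype=float)
    rows, cols = plate.shape
    for r in range(rows):
        for c in range(cols):
            win = padded[r:r + 5, c:c + 5]
            cross = np.concatenate([win[2, :], win[:, 2]])            # row + column through the center
            diag = np.concatenate([np.diag(win), np.diag(np.fliplr(win))])
            out[r, c] = np.median([np.median(cross), np.median(diag), win[2, 2]])
    return out

# Simulated 16 x 24 (384-well) plate: flat true signal plus a left-to-right gradient vector.
rng = np.random.default_rng(1)
plate = np.full((16, 24), 100.0) + np.linspace(0, 40, 24)[None, :] + rng.normal(0, 3, (16, 24))
plate[4, 7] += 60.0                                                    # one genuine "hit" well

background = hybrid_median_5x5(plate)                                  # local background estimate
corrected = plate - background                                         # residuals: hits stand out

print("raw plate SD:       %.1f" % plate.std())
print("corrected plate SD: %.1f" % corrected.std())
print("hit well residual:  %.1f" % corrected[4, 7])
```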

6.
Georeferencing error is prevalent in datasets used to model species distributions, inducing uncertainty in the covariate values associated with species occurrences that results in biased probability-of-occurrence estimates. Traditionally, this error has been dealt with at the data level by using only records with an acceptable level of error (filtering) or by summarizing covariates at sampling units using measures of central tendency (averaging). Here we compare those previous approaches to a novel implementation of a Bayesian logistic regression with measurement error (ME), a seldom-used method in species distribution modeling. We show that the ME model outperforms data-level approaches for 1) specialist species and 2) when sample sizes are small, when the georeferencing error is large, or when all georeferenced occurrences have a fixed level of error. Thus, for certain types of species and datasets the ME model is an effective method to reduce biases in probability-of-occurrence estimates and to account for the uncertainty generated by georeferencing error. Our approach may be expanded for use with presence-only data as well as to include other sources of uncertainty in species distribution models.
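The attenuation caused by covariate error, and the benefit of accounting for it, can be illustrated with a much simpler classical correction than the Bayesian ME model described above. The sketch below simulates presence/absence data from a true covariate, adds georeferencing-style noise, and compares a naive logistic fit with regression calibration (a frequentist correction used here only for illustration, not the authors' method); the covariate distribution, error variance and effect size are assumed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n, sigma_x, sigma_u = 500, 1.0, 1.0
x = rng.normal(0, sigma_x, n)                 # true covariate value at each location
y = rng.binomial(1, expit(-0.5 + 1.5 * x))    # presence/absence driven by the true covariate
w = x + rng.normal(0, sigma_u, n)             # covariate as extracted with georeferencing error

def fit_logistic(z):
    """Maximum-likelihood logistic regression of y on a single covariate z."""
    def nll(beta):
        p = expit(beta[0] + beta[1] * z)
        return -np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead").x

reliability = sigma_x**2 / (sigma_x**2 + sigma_u**2)   # fraction of the variance in w that is signal
w_calibrated = reliability * w                          # E[x | w] when x and the error are normal

naive = fit_logistic(w)                                 # ignores the error: slope attenuated toward zero
corrected = fit_logistic(w_calibrated)
print("true slope 1.50 | naive %.2f | regression calibration %.2f" % (naive[1], corrected[1]))
```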

7.
Quantifying continental-scale carbon emissions from the oxidation of above-ground plant biomass following land-use change (LUC) is made difficult by the lack of information on how much biomass was present prior to vegetation clearing and on the timing and location of historical LUC. The considerable spatial variability of vegetation, and the uncertainty of this variability, leads to difficulties in predicting biomass C density (tC ha⁻¹) prior to LUC. Quantifying the uncertainties in estimates of land-based sources and sinks of CO2, and the feasibility of reducing these uncertainties by further sampling, is critical information required by governments world-wide for public policy development on climate change issues. A quantitative statistical approach is required to calculate confidence intervals (the level of certainty) of estimated cleared above-ground biomass. In this study, a set of high-quality observations of steady-state above-ground biomass from relatively undisturbed ecological sites across the Australian continent was combined with vegetation, topographic, climatic and edaphic data sets within a Geographical Information System. A statistical model was developed from the data set of observations to predict potential biomass and the standard error of potential biomass for all 0.05° (approximately 5 × 5 km) land grid cells of the continent. In addition, the spatial autocorrelation of observations and residuals from the statistical model was examined. Finally, total C emissions due to historic LUC to cultivation and cropping were estimated by combining the statistical model with a data set of fractional cropland area per land grid cell, fAc (Ramankutty & Foley 1998). Total C emissions from loss of above-ground biomass due to cropping since European colonization of Australia were estimated to be 757 MtC. These estimates are an upper limit because the predicted steady-state biomass may be less than the above-ground biomass immediately prior to LUC because of disturbance. The estimated standard error of total C emissions was calculated from the standard error of predicted biomass, the standard error of fAc and the spatial autocorrelation of biomass. However, quantitative estimates of the standard error of fAc were unavailable. Thus, two scenarios were developed to examine the effect of error in fAc on the error in total C emissions. In the first scenario, in which fAc was regarded as accurate (i.e. a coefficient of variation, CV, of fAc = 0.0), the 95% confidence interval of the continental C emissions was 379–1135 MtC. In the second scenario, a 50% error in estimated cropland area was assumed (a CV of fAc = 0.50) and the estimated confidence interval increased to between 350 and 1294 MtC. The CV of C emissions for these two scenarios was 25% and 29%, respectively. Thus, while accurate maps of land-use change contribute to decreasing uncertainty in C emissions from LUC, the major source of this uncertainty arises from the prediction accuracy of biomass C density. It is argued that, even with large sample numbers, the high cost of sampling biomass carbon may limit the uncertainty of above-ground biomass to a CV of about 25%.
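As a quick arithmetic check, the quoted 95% confidence interval can be reconstructed from the point estimate and the reported coefficient of variation under a normal approximation, which is how the symmetric bounds appear to have been derived.

```python
from scipy.stats import norm

total_emissions = 757.0      # MtC, estimated loss of above-ground biomass C
cv = 0.25                    # coefficient of variation reported for the "accurate fAc" scenario

se = cv * total_emissions
z = norm.ppf(0.975)          # ~1.96 for a two-sided 95% interval
lower, upper = total_emissions - z * se, total_emissions + z * se
print("SE ~ %.0f MtC, 95%% CI ~ %.0f-%.0f MtC" % (se, lower, upper))
# With cv = 0.25 this gives roughly 386-1128 MtC, close to the 379-1135 MtC quoted above;
# the small gap suggests the reported CV was rounded from a slightly higher value.
```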

8.
Catz N, Dicke PW, Thier P. Current Biology: CB, 2005, 15(24): 2179-2189
BACKGROUND: Cerebellar Purkinje cells (PC) generate two responses: the simple spike (SS), with high firing rates (>100 Hz), and the complex spike (CS), characterized by conspicuously low discharge rates (1-2 Hz). Contemporary theories of cerebellar learning suggest that the CS discharge pattern encodes an error signal that drives changes in SS activity, ultimately related to motor behavior. This then predicts that CS will discharge in relation to the error and at random once the error has been nulled by the new behavior. RESULTS: We tested this hypothesis with saccadic adaptation in macaque monkeys as a model of cerebellar-dependent motor learning. During saccadic adaptation, error information unconsciously changes the endpoint of a saccade prompted by a visual target that shifts its final position during the saccade. We recorded CS from PC of the posterior vermis before, during, and after saccadic adaptation. In clear contradiction to the "error signal" concept, we found that CS occurred at random before adaptation onset, i.e., when the error was maximal, and built up to a specific saccade-related discharge profile during the course of adaptation. This profile became most pronounced at the end of adaptation, i.e., when the error had been nulled. CONCLUSIONS: We suggest that CS firing may underlie the stabilization of a learned motor behavior, rather than serving as an electrophysiological correlate of an error.

9.
10.
IRBM, 2022, 43(1): 32-38
Purpose: Scanning switch keypads are commonly used as augmentative and alternative communication aids for people with complex communication needs. Low communication rates and high error rates are some of the problems that users of scanned keyboards encounter in daily use, which could lead to the abandonment of this type of technology. Thus, this study presents a new configuration system for scanned keyboards using a visual feedback recovery time to help the user achieve better performance and a better experience when using this type of alternative communication. Methods: The system was developed in C#. The control switch is realized by means of a wireless USB mouse button. In order to evaluate the system, 42 participants transcribed sentences using a system based on our recovery-delay approach, as well as a second, time-delay-free scanning system for comparison purposes. We analyzed the input rate, success rate, and probability of error occurrence. These data were used to compare the performance of the two types of systems. Results: The results showed that synchronization and selection errors were lower in the group that used the system with a recovery delay than in the group that used the layout without one. Conclusion: The error reduction provided by the recovery delay system allows for a more stable and enjoyable user experience when using scanned keyboards to communicate. This contribution could improve the use of augmentative and alternative communication devices and therefore contribute to reducing the abandonment of this type of technology.

11.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to effectively borrow information from historical data while maintaining a reasonable type I error and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, this approach proactively controls the behavior of information borrowing and the type I error by incorporating the well-known concept of a clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of prespecified criteria such that the resulting prior borrows information strongly when historical and trial data are congruent, but refrains from borrowing when they are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent, that is, it asymptotically controls the type I error at the nominal value whether or not the historical data are congruent with the trial data. Our simulation study evaluating finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.
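The borrowing mechanism can be sketched with a toy normal-mean example. The congruence measure, the logistic form of the elastic function and the power-prior-style update below are illustrative choices, not the published construction; the cutoff and steepness values are invented.

```python
import numpy as np

def congruence(hist_mean, hist_se, trial_mean, trial_se):
    """Standardized distance between historical and current estimates (small = congruent)."""
    return abs(hist_mean - trial_mean) / np.sqrt(hist_se**2 + trial_se**2)

def elastic_weight(s, cutoff=1.0, steepness=4.0):
    """Monotone decreasing map from the congruence distance to a borrowing fraction in (0, 1)."""
    return 1.0 / (1.0 + np.exp(steepness * (s - cutoff)))

def borrow(hist_mean, hist_n, trial_mean, trial_n, sigma=1.0):
    """Power-prior-style posterior mean for a normal endpoint with elastic discounting."""
    s = congruence(hist_mean, sigma / np.sqrt(hist_n), trial_mean, sigma / np.sqrt(trial_n))
    w = elastic_weight(s)
    n_eff = w * hist_n                                   # effective number of historical subjects borrowed
    post_mean = (n_eff * hist_mean + trial_n * trial_mean) / (n_eff + trial_n)
    return post_mean, w

for name, trial_mean in [("congruent", 0.52), ("incongruent", 0.90)]:
    post, w = borrow(hist_mean=0.50, hist_n=200, trial_mean=trial_mean, trial_n=60)
    print("%-11s trial: posterior mean %.3f, borrowing weight %.3f" % (name, post, w))
```

With congruent data the weight stays near 1 and the historical subjects are borrowed almost fully; with incongruent data the weight collapses toward 0 and the posterior is driven by the trial alone.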

12.
Cox DG, Kraft P. Human Heredity, 2006, 61(1): 10-14
Deviation from Hardy-Weinberg equilibrium has become an accepted test for genotyping error. While it is generally considered that testing departures from Hardy-Weinberg equilibrium to detect genotyping error is not sensitive, little has been done to quantify this sensitivity. Therefore, we have examined various models of genotyping error, including error caused by neighboring SNPs that degrade the performance of genotyping assays. We then calculated the power of chi-square goodness-of-fit tests for deviation from Hardy-Weinberg equilibrium to detect such error. We have also examined the effects of neighboring SNPs on risk estimates in the setting of case-control association studies. We modeled the power of departure from Hardy-Weinberg equilibrium as a test to detect genotyping error and quantified the effect of genotyping error on disease risk estimates. Generally, genotyping error does not generate sufficient deviation from Hardy-Weinberg equilibrium to be detected. As expected, genotyping error due to neighboring SNPs attenuates risk estimates, often drastically. For the moment, the most widely accepted method of detecting genotyping error is to confirm genotypes by sequencing and/or genotyping via a separate method. While these methods are fairly reliable, they are also costly and time consuming.
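The low sensitivity of the Hardy-Weinberg test can be illustrated directly. The sketch below computes the 1-df goodness-of-fit statistic at the genotype counts expected under a simple, assumed error model in which a small fraction of heterozygotes is miscalled as homozygotes (a stylized stand-in for the neighboring-SNP mechanism, not one of the paper's specific models).

```python
import numpy as np
from scipy.stats import chi2

def hwe_chisq(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit test of Hardy-Weinberg proportions (1 df)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                         # estimated frequency of allele A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_aa, n_ab, n_bb])
    stat = np.sum((observed - expected) ** 2 / expected)
    return stat, chi2.sf(stat, df=1)

# Expected genotype counts for 1000 subjects at HWE with allele frequency 0.3,
# distorted by an assumed error model: 2% of true heterozygotes miscalled as AA.
n, p, err = 1000, 0.3, 0.02
aa, ab, bb = p**2 * n, 2 * p * (1 - p) * n, (1 - p)**2 * n
stat, pval = hwe_chisq(aa + err * ab, (1 - err) * ab, bb)

# Evaluated at the expected counts the statistic approximates the test's non-centrality;
# it sits far below the 3.84 critical value, so error of this size is rarely detected.
print("chi-square = %.2f, p = %.3f" % (stat, pval))
```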

13.
Response to     
We are responding to a Letter to the Editor addressing the Method section of our paper “Different measures of ‘genome-wide’ DNA methylation exhibit unique properties in placental and somatic tissues.” The letter raised concerns that the protocol for Epigentek’s MethylFlash kit was followed incorrectly based on the wording of an online publication of our article. We admittedly made an error in the language used to describe the MethylFlash protocol in our initial submission, and thus this was corrected as soon as it was brought to our attention. However, the error was only in language and not in procedure. We are confident that the protocol was followed as stated in the insert provided with the MethylFlash™ Methylated DNA Quantification Kit (Colorimetric).

14.
Kinematic data from rigid segment foot models inevitably include errors because the bones within each segment move relative to each other. This study sought to define the error in foot kinematic data due to violation of the rigid segment assumption. The research compared kinematic data from 17 different mid- and forefoot rigid segment models to kinematic data of the individual bones comprising these segments. Kinematic data from a previous dynamic cadaver model study were used to derive individual bone as well as foot segment kinematics. Mean and maximum errors due to violation of the rigid body assumption varied greatly between models. The model with the least error was the combination of navicular and cuboid (mean errors ≤1.3°, average maximum error ≤2.4°). The greatest error was seen for the model combining all ten bones (mean errors ≤4.4°, average maximum errors ≤6.9°). Based on the errors reported, a three-segment mid- and forefoot model is proposed: (1) navicular and cuboid, (2) cuneiforms and metatarsals 1, 2 and 3, and (3) metatarsals 4 and 5. However, the utility of this model will depend on the precise purpose of the in vivo foot kinematics research study being undertaken.

15.
The optic nerve head cup, the optic cup-to-disc ratio and the neural rim configuration are regarded as important for detecting glaucoma at an early stage in clinical practice. The main clinical indicator of glaucoma, the optic cup-to-disc ratio, is currently determined manually, which limits its potential for mass screening. This paper proposes the following methods for automatic cup-to-disc ratio determination. In the first part of the work, the optic disc region of the fundus image is considered. K-means clustering is used to automatically extract the optic disc, with the K value selected automatically by a hill-climbing algorithm. The segmented contour of the optic cup is smoothed by two methods, namely elliptical fitting and morphological fitting. The cup-to-disc ratio is calculated for 50 normal fundus images and 50 fundus images of glaucoma patients. Throughout this paper the same set of images is used, and for these images the cup-to-disc ratio values provided by an ophthalmologist are taken as the gold standard. The error is calculated with reference to this gold standard throughout the paper for cup-to-disc ratio comparison. The mean error of the K-means clustering method for elliptical and morphological fitting is 4.5% and 4.1%, respectively. Since this error is high, fuzzy C-means clustering was then chosen; its mean error for elliptical and morphological fitting is 3.83% and 3.52%. The error can be reduced further by considering inter-pixel relations, which is achieved with spatially weighted fuzzy C-means (SWFCM) clustering. The optic disc and optic cup were clustered and segmented by SWFCM clustering, whose mean error for elliptical and morphological fitting is 3.06% and 1.67%, respectively. The fundus images used in this work were collected from Aravind Eye Hospital, Pondicherry.
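A stripped-down version of the clustering step is sketched below on a synthetic image: pixel intensities are clustered with K-means (K fixed at 3 rather than chosen by hill climbing), the brightest cluster is taken as the cup, the two brightest as the disc, and an area-based cup-to-disc ratio is computed. Contour smoothing by elliptical or morphological fitting, the fuzzy and spatially weighted variants, and real fundus images are all omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic fundus-like image: dark background, brighter optic disc, brightest cup region.
h, w = 120, 120
yy, xx = np.mgrid[0:h, 0:w]
r = np.sqrt((yy - 60) ** 2 + (xx - 60) ** 2)
img = 60 + 80 * (r < 40) + 60 * (r < 18) + np.random.default_rng(3).normal(0, 5, (h, w))

# Cluster pixel intensities into three groups (background / disc / cup) with K-means.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(img.reshape(-1, 1))
labels = labels.reshape(h, w)

# Rank clusters by mean intensity: brightest = cup, brightest two = disc, darkest = background.
order = np.argsort([img[labels == k].mean() for k in range(3)])
cup = labels == order[2]
disc = cup | (labels == order[1])

cdr = cup.sum() / disc.sum()                     # area-based cup-to-disc ratio
print("estimated area CDR %.2f (ground truth %.2f)" % (cdr, (18 / 40) ** 2))
```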

16.
Population abundances are rarely, if ever, known. Instead, they are estimated with some amount of uncertainty. The resulting measurement error has its consequences on subsequent analyses that model population dynamics and estimate probabilities about abundances at future points in time. This article addresses some outstanding questions on the consequences of measurement error in one such dynamic model, the random walk with drift model, and proposes some new ways to correct for measurement error. We present a broad and realistic class of measurement error models that allows both heteroskedasticity and possible correlation in the measurement errors, and we provide analytical results about the biases of estimators that ignore the measurement error. Our new estimators include both method of moments estimators and "pseudo"-estimators that proceed from both observed estimates of population abundance and estimates of parameters in the measurement error model. We derive the asymptotic properties of our methods and existing methods, and we compare their finite-sample performance with a simulation experiment. We also examine the practical implications of the methods by using them to analyze two existing population dynamics data sets.
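The method-of-moments idea can be shown on simulated data. The sketch below assumes Gaussian process and measurement errors with constant variance (simpler than the heteroskedastic, correlated error models treated in the article): the variance of the observed differences overstates the process variance by twice the measurement-error variance, and the lag-1 autocovariance of the differences recovers that measurement-error variance.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma_proc, sigma_obs, T = 0.02, 0.10, 0.20, 500

x = np.cumsum(rng.normal(mu, sigma_proc, T))        # true log-abundance: random walk with drift
y = x + rng.normal(0, sigma_obs, T)                 # estimated log-abundance with survey error

d = np.diff(y)                                      # observed year-to-year changes
var_d = d.var(ddof=1)
cov1 = np.cov(d[:-1], d[1:])[0, 1]                  # lag-1 autocovariance of the differences

# Naive estimator ignores measurement error and inflates the process variance:
sigma2_naive = var_d                                # E[var_d] = sigma_proc^2 + 2 * sigma_obs^2
# Method-of-moments correction: the lag-1 autocovariance equals -sigma_obs^2.
sigma2_obs_hat = max(-cov1, 0.0)
sigma2_proc_hat = max(var_d + 2 * cov1, 0.0)

print("true process variance      %.4f" % sigma_proc**2)
print("naive estimate             %.4f" % sigma2_naive)
print("method-of-moments estimate %.4f" % sigma2_proc_hat)
print("estimated obs. variance    %.4f (true %.4f)" % (sigma2_obs_hat, sigma_obs**2))
```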

17.
Wei GC, Tanner MA. Biometrics, 1991, 47(4): 1297-1309
The first part of the article reviews the Data Augmentation algorithm and presents two approximations to the Data Augmentation algorithm for the analysis of missing-data problems: the Poor Man's Data Augmentation algorithm and the Asymptotic Data Augmentation algorithm. These two algorithms are then implemented in the context of censored regression data to obtain semiparametric methodology. The performances of the censored regression algorithms are examined in a simulation study. It is found, up to the precision of the study, that the bias of both the Poor Man's and Asymptotic Data Augmentation estimators, as well as the Buckley-James estimator, does not appear to differ from zero. However, with regard to mean squared error, over a wide range of settings examined in this simulation study, the two Data Augmentation estimators have a smaller mean squared error than does the Buckley-James estimator. In addition, associated with the two Data Augmentation estimators is a natural device for estimating the standard error of the estimated regression parameters. It is shown how this device can be used to estimate the standard error of either Data Augmentation estimate of any parameter (e.g., the correlation coefficient) associated with the model. In the simulation study, the estimated standard error of the Asymptotic Data Augmentation estimate of the regression parameter is found to be congruent with the Monte Carlo standard deviation of the corresponding parameter estimate. The algorithms are illustrated using the updated Stanford heart transplant data set.

18.
OBJECTIVES: This is the first of two articles discussing the effect of population stratification on the type I error rate (i.e., false positive rate). This paper focuses on the confounding risk ratio (CRR). It is accepted that population stratification (PS) can produce false positive results in case-control genetic association studies. However, which values of population parameters lead to an increase in the type I error rate is unknown. Some believe PS does not represent a serious concern, whereas others believe that PS may contribute to contradictory findings in genetic association. We used computer simulations to estimate the effect of PS on the type I error rate over a wide range of disease frequencies and marker allele frequencies, and we compared the observed type I error rate to the magnitude of the confounding risk ratio. METHODS: We simulated two populations and mixed them to produce a combined population, specifying 160 different combinations of input parameters (disease prevalences and marker allele frequencies in the two populations). From the combined populations, we selected 5000 case-control datasets, each with either 50, 100, or 300 cases and controls, and determined the type I error rate. In all simulations, the marker allele and disease were independent (i.e., no association). RESULTS: The type I error rate is not substantially affected by changes in the disease prevalence per se. We found that the CRR provides a relatively poor indicator of the magnitude of the increase in the type I error rate. We also derived a simple mathematical quantity, Delta, that is highly correlated with the type I error rate. In the companion article (part II, in this issue), we extend this work to multiple subpopulations and unequal sampling proportions. CONCLUSION: Based on these results, realistic combinations of disease prevalences and marker allele frequencies can substantially increase the probability of finding false evidence of marker-disease associations. Furthermore, the CRR does not indicate when this will occur.
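The mechanism can be reproduced with a small simulation in the spirit of the one described above (the parameter values are illustrative, not those of the study): two subpopulations with different disease prevalences and marker allele frequencies, no association within either, mixed and sampled as cases and controls.

```python
import numpy as np
from scipy.stats import chi2_contingency

def simulate_type1(prev=(0.05, 0.20), freq=(0.10, 0.40), mix=0.5,
                   n_cases=300, n_controls=300, n_reps=2000, alpha=0.05, seed=5):
    """Empirical type I error of the 2x3 genotype chi-square test when two subpopulations are mixed."""
    rng = np.random.default_rng(seed)
    prev, freq, w = np.array(prev), np.array(freq), np.array([mix, 1 - mix])
    # Subpopulation membership of sampled cases and controls follows from Bayes' rule,
    # because the subpopulations differ in disease prevalence.
    p_case = w * prev / np.sum(w * prev)
    p_ctrl = w * (1 - prev) / np.sum(w * (1 - prev))

    def genotypes(n, p_sub):
        sub = rng.choice(2, size=n, p=p_sub)        # which subpopulation each subject comes from
        return rng.binomial(2, freq[sub])           # genotype: copies of the marker allele

    hits = 0
    for _ in range(n_reps):
        g = np.concatenate([genotypes(n_cases, p_case), genotypes(n_controls, p_ctrl)])
        status = np.repeat([1, 0], [n_cases, n_controls])
        table = np.array([[np.sum((status == s) & (g == k)) for k in range(3)]
                          for s in (1, 0)])
        _, pval, _, _ = chi2_contingency(table)
        hits += pval < alpha
    return hits / n_reps

# Within each subpopulation the marker is independent of disease, yet the pooled
# case-control test rejects far more often than the nominal 5%.
print("empirical type I error:", simulate_type1())
```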

19.
After the publication of [1], we were alerted to an error in our data. The error was an off-by-one miscalculation in the extraction of position information for our set of true negatives. Our data set should have used randomly selected non-edited cytosines (C) as true negatives, but the data generation phase resulted in a set of nucleotides that were each one nucleotide downstream of known, unedited cytosines. The consequences of this error are reflected in changes to our results, although the general conclusions presented in our original publication remain largely unchanged.

20.
The purpose of this work is to quantify the effects that errors in genotyping have on power and the sample size necessary to maintain constant asymptotic Type I and Type II error rates (SSN) for case-control genetic association studies between a disease phenotype and a di-allelic marker locus, for example a single nucleotide polymorphism (SNP) locus. We consider the effects of three published models of genotyping errors on the chi-square test for independence in the 2 x 3 table. After specifying genotype frequencies for the marker locus conditional on disease status and error model in both a genetic model-based and a genetic model-free framework, we compute the asymptotic power to detect association through specification of the test's non-centrality parameter. This parameter determines the functional dependence of SSN on the genotyping error rates. Additionally, we study the dependence of SSN on linkage disequilibrium (LD), marker allele frequencies, and genotyping error rates for a dominant disease model. Increased genotyping error rate requires a larger SSN. Every 1% increase in the sum of genotyping error rates requires that both case and control SSN be increased by 2-8%, with the extent of increase dependent upon the error model. For the dominant disease model, SSN is a nonlinear function of LD and genotyping error rate, with greater SSN for lower LD and higher genotyping error rate. The combination of lower LD and higher genotyping error rates requires a larger SSN than the sum of the SSN for the lower LD and for the higher genotyping error rate.
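The sample-size calculation via the non-centrality parameter can be sketched as follows. The genotype frequencies and the error model below (symmetric miscalls to adjacent genotypes) are assumptions for illustration, not the three published error models; the point is only how the non-centrality, and hence the required sample size, changes once error is applied to the true genotype distributions.

```python
import numpy as np
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

alpha, target_power, df = 0.05, 0.80, 2
crit = chi2.ppf(1 - alpha, df)
# Non-centrality needed for 80% power with the 2-df chi-square test (~9.63).
lam_target = brentq(lambda lam: ncx2.sf(crit, df, lam) - target_power, 1e-6, 100.0)

def per_subject_ncp(p_case, p_control):
    """Non-centrality contributed per subject (equal case/control numbers) in the 2 x 3 test."""
    cells = 0.5 * np.array([p_case, p_control])              # joint probabilities of the 2 x 3 table
    expected = np.outer(cells.sum(axis=1), cells.sum(axis=0))
    return np.sum((cells - expected) ** 2 / expected)

# Genotype frequencies in cases and controls under a dominant-type risk model
# (illustrative values, not taken from the paper).
p_case = np.array([0.50, 0.40, 0.10])
p_control = np.array([0.64, 0.32, 0.04])

# Assumed error model: E[i, j] = P(observed genotype i | true genotype j),
# with each genotype miscalled as an adjacent genotype at rate e.
e = 0.02
E = np.array([[1 - e, e, 0.0],
              [e, 1 - 2 * e, e],
              [0.0, e, 1 - e]])

for label, (pc, pk) in {"no error": (p_case, p_control),
                        "2% miscalls": (E @ p_case, E @ p_control)}.items():
    n_total = lam_target / per_subject_ncp(pc, pk)
    print("%-12s cases (= controls) needed: %.0f" % (label, n_total / 2))
```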
