Similar Articles
20 similar articles retrieved.
1.
We present an evaluation of the accuracy and precision of relaxation rates calculated using a variety of methods, applied to data sets obtained for several very different protein systems. We show that common methods of data evaluation, such as the determination of peak heights and peak volumes, may be subject to bias, giving incorrect values for quantities such as R1 and R2. For example, one common method of peak-height determination, using a search routine to locate the peak-height maximum in successive spectra, may be a source of significant systematic error in the relaxation rate. The alternative use of peak volumes, or of a fixed coordinate position for the peak height in successive spectra, gives more accurate results, particularly when the signal-to-noise ratio is low, but these methods have inherent problems of their own; for example, volumes are difficult to quantify for overlapped peaks. We show that, whatever the method of sampling the peak intensity, the choice of a 2- or 3-parameter equation to fit the exponential relaxation decay curves can dramatically affect both the accuracy and precision of the calculated relaxation rates. In general, a 2-parameter fit of relaxation decay curves is preferable; however, for very low-intensity peaks a 3-parameter fit may be more appropriate.
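As a concrete illustration of the 2- versus 3-parameter choice, the following sketch fits both forms of the exponential decay I(t) = I0·exp(-R·t) (+ C) to a synthetic decay curve with scipy; the delay values, rate, and noise level are hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay2(t, i0, r):            # 2-parameter model: baseline fixed at zero
    return i0 * np.exp(-r * t)

def decay3(t, i0, r, c):         # 3-parameter model: free baseline offset
    return i0 * np.exp(-r * t) + c

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 12)                  # hypothetical relaxation delays (s)
true_r = 5.0                                   # hypothetical rate (1/s)
intensity = 100.0 * np.exp(-true_r * t) + rng.normal(0.0, 2.0, t.size)

p2, cov2 = curve_fit(decay2, t, intensity, p0=[100.0, 1.0])
p3, cov3 = curve_fit(decay3, t, intensity, p0=[100.0, 1.0, 0.0])

# The extra baseline parameter typically inflates the uncertainty on R.
print(f"2-parameter fit: R = {p2[1]:.3f} +/- {np.sqrt(cov2[1, 1]):.3f}")
print(f"3-parameter fit: R = {p3[1]:.3f} +/- {np.sqrt(cov3[1, 1]):.3f}")
```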

2.
IRBM, 2022, 43(2): 130-141
Background and Objective: Point clouds representing objects are frequently used in object registration. Although objects can be registered using all the points in their corresponding point clouds, registration can also be achieved with a smaller number of landmark points selected from the entire point clouds. This paper introduces a research study focusing on fast and accurate rigid registration of the bilateral proximal femurs in bilateral hip joint images using random sub-sample points. For this purpose, Random Point Sub-sampling (RPS) was analyzed and the reduced point sets were used for accurate registration of the bilateral proximal femurs in coronal hip joint magnetic resonance imaging (MRI) slices. Methods: Bilateral proximal femurs in MRI slices were registered rigidly through a process consisting of three main phases: MR image preprocessing, proximal femur registration over the random sub-sample points, and MR image postprocessing. In the MR image preprocessing stage, segmentation maps of the bilateral proximal femurs are obtained as region-of-interest (RoI) images from the entire MRI slices, and the edge maps of the segmented proximal femurs are then extracted. In the registration phase, the edge maps describing the proximal femur surfaces are first represented as point clouds. Thereafter, RPS is performed on the proximal femur point clouds and the number of points representing the proximal femurs is reduced at different ratios. For the registration of the point clouds, the Iterative Closest Point (ICP) algorithm is performed on the reduced sets of points. Finally, the registration procedure is completed by performing MR image postprocessing on the registered proximal femur images. Results: In performance evaluation tests on healthy and pathological proximal femurs in 13 bilateral coronal hip joint MRI slices of 13 Legg-Calve-Perthes disease (LCPD) patients, bilateral proximal femurs were successfully registered with very small error rates using the reduced point sets obtained via RPS, and promising results were achieved. The minimum error rate, 0.41 (±0.31)%, was observed at an RPS rate of 30% over all the bilateral proximal femurs evaluated. Taking the RPS-rate range of 20-30% as the reference, the elapsed registration time can be reduced by almost 30-40% compared with including all the proximal femur points in registration. Additionally, it was observed that the RPS rate should be at least 25% to achieve a successful registration with an error rate below 1%. Conclusion: A more successful and faster registration can be accomplished by selecting fewer points randomly from the point sets of the proximal femurs instead of using all the points describing them. By means of the limited number of randomly sub-sampled points, not only was an accurate registration with low error rates achieved, but the registration process was also faster.
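A minimal numpy/scipy sketch of the RPS + ICP idea described above. This is an illustration under simplifying assumptions (2D edge points, rigid alignment via the Kabsch solution), not the paper's implementation; the function names and the `rps_rate` parameter are ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def random_subsample(points, rps_rate, rng):
    # The RPS step: keep a random fraction of the edge points.
    n = max(3, int(len(points) * rps_rate))
    idx = rng.choice(len(points), size=n, replace=False)
    return points[idx]

def best_rigid_transform(a, b):
    # Kabsch: least-squares rotation R and translation t mapping a onto b.
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:          # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    return r, cb - r @ ca

def icp(src, dst, rps_rate=0.3, iters=50, seed=0):
    # Align a sub-sampled source point cloud to the destination cloud.
    rng = np.random.default_rng(seed)
    cur = random_subsample(src, rps_rate, rng)
    tree = cKDTree(dst)
    for _ in range(iters):
        _, nn = tree.query(cur)                  # closest-point matches
        r, t = best_rigid_transform(cur, dst[nn])
        cur = cur @ r.T + t
    return cur                                    # aligned sub-sampled points
```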

3.
A straightforward empirical regression method based on a logarithmic approximation has been developed to accurately estimate initial rates from nonlinear progress curves of enzyme reactions. The principle of this parametric approach is to use a relatively large number of observations, averaging out random errors, to predict the curvature at time zero, where the rate of change is highest. The usual linear regression over a few initial time points lacks predictive power at time zero and therefore underestimates the true initial rate. Application of this nonlinear regression approach to enzyme reactions gave satisfactory results. The approach is less subjective in choosing the initial time points used for rate determination, is much more robust to random errors, and is relatively easy to implement with commonly available software.
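The following sketch illustrates the idea under an assumed logarithmic form P(t) = a·ln(1 + b·t), whose analytic derivative at t = 0 gives the initial rate v0 = a·b; the paper's exact parameterization may differ, and the data here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_progress(t, a, b):
    # Logarithmic approximation of the progress curve.
    return a * np.log1p(b * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 60)                       # hypothetical time points (min)
product = 8.0 * np.log1p(0.5 * t) + rng.normal(0, 0.05, t.size)

(a, b), _ = curve_fit(log_progress, t, product, p0=[1.0, 0.1])
print(f"estimated initial rate v0 = a*b = {a * b:.3f}")   # true value here: 4.0

# Contrast: a straight line through the first few points underestimates v0.
slope = np.polyfit(t[:8], product[:8], 1)[0]
print(f"linear fit of first 8 points: {slope:.3f}")
```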

4.
Hardegree SP. Annals of Botany, 2006, 97(6): 1115-1125
BACKGROUND AND AIMS: The purpose of this study was to compare the relative accuracy of different thermal-germination models in predicting germination time under constant-temperature conditions. Of specific interest was the assessment of shape assumptions associated with the cardinal-temperature germination model and the probit distribution often used to distribute thermal coefficients among seed subpopulations. METHODS: Seeds of four rangeland grass species were germinated over the constant-temperature range of 3-38 °C and monitored for subpopulation variability in germination-rate response. Subpopulation-specific germination rate was estimated as a function of temperature and residual model error for three variations of the cardinal-temperature model, non-linear regression and piece-wise linear regression. The data were used to test relative model fit under alternative assumptions about model shape. KEY RESULTS: In general, optimal model fit was obtained by limiting model-shape assumptions. All models were relatively accurate in the sub-optimal temperature range, except in the 3 °C treatment, where predicted germination times were in error by as much as 70 d for the cardinal-temperature models. CONCLUSIONS: Germination-model selection should be driven by research objectives. Cardinal-temperature models yield coefficients that can be compared directly for purposes of screening germplasm. Other model formulations, however, may be more accurate in predicting germination time, especially at low temperatures, where small errors in predicted rate can result in relatively large errors in germination time.
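As a sketch of the cardinal-temperature idea, the broken-stick form below rises linearly from a base temperature Tb to an optimum To and falls linearly to a ceiling Tc. This is one common formulation only, and the parameter values are hypothetical rather than fitted to the paper's species; note how a small rate error near Tb translates into a large error in predicted days.

```python
import numpy as np

def cardinal_rate(t, tb=2.0, to=25.0, tc=40.0, rmax=0.5):
    # Broken-stick cardinal-temperature model of germination rate (1/days).
    t = np.asarray(t, dtype=float)
    sub = rmax * (t - tb) / (to - tb)          # sub-optimal branch
    supra = rmax * (tc - t) / (tc - to)        # supra-optimal branch
    return np.clip(np.where(t <= to, sub, supra), 0.0, None)

temps = np.array([3.0, 10.0, 20.0, 25.0, 30.0, 38.0])
rates = cardinal_rate(temps)
print(np.vstack([temps, rates, 1.0 / rates]).T)  # T, rate, predicted days
```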

5.
Introduction: Increased access to remote sensing datasets presents opportunities to model an animal's in-situ experience of the landscape to study behavior and test hypotheses such as geomagnetic map navigation. MagGeo is an open-source tool that combines high spatiotemporal resolution geomagnetic data with animal tracking data. Unlike gridded remote sensing data, satellite geomagnetic data are point-based measurements of the magnetic field at the location of each satellite. MagGeo converts these measurements into geomagnetic values at an animal's location and time. The objective of this paper is to evaluate different interpolation methods and data frameworks within the MagGeo software and quantify how accurately MagGeo can model geomagnetic values and patterns as experienced by animals. Method: We tested MagGeo outputs against data from 109 terrestrial geomagnetic observatories across 7 years. Unlike satellite data, ground-based data are more likely to represent how animals near the Earth's surface experience geomagnetic field dynamics. Within the MagGeo framework, we compared an inverse-distance-weighting interpolation with three different nearest-neighbour interpolation methods. We also compared model geomagnetic data with combined model and satellite data in their ability to capture geomagnetic fluctuations. Finally, we fit a linear mixed-effect model to understand how error is influenced by factors such as geomagnetic activity and distance in space and time between satellite and point of interest. Results and conclusions: The overall absolute difference between MagGeo outputs and observatory values was <1% of the total possible range of values for geomagnetic components. Satellite measurements closest in time to the point of interest consistently had the lowest error, which likely reflects the ability of the nearest-neighbour-in-time interpolation method to capture both small continuous daily fluctuations and larger discrete events such as geomagnetic storms. Combined model and satellite data also capture geomagnetic fluctuations better than model data alone across most geomagnetic activity levels. Our linear mixed-effect models suggest that most of the variation in error can be explained by location-specific effects originating largely from local crustal biases, and that high geomagnetic activity usually predicts higher error, though error ultimately remains within the 1% range. Our results indicate that MagGeo can help researchers explore how animals may use the geomagnetic field to navigate long distances by providing access to data and methods that accurately model how animals moving near the Earth's surface experience the geomagnetic field.
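A minimal sketch of one of the interpolation schemes compared, inverse-distance weighting; the actual MagGeo implementation differs in details (e.g., geodesic distances and weighting in time), and the coordinates and field values below are hypothetical.

```python
import numpy as np

def idw(query_xy, sample_xy, sample_values, power=2.0, eps=1e-12):
    # Inverse-distance-weighted estimate at a query location from
    # point-based satellite measurements.
    d = np.linalg.norm(sample_xy - query_xy, axis=1)
    if np.any(d < eps):                       # query coincides with a sample
        return sample_values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * sample_values) / np.sum(w)

sat_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hypothetical
sat_f = np.array([48210.0, 48190.0, 48260.0, 48240.0])  # total intensity F (nT)
print(f"IDW estimate at (0.4, 0.4): {idw(np.array([0.4, 0.4]), sat_xy, sat_f):.1f} nT")
```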

6.
7.
Phylogenetic trees inferred from sequence data often have branch lengths measured in the expected number of substitutions and therefore do not have divergence times estimated. Such trees give an incomplete view of evolutionary history, since many applications of phylogenies require time trees. Many methods have been developed to convert inferred branch lengths from substitution units to time units using calibration points, but none is universally accepted, as all are challenged in both scalability and accuracy under complex models. Here, we introduce a new method that formulates dating as a nonconvex optimization problem in which the variance of log-transformed rate multipliers is minimized across the tree. On simulated and real data, we show that our method, wLogDate, is often more accurate than alternatives and is more robust to various model assumptions.
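A toy illustration of the wLogDate objective, not the published tool: choose node times that minimize the variance of the log rate multipliers log(b_i/t_i) subject to the tree's time constraints. The three-branch "tree" and all numbers below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Tree: root -> internal node at unknown time u, internal -> tip A,
# root -> tip B; both tips sampled 10 years after the root.
b = np.array([0.05, 0.10, 0.16])       # substitutions/site on the 3 branches

def log_rate_variance(u):
    t = np.array([u, 10.0 - u, 10.0])  # branch durations implied by u
    return np.var(np.log(b / t))       # the wLogDate-style objective

res = minimize_scalar(log_rate_variance, bounds=(0.1, 9.9), method="bounded")
print(f"internal node dated at u = {res.x:.2f} years after the root")
```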

8.
Yu Z, Schaid DJ. Human Genetics, 2007, 122(5): 495-504
For large-scale genotyping studies, it is common for most subjects to have some missing genetic markers, even if the missing rate per marker is low. This compromises association analyses, with varying numbers of subjects contributing to analyses when performing single-marker or multi-marker analyses. In this paper, we consider eight methods to infer missing genotypes: two haplotype reconstruction methods (local expectation maximization, EM, and fastPHASE), two k-nearest-neighbor methods (original k-nearest neighbor, KNN, and weighted k-nearest neighbor, wtKNN), three linear regression methods (backward variable selection, LM.back; least angle regression, LM.lars; and singular value decomposition, LM.svd), and a regression tree, Rtree. We evaluate their accuracy using single nucleotide polymorphism (SNP) data from the HapMap project under a variety of conditions and parameters. We find that fastPHASE has the lowest error rates across different analysis panels and marker densities. LM.lars gives slightly less accurate estimates of missing genotypes than fastPHASE, but performs better than the other methods.
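A minimal sketch of the KNN idea in this setting, assuming genotypes coded 0/1/2 with np.nan for missing calls; the paper's KNN and wtKNN variants differ in details such as the distance metric and weighting, and this function is ours, not theirs.

```python
import numpy as np

def knn_impute(geno, k=3):
    # Fill each missing genotype with the majority vote of the k subjects
    # most similar over the markers both subjects share.
    g = geno.astype(float).copy()
    for i, j in zip(*np.where(np.isnan(geno))):
        dists = []
        for other in range(g.shape[0]):
            if other == i or np.isnan(geno[other, j]):
                continue
            shared = ~np.isnan(geno[i]) & ~np.isnan(geno[other])
            if not shared.any():
                continue
            d = np.abs(geno[i, shared] - geno[other, shared]).mean()
            dists.append((d, geno[other, j]))
        dists.sort(key=lambda x: x[0])
        votes = [int(v) for _, v in dists[:k]]
        g[i, j] = np.bincount(votes, minlength=3).argmax()
    return g

geno = np.array([[0, 1, 2, 0],
                 [0, 1, np.nan, 0],
                 [2, 2, 1, 1],
                 [0, 1, 2, 0]])
print(knn_impute(geno, k=2))   # the missing call is imputed as 2
```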

9.
Aim: To evaluate the computation-time efficiency of the multithreaded code (G4Linac-MT) in dosimetry applications, using the high-performance HPC-Marwan grid to determine with high accuracy the initial parameters of the 6 MV photon beam of the Varian CLINAC 2100C. Background: The long computation time of Monte Carlo methods is one of their main disadvantages. Materials and methods: Calculations were performed with the multithreaded code G4Linac-MT and with Geant4.10.04.p02 on the HPC-Marwan computing grid to evaluate the computing speed of each code. The multithreaded version was tested on several CPU counts to evaluate computing speed as a function of the number of CPUs used. The results were compared to measurements using different types of comparisons: TPR20,10, penumbra, mean dose error, and gamma index. Results: The results indicate a much greater computing-time saving for the G4Linac-MT version than for Geant4.10.04; computation time decreases with the number of CPUs used, with a speedup of about 12× when 64 CPUs are used. After optimization of the initial electron-beam parameters, the simulated doses are in very good agreement with the experimental measurements, with a mean dose error of up to 0.41% on the PDDs and 1.79% on the lateral dose profiles. Conclusions: The gain in computation time allows Monte Carlo simulations with a large number of events, which gives high accuracy in the dosimetry results obtained in this work.
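One of the comparison metrics mentioned, the gamma index, can be sketched in 1D as follows; the 3%/3 mm criteria, grid, and toy dose curves are illustrative, not the paper's analysis settings.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    # Global 1D gamma index: a point passes (gamma <= 1) if some reference
    # point lies within the combined dose-difference / distance ellipsoid.
    d_max = d_ref.max()                       # global normalization
    gam = np.empty_like(d_eval)
    for i, (xe, de) in enumerate(zip(x_eval, d_eval)):
        dose = (d_ref - de) / (dd * d_max)    # dose-difference term
        dist = (x_ref - xe) / dta             # distance-to-agreement term (mm)
        gam[i] = np.sqrt(dose**2 + dist**2).min()
    return gam

x = np.linspace(0, 100, 201)                  # depth (mm), hypothetical PDD grid
ref = 100 * np.exp(-0.004 * x)                # toy "measured" depth dose
sim = ref + np.random.default_rng(2).normal(0, 0.3, x.size)  # toy "simulated"
g = gamma_1d(x, ref, x, sim)
print(f"gamma pass rate (3%/3 mm): {100 * np.mean(g <= 1):.1f}%")
```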

10.
Many missing-value (MV) imputation methods have been developed for microarray data, but only a few studies have investigated the relationship between MV imputation and classification accuracy. Furthermore, these studies are problematic in fundamental steps such as MV generation and classifier error estimation. In this work, we carry out a model-based study that addresses some of the issues in previous studies. Six popular imputation algorithms, two feature selection methods, and three classification rules are considered. The results suggest that it is beneficial to apply MV imputation when the noise level is high, variance is small, or gene-cluster correlation is strong, under small to moderate MV rates. In these cases, if data-quality metrics are available, it may be helpful to treat data points of poor quality as missing and apply one of the most robust imputation algorithms to estimate the true signal from the available high-quality data points. At large MV rates, however, imputation methods are not recommended. Regarding the MV rate, our results indicate a peaking phenomenon: the performance of imputation methods actually improves initially as the MV rate increases, but beyond an optimum point performance quickly deteriorates with increasing MV rates.

11.
Purpose: To establish the reliability and accuracy of a UNIQUE linac in delivering RapidArc treatments and to assess its long-term stability. Materials and methods: UNIQUE performance was monitored and analyzed for a period of nearly two years. 2280 Dynalog files, related to 179 clinical RapidArc treatments, were collected. Different tumor sites and dose schedules were included, covering the full range of our treatment plans. Statistical distributions of MLC motion error, gantry rotation error, and MU delivery error were evaluated. The stochastic and systematic nature of each error was investigated, together with their variation in time. Results: All delivery errors were found to be small, and more stringent tolerances than those proposed by TG-142 are suggested. Unlike MLC positional errors, for which a linear relationship with leaf speed holds, the other Volumetric Modulated Arc Therapy (VMAT) parameters reveal a random nature and, consequently, reduced clinical relevance. MLC errors are linearly related only to leaf speed, regardless of the shape of the MLC apertures. Gantry rotation and MU delivery are as accurate as those of the major competing linacs. UNIQUE was found to be reliable and accurate throughout the investigation period, regardless of tumor site and fractionation scheme. Conclusions: The accuracy of RapidArc treatments delivered with UNIQUE has been established and the stochastic nature of delivery errors demonstrated. Long-term statistics of the delivery-parameter errors show no significant variations, confirming the reliability of the VMAT delivery system.

12.
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute.
2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this interpretation, and finding the point where the fit achieves its maximum does not make sense.
3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance.
4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all.
5. "Simulated molecular evolution" is a misnomer; we are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This makes the method a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations.
6. For problems with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, as structured, provide a model with an error margin as large as the numbers being computed.
7. Finally, even if someone were to provide a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would be no better than random selection. Since a perfect fit would produce only exact ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any directional information for new iterations; we would just skip from one point to another in a typical random-walk manner.

13.
Here we describe and evaluate a new method for quantifying long bone curvature using geometric morphometric and semi-landmark analysis of the human femur. The technique is compared with traditional ways of measuring subtense and point of maximum curvature using either coordinate calipers or projection onto graph paper. Of the traditional methods, the graph-paper method is more reliable than coordinate calipers, and measurement error is consistently lower for the point of maximum curvature than for subtense. The results warrant caution when comparing data collected by the different traditional methods. Landmark data collection proves reliable and has low measurement error, although measurement error increases with the number of semi-landmarks included in the analysis of curvature. Subtense can be estimated more reliably from 3D landmarks along the curve than with the traditional techniques. We use equidistant semi-landmarks to quantify the curve, because sliding the semi-landmarks masks the curvature signal. Principal components analysis of these equidistant semi-landmarks provides the added benefit of describing the shape of the curve. These results are promising for functional and forensic analyses of long bone curvature in modern human populations and in the fossil record. (Am J Phys Anthropol, 2010.)
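A sketch of the two computational steps implied above: resampling a digitized curve into equidistant semi-landmarks, then estimating subtense as the maximum perpendicular distance from the chord. Function names and the toy circular-arc "shaft" are ours, not the paper's.

```python
import numpy as np

def equidistant_semilandmarks(curve, n):
    # Resample an ordered (N,2) polyline at n equal arc-length intervals.
    seg = np.linalg.norm(np.diff(curve, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # arc length at vertices
    targets = np.linspace(0.0, s[-1], n)
    x = np.interp(targets, s, curve[:, 0])
    y = np.interp(targets, s, curve[:, 1])
    return np.column_stack([x, y])

def subtense(points):
    # Subtense: maximum perpendicular distance from the end-to-end chord.
    chord = points[-1] - points[0]
    u = chord / np.linalg.norm(chord)
    rel = points - points[0]
    perp = rel - np.outer(rel @ u, u)
    d = np.linalg.norm(perp, axis=1)
    return d.max(), points[np.argmax(d)]   # subtense, point of max curvature

theta = np.linspace(0, np.pi / 3, 40)
arc = np.column_stack([np.sin(theta), 1 - np.cos(theta)]) * 450  # toy shaft (mm)
s, p = subtense(equidistant_semilandmarks(arc, 20))
print(f"subtense = {s:.1f} mm at point {np.round(p, 1)}")
```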

14.
Chinese hamster ovary (CHO) cells are the most popular mammalian cell factories for the production of glycosylated biopharmaceuticals. To further increase titer and productivity and to ensure product quality, rational system-level engineering strategies based on constraint-based metabolic modeling, such as flux balance analysis (FBA), have gained strong interest. However, the quality of FBA predictions depends on the accuracy of the experimental input data, especially on the exchange rates of extracellular metabolites; yet it is not standard practice to devote sufficient attention to the accurate determination of these rates. In this work, we investigated to what degree the sampling frequency during a batch culture and the measurement errors of metabolite concentrations influence the accuracy of the calculated exchange rates, and how this error then propagates into FBA predictions of growth rates. We determined that accurate measurement of essential amino acids with low uptake rates is crucial for the accuracy of FBA predictions, followed by a sufficient number of analyzed time points. We observed that the measured difference in growth rates of two cell lines can only be reliably predicted when both high measurement accuracy and high sampling frequency are ensured.
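As a hedged illustration of how concentration measurement error propagates into an exchange-rate estimate: for exponential growth, a specific rate q can be estimated as the slope of concentration against integrated viable cell density (IVCD), and Monte Carlo resampling of the measurement error then yields its uncertainty. All names and values below are assumed, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.array([0.0, 24.0, 48.0, 72.0, 96.0])       # sampling times (h)
mu = 0.03                                          # growth rate (1/h)
x0 = 2e8                                           # initial viable cells/L
ivcd = (np.exp(mu * t) - 1) * x0 / mu              # integrated VCD (cell*h/L)
true_q = -2e-11                                    # uptake rate, mmol/(cell*h)
conc = 4.0 + true_q * ivcd                         # amino acid conc (mmol/L)

# Monte Carlo error propagation: 5% relative error on each concentration.
q_hat = [np.polyfit(ivcd, conc * (1 + rng.normal(0, 0.05, t.size)), 1)[0]
         for _ in range(2000)]
print(f"q = {np.mean(q_hat):.2e} +/- {np.std(q_hat):.2e} mmol/(cell*h)")
```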

15.
Obtaining accurate kinematic data of animals is essential for many biological studies and for bio-inspired engineering. Many animals, however, are either too large or too delicate to transport to controlled environments where accurate kinematic data can easily be obtained. Often, in situ recordings are the only means available, but these are subject to multi-axis motion and relative magnification changes over time, leading to large discrepancies in the animal kinematics. Techniques to compensate for these artifacts were applied to a large jellyfish, Cyanea capillata, freely swimming in ocean waters. The bell kinematics were captured by digitizing exumbrella profiles for two full swimming cycles. Magnification was accounted for by tracking a reference point on the ocean floor and by observing the C. capillata exumbrella arc length, so as to maintain a constant scale through the swimming cycles. A linear fit of the top bell section was used to find the body angle with respect to the camera coordinate system. Bell-margin trajectories over two swimming cycles confirmed the accuracy of the correction techniques. The corrected profiles were filtered and interpolated to provide a set of time-dependent points along the bell. Discrete models of the exumbrella, constructed by three different discretization methods, were used to analyze the bell kinematics. Fourier series were fitted to the discretized models and subsequently used to analyze the bell kinematics of C. capillata. The analysis showed that the bell did not deform uniformly over time, with different segments lagging behind one another. Looping of the bell trajectory between contraction and relaxation was also present through most of the exumbrella; the bell margin had the largest looping, with an outer path during contraction and an inner path during relaxation. The subumbrella volume, approximated from the exumbrella kinematics, was found to increase during contraction.
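The Fourier-series step can be sketched as an ordinary least-squares fit of truncated harmonics, as below; the harmonic count, period, and toy bell-margin signal are hypothetical, not the paper's data.

```python
import numpy as np

def fourier_fit(t, y, period, n_harmonics=4):
    # Least-squares fit of a truncated Fourier series to a periodic signal;
    # returns a callable evaluating the fitted series.
    w = 2 * np.pi / period
    def design(tt):
        cols = [np.ones_like(tt)]
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(k * w * tt), np.sin(k * w * tt)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(design(t), y, rcond=None)
    return lambda tt: design(tt) @ coef

period = 2.0                                   # swimming-cycle period (s)
t = np.linspace(0, 2 * period, 120)
margin = (3.0 + 0.8 * np.sin(2 * np.pi * t / period)
          + 0.2 * np.sin(4 * np.pi * t / period + 0.5))  # toy margin signal (cm)
fit = fourier_fit(t, margin, period)
print(f"max fit error: {np.abs(fit(t) - margin).max():.2e} cm")
```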

16.
BACKGROUND: Short-read data from next-generation sequencing technologies are now being generated across a range of research projects. The fidelity of these data can be affected by several factors, and it is important to have simple and reliable approaches for monitoring it at the level of individual experiments. RESULTS: We developed a fast, scalable and accurate approach to estimating error rates in short reads, which has the added advantage of not requiring a reference genome. We build on the fundamental observation that there is a linear relationship between the copy number for a given read and the number of erroneous reads that differ from the read of interest by one or two bases. The slope of this relationship can be transformed to give an estimate of the error rate, both by read and by position. We present simulation studies as well as analyses of real data sets illustrating the precision and accuracy of this method, and we show that it is more accurate than alternatives that count the differences between the sample of interest and the reference genome. We show how this methodology led to the detection of mutations in the genome of the PhiX strain used for calibration of Illumina data. The proposed method is implemented in an R package, which can be downloaded from http://bcb.dfci.harvard.edu/~vwang/shadowRegression.html, and will be submitted to Bioconductor upon publication of this article. CONCLUSIONS: The proposed method can be used to monitor the quality of sequencing pipelines at the level of individual experiments without the use of reference genomes. Furthermore, having an estimate of the error rates gives one the opportunity to improve analyses and inferences in many applications of next-generation sequencing data.
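A sketch of the core observation, not the shadowRegression package itself: regress the count of 1-mismatch "shadow" reads on the copy number of each distinct read, then invert the slope under a simple binomial error model (treating the slope as the per-copy probability of acquiring at least one error). The counts and read length below are hypothetical.

```python
import numpy as np

L = 100                                        # read length (bases)
copies = np.array([200, 450, 900, 1500, 3000])           # hypothetical counts
shadows = np.array([21, 44, 93, 148, 310])               # 1-mismatch shadows

b = np.polyfit(copies, shadows, 1)[0]          # shadow reads per copy
# Under per-base error rate p, P(>=1 error in a read) = 1 - (1 - p)^L,
# so inverting the slope gives an approximate per-base error rate.
p = 1 - (1 - b) ** (1 / L)
print(f"slope = {b:.4f}, per-base error rate ~ {p:.2e}")
```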

17.
Estimating development rates of Helicoverpa armigera pupae at constant and alternating temperatures with nonlinear models
To analyze in depth the relationship between insect development and ambient temperature, the developmental durations (d) of Helicoverpa armigera pupae were measured at constant temperatures (15-37°C) and alternating temperatures (12/18-34/40°C), and the development-rate (1/d) data were fitted with a linear model and with three nonlinear models (the Logan, Lactin, and Wang models). The results showed that the three nonlinear models describe the curvilinear relationship between development rate and temperature more accurately, with coefficients of determination (R²) between 0.9878 and 0.9991. Further examination of the full data set showed that quite satisfactory estimates can be obtained from these nonlinear models with as few as six well-distributed observations, whereas without measurements at high temperatures the development rates predicted by the nonlinear models may be distorted. Possible causes of the difference in pupal development rates between constant and alternating temperatures are analyzed, the advantages and disadvantages of the three nonlinear models for predicting pupal development are discussed, and the rationality and necessity of replacing the linear degree-day model with nonlinear models in pest forecasting and beneficial-insect rearing management are pointed out.
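As an illustration of fitting one of the three nonlinear models named above, the sketch below fits the Logan (type 6) form with scipy; the development-rate data and starting values are hypothetical, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logan6(T, psi, rho, Tmax, dT):
    # Logan type-6 model: exponential rise with a high-temperature cutoff.
    return psi * (np.exp(rho * T) - np.exp(rho * Tmax - (Tmax - T) / dT))

T = np.array([15, 20, 24, 28, 31, 34, 36, 37], dtype=float)   # temperature (C)
rate = np.array([0.028, 0.047, 0.068, 0.091, 0.104, 0.100, 0.075, 0.045])

p, _ = curve_fit(logan6, T, rate, p0=[0.01, 0.12, 40.0, 4.0],
                 bounds=([0, 0, 37.5, 0.5], [1, 1, 45, 10]), maxfev=20000)
r2 = 1 - np.sum((rate - logan6(T, *p))**2) / np.sum((rate - rate.mean())**2)
print(f"psi={p[0]:.4f}, rho={p[1]:.3f}, Tmax={p[2]:.1f}, dT={p[3]:.2f}, R^2={r2:.4f}")
```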

18.
The central nervous system regulates the recruitment and firing of motor units to modulate muscle tension. Estimation of the firing-rate time series is typically performed by decomposing the electromyogram (EMG) into its constituent firing times and then lowpass filtering the resulting train of impulses. Little research has examined the performance of different estimation methods, particularly in the inevitable presence of decomposition errors. The study of electrocardiogram (ECG) and electroneurogram (ENG) firing-rate time series presents a similar problem and has applied novel simulation models and firing-rate estimators. Herein, we adapted an ENG/ECG simulation model to generate realistic EMG firing times derived from known rates, and assessed various firing-rate time-series estimation methods. ENG/ECG-inspired rate estimation worked exceptionally well when EMG decomposition errors were absent, but degraded unacceptably at decomposition error rates of ≥1%. Typical EMG decomposition error rates, even after expert manual review, are 3-5%. At realistic decomposition error rates, more traditional EMG smoothing approaches performed best, provided that optimal smoothing-window durations were selected. This optimal window was often longer than the 400 ms duration commonly used in the literature, and its duration decreased as the modulation frequency of the firing rate increased, the average firing rate increased, and decomposition errors decreased. Examples of these rate-estimation methods on physiologic data are also provided, demonstrating their influence on measures computed from the firing-rate estimate.
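A minimal sketch of the traditional smoothing approach described above: place unit impulses at the decomposed firing times and convolve with a unit-area window to obtain a rate in pulses per second (pps); the window choice, sampling rate, and simulated firing times are illustrative.

```python
import numpy as np
from scipy.signal import windows

fs = 1000                                     # sampling rate (Hz)
dur = 5.0
t = np.arange(0, dur, 1 / fs)

# Simulated firing times: exponential ISIs, ~12 pps mean rate.
firing_times = np.cumsum(np.random.default_rng(4).exponential(1 / 12, 80))
firing_times = firing_times[firing_times < dur]

impulses = np.zeros(t.size)
np.add.at(impulses, (firing_times * fs).astype(int), 1.0)

win_dur = 0.4                                 # 400 ms window, as in the text
win = windows.hann(int(win_dur * fs))
win *= fs / win.sum()                         # unit area in continuous time
rate = np.convolve(impulses, win, mode="same")  # firing-rate estimate (pps)
print(f"mean estimated rate: {rate.mean():.1f} pps")
```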

19.
Large-scale digitization of museum specimens, particularly of insect collections, is becoming commonplace. Imaging increases the accessibility of collections and decreases the need to handle individual, often fragile, specimens. Another potential advantage of digitization is to make it easier to conduct morphometric analyses, but the accuracy of such methods needs to be tested. Here we compare morphometric measurements of scanned images of dragonfly wings with those obtained using other, more traditional methods. We assume that the destructive method of removing and slide-mounting wings provides the most accurate measurements, because it eliminates error due to wing curvature. We show that, for dragonfly wings, hand measurements of pinned specimens and digital measurements of scanned images are equally accurate relative to slide-mounted hand measurements. Since destructive slide-mounting is unsuitable for museum collections, and there is a risk of damage when hand-measuring fragile pinned specimens, we suggest that scanned images may also be an appropriate source of morphometric data for other collected insect species.

20.
Muscle paths in musculoskeletal models have been modeled using several different methods; however, deformation of soft tissue with changes in posture is rarely accounted for, and often only the neutral posture is used to define a muscle path. The objective of this study was to model curved muscle paths in the cervical spine that take into consideration soft-tissue deformation with changes in neck posture. Two subject-specific models were created from magnetic resonance images (MRI) in 5 different sagittal-plane neck postures. Curved paths of flexor and extensor muscles were modeled using piecewise linear lines of action in two ways: (1) using fixed via points determined from muscle paths in the neutral posture, and (2) using moving muscle points that moved relative to the bones, determined from muscle paths in all 5 postures. The accuracy of each modeled muscle path was evaluated by an error metric: the distance from the anatomic (centroid) muscle path determined from the MRI. The error metric was compared among three modeled muscle-path types (straight, fixed via point, and moving muscle point) using a repeated-measures one-way ANOVA (α=0.05). Moving-muscle-point paths had a 21% lower error metric than fixed-via-point paths over all 15 pairs of neck muscles examined over 5 postures (3.86 mm vs. 4.88 mm). This study highlights the importance of defining muscle paths in multiple postures in order to properly capture the changing curvature of a muscle path due to soft-tissue deformation with posture.
