Similar Documents
20 similar documents found (search time: 31 ms)
1.
Measuring rates of spread during biological invasions is important for predicting where and when invading organisms will spread in the future as well as for quantifying the influence of environmental conditions on invasion speed. While several methods have been proposed in the literature to measure spread rates, a comprehensive comparison of their accuracy when applied to empirical data would be problematic because true rates of spread are never known. This study compares the performances of several spread rate measurement methods using a set of simulated invasions with known theoretical spread rates over a hypothetical region where a set of sampling points are distributed. We vary the density and distribution (aggregative, random, and regular) of the sampling points as well as the shape of the invaded area and then compare how different spread rate measurement methods accommodate these varying conditions. We find that the method of regressing distance to the point of origin of the invasion as a function of time of first detection provides the most reliable method over adverse conditions (low sampling density, aggregated distribution of sampling points, irregular invaded area). The boundary displacement method appears to be a useful complementary method when sampling density is sufficiently high, as it provides an instantaneous measure of spread rate and does not require long time series of data.
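
As a rough illustration of the regression approach highlighted in this abstract, the sketch below regresses distance to an assumed invasion origin on simulated years of first detection and reads the spread rate off the slope. All coordinates, detection times, and the true rate are invented for the example; this is not the authors' code.

```python
# A minimal sketch of the regression-based spread-rate estimate: regress each
# sampling point's distance to the invasion origin on its year of first
# detection; the slope is the radial spread rate. All data here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

origin = np.array([0.0, 0.0])            # assumed point of origin of the invasion
true_rate = 2.5                          # km per year, used only to simulate data

# Hypothetical sampling points and their years of first detection
points = rng.uniform(-50, 50, size=(200, 2))
distance = np.linalg.norm(points - origin, axis=1)
first_detection = distance / true_rate + rng.normal(0, 1.0, size=len(points))

# Distance-to-origin regressed on time of first detection
slope, intercept, r, p, se = stats.linregress(first_detection, distance)
print(f"estimated spread rate: {slope:.2f} km/yr (true {true_rate})")
```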

2.
A diagnostic cut-off point of a biomarker measurement is needed for classifying a random subject as either diseased or healthy. However, the cut-off point is usually unknown and needs to be estimated by some optimization criteria. One important criterion is the Youden index, which has been widely adopted in practice. The Youden index, which is defined as the maximum of (sensitivity + specificity − 1), directly measures the largest total diagnostic accuracy a biomarker can achieve. Therefore, it is desirable to estimate the optimal cut-off point associated with the Youden index. Sometimes, taking the actual measurements of a biomarker is very difficult and expensive, while ranking them without the actual measurement can be relatively easy. In such cases, ranked set sampling can give more precise estimation than simple random sampling, as ranked set samples are more likely to span the full range of the population. In this study, kernel density estimation is utilized to numerically solve for an estimate of the optimal cut-off point. The asymptotic distributions of the kernel estimators based on two sampling schemes are derived analytically, and we prove that the estimators based on ranked set sampling are relatively more efficient than those based on simple random sampling and that both estimators are asymptotically unbiased. Furthermore, the asymptotic confidence intervals are derived. Intensive simulations are carried out to compare the proposed method using ranked set sampling with simple random sampling, with the proposed method outperforming simple random sampling in all cases. A real data set is analyzed to illustrate the proposed method.
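
A minimal sketch of the kernel-density route to the Youden-index cut-off described above, written for simple random sampling only (the ranked set sampling refinement is not shown). The simulated healthy and diseased score distributions are assumptions for illustration, and higher scores are assumed to indicate disease.

```python
# Estimate the Youden-index cut-off with kernel density estimates of the
# healthy and diseased score distributions (simple random sampling only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, 300)      # hypothetical biomarker values
diseased = rng.normal(1.5, 1.2, 300)

kde_h, kde_d = gaussian_kde(healthy), gaussian_kde(diseased)

grid = np.linspace(min(healthy.min(), diseased.min()),
                   max(healthy.max(), diseased.max()), 500)
sens = np.array([kde_d.integrate_box_1d(c, np.inf) for c in grid])   # P(X_d > c)
spec = np.array([kde_h.integrate_box_1d(-np.inf, c) for c in grid])  # P(X_h <= c)
youden = sens + spec - 1.0

c_star = grid[youden.argmax()]
print(f"estimated optimal cut-off: {c_star:.3f}, Youden index: {youden.max():.3f}")
```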

3.
Consider a sample of animal abundances collected from one sampling occasion. Our focus is on estimating the number of species in a closed population. In order to conduct a noninformative Bayesian inference when modeling these data, we derive Jeffreys and reference priors from the full likelihood. We assume that the species' abundances are randomly distributed according to a distribution indexed by a finite-dimensional parameter. We consider two specific cases which assume that the mean abundances are constant or exponentially distributed. The Jeffreys and reference priors are functions of the Fisher information for the model parameters; the information is calculated in part using the linear difference score for integer parameter models (Lindsay & Roeder 1987). The Jeffreys and reference priors perform similarly in a data example we consider. The posteriors based on the Jeffreys and reference priors are proper.

4.
The goal of this study is to explore the potential of computational growth models to predict bone density profiles in the proximal tibia in response to gait-induced loading. From a modeling point of view, we design a finite element-based computational algorithm using the theory of open system thermodynamics. In this algorithm, the biological problem, the balance of mass, is solved locally on the integration point level, while the mechanical problem, the balance of linear momentum, is solved globally on the node point level. Specifically, the local bone mineral density is treated as an internal variable, which is allowed to change in response to mechanical loading. From an experimental point of view, we perform a subject-specific gait analysis to identify the relevant forces during walking using an inverse dynamics approach. These forces are directly applied as loads in the finite element simulation. To validate the model, we take a Dual-Energy X-ray Absorptiometry scan of the subject’s right knee from which we create a geometric model of the proximal tibia. For qualitative validation, we compare the computationally predicted density profiles to the bone mineral density extracted from this scan. For quantitative validation, we adopt the region of interest method and determine the density values at fourteen discrete locations using standard and custom-designed image analysis tools. Qualitatively, our two- and three-dimensional density predictions are in excellent agreement with the experimental measurements. Quantitatively, errors are less than 3% for the two-dimensional analysis and less than 10% for the three-dimensional analysis. The proposed approach has the potential to ultimately improve the long-term success of possible treatment options for chronic diseases such as osteoarthritis on a patient-specific basis by accurately addressing the complex interactions between ambulatory loads and tissue changes.
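
The density-adaptation step described above can be pictured with the toy update below. The specific evolution law and every parameter value are illustrative assumptions, not the calibrated model from the study; only the idea of treating density as a loading-driven internal variable at each integration point is retained.

```python
# Toy local density update of the kind used in open-system bone growth models:
# at each integration point the density rho is an internal variable driven by
# the strain-energy density psi. The evolution law and parameters below are
# illustrative assumptions only.
import numpy as np

def update_density(rho, psi, dt, c=1.0, rho0=1.0, m=2.0, psi0=0.01):
    """Explicit Euler step of rho_dot = c * ((rho/rho0)**(-m) * psi - psi0)."""
    rho_dot = c * ((rho / rho0) ** (-m) * psi - psi0)
    return np.maximum(rho + dt * rho_dot, 1e-3)   # keep density positive

# Integration points under different gait-induced energy levels
rho = np.full(5, 0.8)                              # initial density [g/cm^3]
psi = np.array([0.002, 0.005, 0.01, 0.02, 0.04])   # strain-energy density per point
for _ in range(200):                               # pseudo-time loop of the growth process
    rho = update_density(rho, psi, dt=0.1)
print(rho)   # higher loading drives density up, lower loading drives it down
```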

5.
We present an extensive investigation of the accuracy and precision of temporal image correlation spectroscopy (TICS). Using simulations of laser scanning microscopy image time series, we investigate the effect of spatiotemporal sampling, particle density, noise, sampling frequency, and photobleaching of fluorophores on the recovery of transport coefficients and number densities by TICS. We show that the recovery of transport coefficients is usually limited by spatial sampling, while the measurement of accurate number densities is restricted by background noise in an image series. We also demonstrate that photobleaching of the fluorophore causes a consistent overestimation of diffusion coefficients and flow rates, and a severe underestimation of number densities. We derive a bleaching correction equation that removes both of these biases when used to fit temporal autocorrelation functions, without increasing the number of fit parameters. Finally, we image the basal membrane of a CHO cell with EGFP/alpha-actinin, using two-photon microscopy, and analyze a subregion of this series using TICS and apply the bleaching correction. We show that the photobleaching correction can be determined simply by using the average image intensities from the time series, and we use the simulations to provide good estimates of the accuracy and precision of the number density and transport coefficients measured with TICS.
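
For readers unfamiliar with TICS, the sketch below computes the raw temporal autocorrelation function from a simulated image stack; it omits the paper's photobleaching correction and the fitting of diffusion or flow models.

```python
# Temporal autocorrelation at the core of TICS, computed from a simulated
# image time series (uncorrelated shot noise here, just to show the bookkeeping).
import numpy as np

rng = np.random.default_rng(2)
stack = rng.poisson(10.0, size=(100, 64, 64)).astype(float)  # (time, y, x)

def temporal_autocorrelation(stack, max_lag):
    """g(tau) = <dI(t) dI(t+tau)> / (<I(t)> <I(t+tau)>), averaged over pixels."""
    T = stack.shape[0]
    g = np.empty(max_lag + 1)
    for tau in range(max_lag + 1):
        a, b = stack[: T - tau], stack[tau:]
        da, db = a - a.mean(), b - b.mean()
        g[tau] = (da * db).mean() / (a.mean() * b.mean())
    return g

g = temporal_autocorrelation(stack, max_lag=20)
print(g[:5])   # for pure shot noise g(0) > 0 while g(tau > 0) is close to 0
```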

6.
Stockmarr A 《Biometrics》1999,55(3):671-677
A crime has been committed, and a DNA profile of the perpetrator is obtained from the crime scene. A suspect with a matching profile is found. The problem of evaluating this DNA evidence in a forensic context, when the suspect is found through a database search, is analysed through a likelihood approach. The recommendations of the National Research Council of the U.S. are derived in this setting as the proper way of evaluating the evidence when finiteness of the population of possible perpetrators is not taken into account. When a finite population of possible perpetrators may be assumed, it is possible to take account of the sampling process that resulted in the actual database, so one can deal with the problem where a large proportion of the possible perpetrators belongs to the database in question. It is shown that the last approach does not in general result in a greater weight being assigned to the evidence, though it does when a sufficiently large proportion of the possible perpetrators are in the database. The value of the likelihood ratio corresponding to the probable cause setting constitutes an upper bound for this weight, and the upper bound is only attained when all but one of the possible perpetrators are in the database.

7.
A method was required for determining the effect of management on extensive populations of trees and shrubs in central Australian rangelands. One useful indicator of change in these populations is the density of individuals, and there are several methods available based on distance measurement for density estimation. This study compared those procedures. Samples were drawn by computer from ground maps of actual plant distributions for Acacia aneura, Cassia nemophila and Atalaya hemiglauca and from a map generated at random. These samples were drawn to examine the properties of the nearest neighbour, point centred quarter, conditioned distance and compound T-square estimates of density. Samples were drawn by two methods: simple random sampling and semisystematic sampling. In general, there was a tendency for all estimators of density to underestimate the true density of naturally occurring populations, with the compound T-square method (Byth 1982) being most robust. The compound T-square method was least biased, but its variance increased for more aggregated spatial distributions. Estimates of density were not altered by the use of semisystematic sampling, when compared to simple random sampling. The spatial distributions examined in this study have not previously been studied as theoretical models. Acacia aneura and Cassia nemophila showed some aggregation of clusters, while Atalaya hemiglauca showed a more extreme form of clustering due to its root suckering propagation.
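
One of the distance-based estimators compared in this study, the point-centred quarter method, can be sketched as follows on a simulated plant map; the compound T-square estimator found to be most robust is not reproduced here, and the plant density and sampling layout are invented.

```python
# Point-centred quarter density estimate: at each sampling point, measure the
# distance to the nearest plant in each of the four quadrants; density is
# estimated as 1 / (mean distance)^2. Plant and sample locations are simulated.
import numpy as np

rng = np.random.default_rng(3)
true_density = 0.05                       # plants per unit area
side = 200.0
plants = rng.uniform(0, side, size=(int(true_density * side**2), 2))
samples = rng.uniform(20, side - 20, size=(50, 2))   # stay away from edges

dists = []
for p in samples:
    d = plants - p
    r = np.hypot(d[:, 0], d[:, 1])
    quad = (d[:, 0] >= 0).astype(int) * 2 + (d[:, 1] >= 0).astype(int)
    for q in range(4):                    # nearest plant in each quadrant
        mask = quad == q
        if mask.any():
            dists.append(r[mask].min())

density_hat = 1.0 / np.mean(dists) ** 2
print(f"true density {true_density}, PCQ estimate {density_hat:.3f}")
```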

8.
There has always been a problem with the collection of data and interpretation of the results obtained from any biological fluid in which the solids content is greatly increased. Of these solids, the triglycerides of the lipids may cause a plasma (serum) to vary in appearance from opalescent to milky. This condition of the specimen and the concomitant turbidity upon its addition to reagents creates the well-documented optical aberrations of spectrophotometric measurements. In addition, the lipids, in conjunction with the proteins, can act as diluents when they are elevated, thereby decreasing what might be termed the residual true plasma volume. Thus the water content of an aliquot sampled for a particular analytical procedure is diminished, thereby creating a situation in which a short sample is drawn. This dilution effect by the solids results in a lowering of the assay values obtained for the measured constituents of such a serum sample. An associated phenomenon of high concentrations of solids, especially proteins, is the increase in viscosity of a specimen, a condition that also causes an error of short sampling when certain peristaltic pumping devices are used. This review considers several aspects of problems encountered when dealing with a number of circumstances that are critical to the measurement of analytes in severely hyperlipemic and/or hyperproteinemic specimens. These include the problems of short sampling; the potential amelioration of the problem by corrective mathematics, extraction of the lipids, or ultracentrifugation of the true plasma from the lipids; the important need to include most analytes in our considerations; the difference in reference base values for the calculation of concentrations of lipids of serum versus other analytes; the use of ratios when the reference base values of the numerator and denominator analytes differ; and the problems of using serum blanks when necessary corrective action for the solids volume is neglected. Thus, in the final analysis, problems with underestimated volumes of samples used for many spectrophotometric determinations are considered here, along with the other difficulties encountered when analytes must be measured in serums with extremely high solids content.

9.
A method for automatic measurement of anatomical landmarks on the back surface is presented. The landmarks correspond to the vertebra prominens, the dimples of the posterior superior iliac spines and the sacrum point (beginning of the rima ani), which are characterized by distinct surface curvature. The surface curvatures are calculated from rasterstereographic surface measurements. The procedure of isolating a region of interest for each landmark (surface segmentation) and the calculation of the landmark coordinates are described in detail. The accuracy of landmark localization was tested with serial rasterstereographs of 28 patients with moderate idiopathic scoliosis. From the results, the intrinsic accuracy of the method is estimated to be little more than 1 mm (depending on the sampling density of the surface measurement). Therefore, the landmarks may well be used for the objective definition of a body-fixed reference coordinate system. The accuracy is, however, dependent on the specific landmark, and a minor influence of posture variations is observed.

10.
We present toBeeView, a program that produces from a digital photograph, or a set of photographs, an approximation of the image formed at the sampling station stage in the eye of an animal. toBeeView is freely available from https://github.com/EEZA-CSIC/compound-eye-simulator. toBeeView assumes that sampling stations in the retina are distributed on a hexagonal grid. Each sampling station computes the weighted average of the color of the part of the visual scene projecting onto its photoreceptors, and the hexagon of the output image associated with the sampling station is filled with this average color. Users can specify the visual angle subtended by the scene and the basic parameters determining the spatial resolution of the eye: photoreceptor spatial distribution and optic quality of the eye. The photoreceptor distribution is characterized by the vertical and horizontal interommatidial angles, which can vary along the retina. The optic quality depends on the section of the visual scene projecting onto each sampling station, determined by the acceptance angle. The output of toBeeView provides a first approximation to the amount of visual information available at the retina for subsequent processing, summarizing in an intuitive way the interaction between eye optics and receptor density. This tool can be used whenever it is important to determine the visual acuity of a species and will be particularly useful to study processes where object detection and identification are important, such as visual displays, camouflage, and mimicry.
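
A simplified sketch of the sampling step toBeeView performs is given below: stations on an offset (hexagonal) grid average the underlying pixels with a Gaussian weight standing in for the acceptance angle. The grid pitch and blur width are arbitrary pixel values chosen for the example, whereas the real program works in terms of interommatidial and acceptance angles.

```python
# Resample an image onto a hexagonal grid of sampling stations, each taking a
# Gaussian-weighted average of nearby pixels (a stand-in for the acceptance angle).
import numpy as np

rng = np.random.default_rng(4)
image = rng.uniform(0, 1, size=(120, 160))      # stand-in for one photograph channel
yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]

spacing, sigma = 8.0, 3.0                       # hypothetical grid pitch and blur (pixels)
values = []
row_height = spacing * np.sqrt(3) / 2
for i, y in enumerate(np.arange(0.0, image.shape[0], row_height)):
    x_offset = 0.0 if i % 2 == 0 else spacing / 2    # offset rows -> hexagonal packing
    for x in np.arange(x_offset, image.shape[1], spacing):
        w = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        values.append((w * image).sum() / w.sum())    # Gaussian-weighted average

print(len(values), "sampling stations; first station value:", round(values[0], 3))
```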

11.
Skeletal fractures associated with bone mass loss are a major clinical problem and economic burden, and lead to significant morbidity and mortality in the ageing population. Clinical image-based measures of bone mass show only moderate correlative strength with bone strength. However, engineering models derived from clinical image data predict bone strength with significantly greater accuracy. Currently, image-based finite element (FE) models are time consuming to construct and are non-parametric. The goal of this study was to develop a parametric proximal femur FE model based on a statistical shape and density model (SSDM) derived from clinical image data. A small number of independent SSDM parameters described the shape and bone density distribution of a set of cadaver femurs and captured the variability affecting proximal femur FE strength predictions. Finally, a three-dimensional FE model of an 'unknown' femur was reconstructed from the SSDM with an average spatial error of 0.016 mm and an average bone density error of 0.037 g/cm³.
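
The SSDM idea of describing shape and density with a few parameters can be illustrated with a generic PCA sketch on synthetic data; the node count, mode count, and noise level below are arbitrary stand-ins for the CT-derived femur vectors, not the study's data.

```python
# PCA-style statistical shape and density model: each training femur is
# flattened into one vector of node coordinates and nodal densities, and a
# left-out femur is reconstructed from a few principal-component weights.
import numpy as np

rng = np.random.default_rng(5)
n_specimens, n_nodes, n_modes_true = 20, 500, 5
modes_true = rng.normal(size=(n_modes_true, n_nodes * 4))   # latent modes of variation
coeffs = rng.normal(size=(n_specimens, n_modes_true))
data = coeffs @ modes_true + 0.05 * rng.normal(size=(n_specimens, n_nodes * 4))

train, unknown = data[:-1], data[-1]            # leave one femur out
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)

k = 5                                           # number of SSDM parameters retained
weights = (unknown - mean) @ Vt[:k].T           # project the left-out femur onto the model
reconstruction = mean + weights @ Vt[:k]        # rebuild its shape + density vector

rmse = np.sqrt(np.mean((reconstruction - unknown) ** 2))
print(f"reconstruction RMSE with {k} modes: {rmse:.3f}")
```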

12.
Species distribution models (SDMs) are now being widely used in ecology for management and conservation purposes across terrestrial, freshwater, and marine realms. The increasing interest in SDMs has drawn the attention of ecologists to spatial models and, in particular, to geostatistical models, which are used to associate observations of species occurrence or abundance with environmental covariates in a finite number of locations in order to predict where (and how much of) a species is likely to be present in unsampled locations. Standard geostatistical methodology assumes that the choice of sampling locations is independent of the values of the variable of interest. However, in natural environments, due to practical limitations related to time and financial constraints, this theoretical assumption is often violated. In fact, data commonly derive from opportunistic sampling (e.g., whale or bird watching), in which observers tend to look for a specific species in areas where they expect to find it. These are examples of what is referred to as preferential sampling, which can lead to biased predictions of the distribution of the species. The aim of this study is to discuss an SDM that addresses this problem and that is more computationally efficient than existing MCMC methods. From a statistical point of view, we interpret the data as a marked point pattern, where the sampling locations form a point pattern and the measurements taken in those locations (i.e., species abundance or occurrence) are the associated marks. Inference and prediction of species distribution are performed using a Bayesian approach, and integrated nested Laplace approximation (INLA) methodology and software are used for model fitting to minimize the computational burden. We show that abundance is highly overestimated at low-abundance locations when preferential sampling effects are not accounted for, in both a simulated example and a practical application using fishery data. This highlights that ecologists should be aware of the potential bias resulting from preferential sampling and account for it in a model when a survey is based on non-randomized and/or non-systematic sampling.
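
A small simulation along the lines of the bias discussed above: when sampling effort follows abundance, a design-ignorant estimate is biased relative to a random survey. This only illustrates the aggregate effect; the paper's marked-point-process model fitted with INLA is not reproduced, and the abundance surface is invented.

```python
# Illustration of preferential-sampling bias: abundance varies smoothly along a
# transect, observers sample locations with probability proportional to
# abundance, and a naive mean overestimates abundance relative to a random survey.
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0, 10, 1000)
abundance = 5.0 * np.exp(np.sin(x))            # "true" abundance surface

# Preferential design: sampling probability increases with abundance itself
p = abundance / abundance.sum()
pref_idx = rng.choice(len(x), size=100, replace=False, p=p)
rand_idx = rng.choice(len(x), size=100, replace=False)

print(f"true mean abundance:          {abundance.mean():.2f}")
print(f"random-survey estimate:       {abundance[rand_idx].mean():.2f}")
print(f"preferential-survey estimate: {abundance[pref_idx].mean():.2f}  (biased high)")
```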

13.
In this paper, we study Bayesian analysis of nonlinear hierarchical mixture models with a finite but unknown number of components. Our approach is based on Markov chain Monte Carlo (MCMC) methods. One of the applications of our method is directed to the clustering problem in gene expression analysis. From a mathematical and statistical point of view, we discuss the following topics: theoretical and practical convergence problems of the MCMC method; determination of the number of components in the mixture; and computational problems associated with likelihood calculations. In the existing literature, these problems have mainly been addressed in the linear case. One of the main contributions of this paper is developing a method for the nonlinear case. Our approach is based on a combination of methods including Gibbs sampling, random permutation sampling, birth-death MCMC, and Kullback-Leibler distance.
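
One ingredient of the approach, a plain Gibbs sampler for a one-dimensional Gaussian mixture with the number of components and the component variance held fixed, is sketched below; random permutation sampling, birth-death MCMC for an unknown number of components, and the Kullback-Leibler calculations are not shown, and all data and hyperparameters are invented.

```python
# Gibbs sampler for a 1D Gaussian mixture with fixed K and known component
# variance: alternately sample assignments, component means, and weights.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])   # toy data
K, sigma, tau, alpha = 2, 1.0, 10.0, 1.0                              # fixed hyperparameters

mu = rng.normal(0, 1, K)
w = np.full(K, 1.0 / K)
for sweep in range(300):
    # 1) component assignments given weights and means
    like = w * norm.pdf(x[:, None], loc=mu, scale=sigma)
    z = np.array([rng.choice(K, p=row / row.sum()) for row in like])
    # 2) component means given assignments (conjugate normal update)
    for k in range(K):
        xk = x[z == k]
        prec = len(xk) / sigma**2 + 1.0 / tau**2
        mu[k] = rng.normal((xk.sum() / sigma**2) / prec, 1.0 / np.sqrt(prec))
    # 3) mixing weights given assignment counts (Dirichlet update)
    counts = np.bincount(z, minlength=K)
    w = rng.dirichlet(alpha + counts)

print("posterior draw of means:", np.sort(mu), "weights:", np.round(w, 2))
```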

14.
The structure of images (total citations: 66; self-citations: 0; citations by others: 66)

15.
Although image data are almost universally acquired on rectangular sampling lattices, the regular hexagonal lattice offers important theoretical advantages for tessellation of images, particularly when subsequent processing involves operations on local image neighborhoods. The few systems capable of processing hexagonally tessellated images have approximated this tessellation by using image data acquired on a rectangular sampling lattice, from which six of the eight image samples were selected from each local neighborhood. This paper describes a simple method of directly acquiring image data in hexagonal image tessellations; the method is used to compare at constant sampling density the most common of these approximating image tessellations with both a nonregular and a regular hexagonal image tessellation. The test objects were human blood cells, from which features describing cellular geometry were extracted for each image tessellation. Compared to the approximating tessellation, the nonregular tessellation tended to decrease feature means and increase feature variances. In contrast, the regular tessellation tended to increase feature means and decrease feature variances. Consequently, the extracted features showed subtle but consistent differences, with decreasing anisotropic effects and data dispersion for the regular tessellation. In addition, cells contacting others near the 45 degree diagonals were more readily segmented when the image was tessellated on the regular lattice. Expected to be general, these trends recommend use of the regular tessellation, especially when classification accuracy may depend on small differences in several similar geometric features.

16.
The resolution in 3D reconstructions from tilt series is limited to the information below the first zero of the contrast transfer function unless the signal is corrected computationally. The restoration is usually based on the assumption of a linear space-invariant system and a linear relationship between object mass density and observed image contrast. The space-invariant model is no longer valid when applied to tilted micrographs because the defocus varies in a direction perpendicular to the tilt axis and with it the shape of the associated point spread function. In this paper, a method is presented for determining the defocus gradient in thin specimens such as sections and 2D crystals, and for restoration of the images subsequently used for 3D reconstruction. The alignment procedure for 3D reconstruction includes area matching and tilt geometry refinement. A map with limited resolution computed from uncorrected micrographs is compared to a volume computed from corrected micrographs with extended resolution.

17.
Computational biomechanical models are useful tools for supporting orthopedic implant design and surgical decision making, but because they are a simplification of the clinical scenario they must be carefully validated to ensure that they are still representative. The goal of this study was to assess the validity of the generation process of a structural finite element model of the proximal femur employing the digital image correlation (DIC) strain measurement technique. A finite element analysis model of the proximal femur subjected to gait loading was generated from a CT scan of an analog composite femur, and its predicted mechanical behavior was compared with an experimental model. Whereas previous studies have employed strain gauging to obtain discrete point data for validation, in this study DIC was used for full-field quantified comparison of the predicted and experimentally measured strains. The strain predicted by the computational model was in good agreement with experimental measurements, with R² correlation values from 0.83 to 0.92 between the simulation and the tests. The sensitivity and repeatability of the strain measurements were comparable to or better than values reported in the literature for other DIC tests on tissue specimens. The experimental-model correlation was in the same range as values obtained from strain gauging, but the DIC technique produced more detailed, full-field data and is potentially easier to use. As such, the findings supported the validity of the model generation process, giving greater confidence in the model's predictions, and digital image correlation was demonstrated as a useful tool for the validation of biomechanical models.
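
The full-field comparison reported above boils down to a regression of measured on predicted strains; the sketch below computes such an R² on synthetic strain values standing in for the exported DIC and FE fields, which are not available here.

```python
# Coefficient of determination between FE-predicted and DIC-measured strains,
# computed on simulated values standing in for the exported strain fields.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
fe_strain = rng.normal(0, 500e-6, 2000)                 # predicted strains (dimensionless)
dic_strain = fe_strain + rng.normal(0, 150e-6, 2000)    # "measured" strains with noise

slope, intercept, r, p, se = stats.linregress(fe_strain, dic_strain)
print(f"R^2 between model and experiment: {r**2:.2f}")
```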

18.
Computational models are increasingly being used to investigate the mechanical properties of cardiac tissue. While much insight has been gained from these studies, one important limitation associated with computational modeling arises when using in vivo images of the heart to generate the reference state of the model. An unloaded reference configuration is needed to accurately represent the deformation of the heart. However, it is rare for a beating heart to actually reach a zero-pressure state during the cardiac cycle. To overcome this, a computational technique was adapted to determine the unloaded configuration of an in vivo porcine left ventricle (LV). In the current study, in vivo measurements were acquired using magnetic resonance images (MRI) and synchronous pressure catheterization in the LV (N = 5). The overall goal was to quantify the effects of using early-diastolic filling as the reference configuration (a common assumption used in modeling) versus using the unloaded reference configuration for predicting the in vivo properties of LV myocardium. This was accomplished by using optimization to minimize the difference between MRI-measured and finite element-predicted strains and cavity volumes. The results show that when using the unloaded reference configuration, the computational method predicts material properties for LV myocardium that are softer and less anisotropic than when using the early-diastolic filling reference configuration. This indicates that the choice of reference configuration could have a significant impact on capturing the realistic mechanical response of the heart.
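
The inverse step described above can be caricatured with the toy optimization below, in which a closed-form pressure-strain relation stands in for the finite element model of the left ventricle; the stiffness parameter, loading values, and noise level are all invented for illustration.

```python
# Toy inverse material identification: choose a stiffness parameter so that
# model-predicted strains match "measured" strains, via scalar minimization.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(9)
pressure = np.linspace(2, 12, 20)                 # synthetic loading steps
true_stiffness = 8.0
measured = pressure / true_stiffness + rng.normal(0, 0.01, pressure.size)

def objective(stiffness):
    predicted = pressure / stiffness              # surrogate for the FE prediction
    return np.sum((predicted - measured) ** 2)

result = minimize_scalar(objective, bounds=(1.0, 50.0), method="bounded")
print(f"identified stiffness: {result.x:.2f} (true {true_stiffness})")
```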

19.
Measurement of receptor distributions on cell surfaces is one important aspect of understanding the mechanism whereby receptors function. In recent years, scanning fluorescence correlation spectroscopy has emerged as an excellent tool for making quantitative measurements of cluster sizes and densities. However, the measurements are slow and usually require fixed preparations. Moreover, while the precision is good, the accuracy is limited by the relatively small amount of information in each measurement, such that many are required. Here we present a novel extension of the scanning correlation spectroscopy that solves a number of the present problems. The new technique, which we call image correlation spectroscopy, is based on quantitative analysis of confocal scanning laser microscopy images. Since these can be generated in a matter of a second or so, the measurements become more rapid. The image is collected over a large cell area so that more sampling is done, improving the accuracy. The sacrifice is a lower resolution in the sampling, which leads to a lower precision. This compromise of precision in favor of speed and accuracy still provides an enormous advantage for image correlation spectroscopy over scanning correlation spectroscopy. The present work demonstrates the underlying theory, showing how the principles can be applied to measurements on standard fluorescent beads and changes in distribution of receptors for platelet-derived growth factor on human foreskin fibroblasts.
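
A minimal sketch of the spatial autocorrelation at the heart of image correlation spectroscopy, computed with FFTs from one simulated image of Gaussian spots; the fitting step that turns the zero-lag amplitude into cluster sizes and densities is omitted, and the spot count and width are arbitrary.

```python
# Normalized spatial autocorrelation of a single image, computed via FFT.
# For randomly placed spots its zero-lag amplitude is inversely related to the
# mean number of particles per beam area.
import numpy as np

rng = np.random.default_rng(10)
size, n_particles, sigma = 256, 200, 2.0
yy, xx = np.mgrid[0:size, 0:size]
image = np.zeros((size, size))
for y0, x0 in rng.uniform(0, size, size=(n_particles, 2)):
    image += np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma**2))

dI = image - image.mean()
corr = np.fft.ifft2(np.abs(np.fft.fft2(dI)) ** 2).real / image.size
g = np.fft.fftshift(corr) / image.mean() ** 2          # normalized g(xi, eta)

print(f"g(0,0) = {g[size // 2, size // 2]:.4f}")        # roughly 1 / (particles per beam area)
```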

20.
Analyzing time series gene expression data (total citations: 7; self-citations: 0; citations by others: 7)
MOTIVATION: Time series expression experiments are an increasingly popular method for studying a wide range of biological systems. However, when analyzing these experiments researchers face many new computational challenges. Algorithms that are specifically designed for time series experiments are required so that we can take advantage of their unique features (such as the ability to infer causality from the temporal response pattern) and address the unique problems they raise (e.g. handling the different non-uniform sampling rates). RESULTS: We present a comprehensive review of the current research in time series expression data analysis. We divide the computational challenges into four analysis levels: experimental design, data analysis, pattern recognition and networks. For each of these levels, we discuss computational and biological problems at that level and point out some of the methods that have been proposed to deal with these issues. Many open problems at all these levels are discussed. This review is intended to serve both as a point of reference for experimental biologists looking for practical solutions for analyzing their data, and as a starting point for computer scientists interested in working on the computational problems related to time series expression analysis.
