Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Ayres KL, Balding DJ. Genetics. 2001;157(1):413-423
We describe a Bayesian approach to analyzing multilocus genotype or haplotype data to assess departures from gametic (linkage) equilibrium. Our approach employs a Markov chain Monte Carlo (MCMC) algorithm to approximate the posterior probability distributions of disequilibrium parameters. The distributions are computed exactly in some simple settings. Among other advantages, posterior distributions can be presented visually, which allows the uncertainties in parameter estimates to be readily assessed. In addition, background knowledge can be incorporated, where available, to improve the precision of inferences. The method is illustrated by application to previously published datasets; implications for multilocus forensic match probabilities and for simple association-based gene mapping are also discussed.
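The abstract does not give the authors' model or sampler in detail, so the following is only a minimal random-walk Metropolis sketch of the general idea: posterior inference for a disequilibrium coefficient D between two biallelic loci, assuming known haplotype counts, a multinomial likelihood, and flat priors over the admissible frequency region. The counts, step size, and chain length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
counts = np.array([55, 20, 15, 10])          # assumed haplotype counts: AB, Ab, aB, ab

def log_lik(pA, pB, D):
    # Haplotype frequencies under the standard (pA, pB, D) parameterization.
    h = np.array([pA*pB + D, pA*(1-pB) - D, (1-pA)*pB - D, (1-pA)*(1-pB) + D])
    if np.any(h <= 0):
        return -np.inf                        # outside the admissible region
    return np.sum(counts * np.log(h))         # multinomial log-likelihood (constant dropped)

# Random-walk Metropolis with flat priors over the admissible region.
theta = np.array([0.5, 0.5, 0.0])
ll = log_lik(*theta)
samples = []
for _ in range(50_000):
    prop = theta + rng.normal(0, 0.02, 3)
    ll_prop = log_lik(*prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    samples.append(theta.copy())

D_post = np.array(samples)[10_000:, 2]        # discard burn-in, keep D
print(f"posterior mean D = {D_post.mean():.4f}, 95% interval = {np.percentile(D_post, [2.5, 97.5])}")
```

The posterior draws of D can be plotted as a histogram, which is the kind of visual presentation of parameter uncertainty the abstract refers to.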

2.

Background

Questions about the reliability of parametric standard errors (SEs) from nonlinear least squares (LS) algorithms have led to a general mistrust of these precision estimators that is often unwarranted.

Methods

The importance of non-Gaussian parameter distributions is illustrated by converting linear models to nonlinear by substituting eA, ln A, and 1/A for a linear parameter a. Monte Carlo (MC) simulations characterize parameter distributions in more complex cases, including when data have varying uncertainty and should be weighted, but weights are neglected. This situation leads to loss of precision and erroneous parametric SEs, as is illustrated for the Lineweaver-Burk analysis of enzyme kinetics data and the analysis of isothermal titration calorimetry data.

Results

Non-Gaussian parameter distributions are generally asymmetric and biased. However, when the parametric SE is < 10% of the magnitude of the parameter, both the bias and the asymmetry can usually be ignored. Sometimes nonlinear estimators can be redefined to give more normal distributions and better convergence properties.

Conclusion

Variable data uncertainty, or heteroscedasticity, can sometimes be handled by data transforms but more generally requires weighted LS, which in turn require knowledge of the data variance.

General significance

Parametric SEs are rigorously correct in linear LS under the usual assumptions, and are a trustworthy approximation in nonlinear LS provided they are sufficiently small — a condition favored by the abundant, precise data routinely collected in many modern instrumental methods.
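A small Monte Carlo sketch of the Lineweaver-Burk point made in the Methods section is given below: when Michaelis-Menten data carry constant absolute noise, an unweighted double-reciprocal fit effectively misweights the data and degrades the Km estimate relative to a direct nonlinear fit. The "true" Vmax, Km, noise level, and substrate grid are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
Vmax, Km, sigma = 1.0, 2.0, 0.02                 # assumed true values and constant noise SD
S = np.array([0.5, 1, 2, 4, 8, 16])

def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

nl, lb = [], []
for _ in range(2000):
    v = mm(S, Vmax, Km) + rng.normal(0, sigma, S.size)
    # (a) direct nonlinear LS on the untransformed data
    p, _ = curve_fit(mm, S, v, p0=[0.5, 1.0])
    nl.append(p)
    # (b) unweighted Lineweaver-Burk: 1/v = (Km/Vmax)(1/S) + 1/Vmax
    slope, icept = np.polyfit(1/S, 1/v, 1)
    lb.append([1/icept, slope/icept])            # back-transform to (Vmax, Km)

for name, est in [("nonlinear fit", np.array(nl)), ("Lineweaver-Burk", np.array(lb))]:
    print(f"{name:16s} Km mean = {est[:,1].mean():.3f}, SD = {est[:,1].std():.3f}")
```

Comparing the two Km distributions over many synthetic datasets illustrates the bias and loss of precision that unweighted transformed fits can introduce.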

3.
Magnetic resonance (MR) imaging is the most complex imaging technology available to clinicians. Whereas most imaging technologies depict differences in one, or occasionally two, tissue characteristics, MR imaging has five tissue variables—spin density, T1 and T2 relaxation times, flow, and spectral shifts—from which to construct its images. These variables can be combined in various ways by selecting pulse sequences and pulse times to emphasize any desired combination of tissue characteristics in the image. This selection is determined by the user of the MR system before imaging data are collected. If the selection is not optimal, the imaging process must be repeated at a cost of time and resources. The optimal selection of MR imaging procedures and the proper interpretation of the resultant images require a thorough understanding of the basic principles of MR imaging. Included in this understanding should be at least the rudiments of how an MR imaging signal is produced and why it decays with time; the significance of relaxation constants; the principles of scanning methods such as saturation recovery, inversion recovery and spin echo; how data obtained by these methods are used to form an image; and how the imaging data are compiled by multi-slice and volumetric processes. In selecting an MR imaging unit, information about different magnet designs (resistive, superconductive and permanent) is useful. Although no bioeffects are thought to be associated with an MR imaging examination, some knowledge of the attempts to identify bioeffects is helpful in alleviating concern in patients.
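The role of pulse-sequence selection can be made concrete with the standard spin-echo signal approximation S ≈ ρ(1 − e^(−TR/T1))·e^(−TE/T2). The sketch below evaluates it for a few tissues; the relaxation values are rough, textbook-style numbers used only for illustration, not values from this article.

```python
import numpy as np

# Approximate spin-echo signal: S = rho * (1 - exp(-TR/T1)) * exp(-TE/T2)
tissues = {                      # (relative spin density, T1 ms, T2 ms) -- illustrative values
    "white matter": (0.65, 600, 80),
    "gray matter":  (0.75, 950, 100),
    "CSF":          (1.00, 4000, 2000),
}

def spin_echo(rho, T1, T2, TR, TE):
    return rho * (1 - np.exp(-TR / T1)) * np.exp(-TE / T2)

for label, (TR, TE) in {"T1-weighted": (500, 15), "T2-weighted": (3000, 100)}.items():
    signals = {t: round(spin_echo(*p, TR, TE), 3) for t, p in tissues.items()}
    print(label, signals)
```

Changing only TR and TE inverts the relative brightness of CSF and white matter, which is the sense in which the operator's pre-acquisition choices determine the contrast of the final image.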

4.
High-resolution strain measurements are of particular interest in load-bearing tissues such as the intervertebral disc (IVD), permitting characterization of biomechanical conditions that could lead to injury and degenerative outcomes. Magnetic resonance (MR) imaging produces excellent image contrast in cartilaginous tissues, allowing for image-based strain determination. Nonrigid registration (NRR) of MR images has previously demonstrated sub-voxel registration accuracy, although its accuracy and precision in determining strain have not been evaluated. Accuracy and precision of NRR-derived strain measurements were evaluated using computer-generated deformations applied to both computer-generated images and MR images. Two different measures of registration similarity (the cost function that drives the registration algorithm) were compared: Mutual Information (MI) and Least Squares (LS). Strain error was evaluated with respect to signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and strain heterogeneity. Additionally, the creep strain response from an in vitro loaded porcine IVD is shown and comparisons between similarity measures are presented. MI showed a decrease in strain precision with increasing CNR and decreasing SNR, while LS was insensitive to both. Both similarity measures showed a decrease in strain precision with increasing strain heterogeneity. When computer-generated heterogeneous strains were applied to MR images of the IVD, LS showed substantially lower strain error in comparison to MI. Results suggest that LS-driven NRR provides a more accurate image-based method for mapping large and heterogeneous strain fields, and this method can be applied to studies of the IVD and, potentially, other soft tissues that present sufficient image texture.
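For readers unfamiliar with the two cost functions compared here, the sketch below computes a sum-of-squared-differences (least-squares) score and a histogram-based mutual information score between image pairs. It is a generic illustration on synthetic images, not the paper's registration pipeline; the image size, bin count, and noise levels are assumptions.

```python
import numpy as np

def ssd(a, b):
    """Least-squares similarity (sum of squared differences); lower is more similar."""
    return np.sum((a - b) ** 2)

def mutual_information(a, b, bins=64):
    """Histogram-based mutual information; higher is more similar."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

# Toy check on a smooth synthetic image: a slightly shifted, noisy copy scores
# as more similar than an unrelated noise image under both measures.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
ref = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 20 ** 2))
moved = np.roll(ref, 2, axis=0) + rng.normal(0, 0.02, ref.shape)
unrelated = rng.normal(0, 0.3, ref.shape)
print("SSD moved / unrelated:", ssd(ref, moved), ssd(ref, unrelated))
print("MI  moved / unrelated:", mutual_information(ref, moved), mutual_information(ref, unrelated))
```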

5.
MultiSig is a newly developed mode of analysis of sedimentation equilibrium (SE) experiments in the analytical ultracentrifuge, capable of taking advantage of the remarkable precision (~0.1% of signal) of the principal optical (fringe) system employed, thus supplanting existing methods of analysis by reducing the ‘noise’ level of certain important parameter estimates by up to orders of magnitude. Long-known limitations of the SE method, arising from lack of knowledge of the true fringe number in fringe optics and from the use of unstable numerical algorithms such as numerical differentiation, have been transcended. An approach to data analysis, akin to ‘spatial filtering’, has been developed, and shown by both simulation and practical application to be a powerful aid to the precision with which near-monodisperse systems can be analysed, potentially yielding information on protein-solvent interaction. For oligo- and poly-disperse systems, the information returned includes precise average mass distributions over both cell radial and concentration ranges and mass-frequency histograms at fixed radial positions. The application of MultiSig analysis to various complex heterogeneous systems and potentially multiply-interacting carbohydrate oligomers is described.

6.
Characterization of tissues such as the brain using magnetic resonance (MR) images and colorization of the gray-scale image has been reported in the literature, along with its advantages and drawbacks. Here, we present two independent methods: (i) a novel colorization method to underscore the variability in brain MR images, indicative of the underlying physical density of biological tissue, and (ii) a segmentation method (both hard and soft segmentation) to characterize gray-scale brain MR images. The segmented images are then transformed into color using the above-mentioned colorization method, yielding promising results for manual tracing. Our color transformation incorporates voxel classification by matching the luminance of voxels in the source MR image and the provided color image, measured as the distance between them. The segmentation method is based on single-phase clustering for 2D and 3D image segmentation with a new automatic centroid selection method, which divides the image into three distinct regions (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) using prior anatomical knowledge. Results have been successfully validated on human T2-weighted (T2) brain MR images. The proposed method can potentially be applied to gray-scale images from other imaging modalities, bringing out the additional diagnostic tissue information revealed by the colorization approach described.

7.
Parameter inference and model selection are very important for mathematical modeling in systems biology. Bayesian statistics can be used to conduct both parameter inference and model selection. In particular, the approximate Bayesian computation framework is often used for parameter inference and model selection in systems biology. However, Monte Carlo methods need to be used to compute Bayesian posterior distributions. In addition, the posterior distributions of parameters are sometimes almost uniform or very similar to their prior distributions. In such cases, it is difficult to choose one specific parameter value with high credibility as the representative value of the distribution. To overcome these problems, we introduced one of the population Monte Carlo algorithms, population annealing. Although population annealing is usually used in statistical mechanics, we showed that it can be used to compute Bayesian posterior distributions in the approximate Bayesian computation framework. To deal with the non-identifiability of representative parameter values, we proposed running the simulations with the parameter ensemble sampled from the posterior distribution, named the “posterior parameter ensemble”. We showed that population annealing is an efficient and convenient algorithm for generating the posterior parameter ensemble. We also showed that simulations with the posterior parameter ensemble can not only reproduce the data used for parameter inference but also capture and predict data that were not used for parameter inference. Lastly, we introduced the marginal likelihood in the approximate Bayesian computation framework for Bayesian model selection. We showed that population annealing enables us to compute the marginal likelihood in the approximate Bayesian computation framework and conduct model selection based on the Bayes factor.
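The abstract does not spell out the algorithmic details, so the following is only a loose sketch of an annealed ABC resample/reweight/mutate cycle on a toy exponential-decay model. The Gaussian ABC kernel, the tolerance schedule, and the crude Gaussian jitter used as the mutation move are all simplifying assumptions; a faithful population annealing implementation would use a proper MCMC mutation step at each temperature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: y(t) = a * exp(-b * t), observed with noise.
t = np.linspace(0, 5, 20)
a_true, b_true = 2.0, 0.8
y_obs = a_true * np.exp(-b_true * t) + rng.normal(0, 0.05, t.size)

def simulate(theta):
    a, b = theta
    return a * np.exp(-b * t)

def distance(theta):
    return np.sqrt(np.mean((simulate(theta) - y_obs) ** 2))

def kernel(d, eps):                  # log of a Gaussian ABC kernel
    return -0.5 * (d / eps) ** 2

N = 2000
eps_schedule = [1.0, 0.5, 0.25, 0.12, 0.06]          # decreasing ABC tolerances
theta = rng.uniform([0.0, 0.0], [5.0, 3.0], size=(N, 2))   # prior samples
d = np.array([distance(th) for th in theta])
log_w = np.zeros(N)
prev_eps = np.inf

for eps in eps_schedule:
    # Reweight the population for the sharper tolerance (annealing step).
    log_w += kernel(d, eps) - kernel(d, prev_eps)
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    # Resample and mutate (simple jitter stands in for an MCMC move here).
    idx = rng.choice(N, size=N, p=w)
    theta = theta[idx] + rng.normal(0, 0.02, size=(N, 2))
    d = np.array([distance(th) for th in theta])
    log_w = np.zeros(N)
    prev_eps = eps

print("posterior parameter ensemble mean:", theta.mean(axis=0))
```

The final `theta` array plays the role of the "posterior parameter ensemble": downstream simulations are run over all of its members rather than a single representative parameter set.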

8.
The modelling of biochemical networks becomes delicate if kinetic parameters are varying, uncertain or unknown. Facing this situation, we quantify uncertain knowledge or beliefs about parameters by probability distributions. We show how parameter distributions can be used to infer probabilistic statements about dynamic network properties, such as steady-state fluxes and concentrations, signal characteristics or control coefficients. The parameter distributions can also serve as priors in Bayesian statistical analysis. We propose a graphical scheme, the 'dependence graph', to bring out known dependencies between parameters, for instance, due to the equilibrium constants. If a parameter distribution is narrow, the resulting distribution of the variables can be computed by expanding them around a set of mean parameter values. We compute the distributions of concentrations, fluxes and probabilities for qualitative variables such as flux directions. The probabilistic framework allows the study of metabolic correlations, and it provides simple measures of variability and stochastic sensitivity. It also shows clearly how the variability of biological systems is related to the metabolic response coefficients.
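A minimal sketch of the core idea, propagating parameter distributions into distributions of a network property, is given below for a single reversible mass-action reaction. The log-normal spreads, rate constants, and metabolite levels are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain kinetic parameters described by log-normal distributions (assumed spreads).
k_plus = rng.lognormal(mean=np.log(1.0), sigma=0.3, size=n)     # forward rate constant
k_minus = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)    # backward rate constant
S, P = 1.0, 1.5                                                 # fixed metabolite levels

# Net flux of a reversible mass-action reaction S <-> P.
v = k_plus * S - k_minus * P

print(f"mean flux {v.mean():.3f}, SD {v.std():.3f}")
print(f"P(flux runs forward, v > 0) = {(v > 0).mean():.3f}")
# Simple stochastic-sensitivity measure: correlation of the flux with each parameter.
print("corr(v, k_plus)  =", round(np.corrcoef(v, k_plus)[0, 1], 3))
print("corr(v, k_minus) =", round(np.corrcoef(v, k_minus)[0, 1], 3))
```

The probability of a positive flux is exactly the kind of probabilistic statement about a qualitative variable (flux direction) that the abstract describes.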

9.
Analytical ultracentrifugation has reemerged as a widely used tool for the study of ensembles of biological macromolecules to understand, for example, their size-distribution and interactions in free solution. Such information can be obtained from the mathematical analysis of the concentration and signal gradients across the solution column and their evolution in time generated as a result of the gravitational force. In sedimentation velocity analytical ultracentrifugation, this analysis is frequently conducted using high resolution, diffusion-deconvoluted sedimentation coefficient distributions. They are based on Fredholm integral equations, which are ill-posed unless stabilized by regularization. In many fields, maximum entropy and Tikhonov-Phillips regularization are well-established and powerful approaches that calculate the most parsimonious distribution consistent with the data and prior knowledge, in accordance with Occam's razor. In the implementations available in analytical ultracentrifugation, to date, the basic assumption implied is that all sedimentation coefficients are equally likely and that the information retrieved should be condensed to the least amount possible. Frequently, however, more detailed distributions would be warranted by specific detailed prior knowledge on the macromolecular ensemble under study, such as the expectation of the sample to be monodisperse or paucidisperse or the expectation for the migration to establish a bimodal sedimentation pattern based on Gilbert-Jenkins' theory for the migration of chemically reacting systems. So far, such prior knowledge has remained largely unused in the calculation of the sedimentation coefficient or molecular weight distributions or was only applied as constraints. In the present paper, we examine how prior expectations can be built directly into the computational data analysis, conservatively in a way that honors the complete information of the experimental data, whether or not consistent with the prior expectation. Consistent with analogous results in other fields, we find that the use of available prior knowledge can have a dramatic effect on the resulting molecular weight, sedimentation coefficient, and size-and-shape distributions and can significantly increase both their sensitivity and their resolution. Further, the use of multiple alternative prior information allows us to probe the range of possible interpretations consistent with the data.
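The sketch below is not the paper's implementation; it only illustrates, on a toy ill-posed inversion, how a prior expectation can be built into a Tikhonov-style regularizer by penalizing departure from a prior distribution rather than from zero. The smoothing kernel, grids, noise level, and regularization weight are arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy ill-posed problem: data are a smooth (Gaussian-kernel) transform of a bimodal distribution.
s_grid = np.linspace(1, 10, 60)                       # sedimentation-coefficient grid
r_grid = np.linspace(1, 10, 40)                       # "measurement" grid
A = np.exp(-(r_grid[:, None] - s_grid[None, :]) ** 2 / 2.0)
c_true = np.exp(-(s_grid - 4.0) ** 2 / 0.1) + 0.6 * np.exp(-(s_grid - 7.0) ** 2 / 0.1)
b = A @ c_true + rng.normal(0, 0.05, r_grid.size)

def regularized_solution(alpha, c_prior):
    # min ||A c - b||^2 + alpha ||c - c_prior||^2 with c >= 0, via a stacked NNLS system.
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(s_grid.size)])
    b_aug = np.concatenate([b, np.sqrt(alpha) * c_prior])
    c, _ = nnls(A_aug, b_aug)
    return c

flat_prior = np.zeros(s_grid.size)                    # "all s equally likely" baseline
bimodal_prior = np.exp(-(s_grid - 4.0) ** 2 / 0.5) + np.exp(-(s_grid - 7.0) ** 2 / 0.5)
for name, prior in [("flat prior", flat_prior), ("bimodal expectation", bimodal_prior)]:
    c_hat = regularized_solution(alpha=0.2, c_prior=prior)   # alpha chosen by eye for the demo
    rms = np.sqrt(np.mean((c_hat - c_true) ** 2))
    print(f"{name:20s} RMS error vs truth: {rms:.3f}")
```

In practice the regularization weight would be set by a statistical criterion rather than by eye, but even this toy shows how the choice of prior expectation reshapes the recovered distribution.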

10.
Mathematical modeling is now frequently used in outbreak investigations to understand underlying mechanisms of infectious disease dynamics, assess patterns in epidemiological data, and forecast the trajectory of epidemics. However, the successful application of mathematical models to guide public health interventions lies in the ability to reliably estimate model parameters and their corresponding uncertainty. Here, we present and illustrate a simple computational method for assessing parameter identifiability in compartmental epidemic models. We describe a parametric bootstrap approach to generate simulated data from dynamical systems to quantify parameter uncertainty and identifiability. We calculate confidence intervals and mean squared error of estimated parameter distributions to assess parameter identifiability. To demonstrate this approach, we begin with a low-complexity SEIR model and work through examples of increasingly complex compartmental models that correspond with applications to pandemic influenza, Ebola, and Zika. Overall, parameter identifiability issues are more likely to arise with more complex models (based on number of equations/states and parameters). As the number of parameters being jointly estimated increases, the uncertainty surrounding estimated parameters tends to increase, on average, as well. We found that, in most cases, R0 is often robust to parameter identifiability issues affecting individual parameters in the model. Despite large confidence intervals and higher mean squared error of other individual model parameters, R0 can still be estimated with precision and accuracy. Because public health policies can be influenced by results of mathematical modeling studies, it is important to conduct parameter identifiability analyses prior to fitting the models to available data and to report parameter estimates with quantified uncertainty. The method described is helpful in these regards and enhances the essential toolkit for conducting model-based inferences using compartmental dynamic models.
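A condensed sketch of the parametric bootstrap idea on a simplified SEIR model follows. It is not the authors' code: the population size, fixed latency rate, Poisson observation noise, and the choice to fit only the transmission and recovery rates are simplifying assumptions made so the example stays short.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
N_pop, kappa = 1e5, 1 / 5.0                 # population size and fixed latency rate (assumed)
t_obs = np.arange(0, 60, 1.0)

def seir(t, y, beta, gamma):
    S, E, I, R = y
    return [-beta*S*I/N_pop, beta*S*I/N_pop - kappa*E, kappa*E - gamma*I, gamma*I]

def infectious_curve(params):
    beta, gamma = params
    sol = solve_ivp(seir, (0, t_obs[-1]), [N_pop - 10, 0, 10, 0],
                    t_eval=t_obs, args=(beta, gamma), rtol=1e-6)
    return np.clip(sol.y[2], 0, None)       # prevalence of infectious individuals

true = np.array([0.5, 1 / 7.0])
data = rng.poisson(infectious_curve(true))  # one synthetic "observed" epidemic

def fit(y):
    res = least_squares(lambda p: infectious_curve(p) - y,
                        x0=[0.3, 0.2], bounds=([0, 0], [2, 1]))
    return res.x

best = fit(data)

# Parametric bootstrap: resimulate from the fitted model, refit, and repeat.
boot = np.array([fit(rng.poisson(infectious_curve(best))) for _ in range(100)])
for name, est, samples in zip(["beta", "gamma"], best, boot.T):
    lo, hi = np.percentile(samples, [2.5, 97.5])
    mse = np.mean((samples - est) ** 2)
    print(f"{name}: {est:.3f}  95% CI [{lo:.3f}, {hi:.3f}]  MSE {mse:.4f}")
print("R0 95% CI:", np.percentile(boot[:, 0] / boot[:, 1], [2.5, 97.5]))
```

Wide bootstrap intervals for an individual parameter, alongside a comparatively tight interval for R0, is the pattern of partial identifiability the abstract describes.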

11.
IRBM. 2014;35(4):202-213
Speckle has been widely considered a noise feature in ultrasound images and is therefore usually suppressed or eliminated. On the other hand, speckle can be studied as a signal modeled by various statistical distributions, or by analyzing its intensity together with spatial relations in image space that characterize its nature and, hence, the nature of the underlying tissue. This knowledge can then be used to classify the different speckle regions into anatomical structures. In fact, speckle characterization in echocardiography and other ultrasonic images is important for motion tracking, tissue characterization, image segmentation, registration, and other medical applications for diagnosis, therapy planning and decision making. In this paper, we review and discuss various speckle characterization methods, which are often applied to confirm the speckle nature of the image elements.

12.

Background

Translating a known metabolic network into a dynamic model requires reasonable guesses of all enzyme parameters. In Bayesian parameter estimation, model parameters are described by a posterior probability distribution, which scores the potential parameter sets, showing how well each of them agrees with the data and with the prior assumptions made.

Results

We compute posterior distributions of kinetic parameters within a Bayesian framework, based on integration of kinetic, thermodynamic, metabolic, and proteomic data. The structure of the metabolic system (i.e., stoichiometries and enzyme regulation) needs to be known, and the reactions are modelled by convenience kinetics with thermodynamically independent parameters. The parameter posterior is computed in two separate steps: a first posterior summarises the available data on enzyme kinetic parameters; an improved second posterior is obtained by integrating metabolic fluxes, concentrations, and enzyme concentrations for one or more steady states. The data can be heterogeneous, incomplete, and uncertain, and the posterior is approximated by a multivariate log-normal distribution. We apply the method to a model of the threonine synthesis pathway: the integration of metabolic data has little effect on the marginal posterior distributions of individual model parameters. Nevertheless, it leads to strong correlations between the parameters in the joint posterior distribution, which greatly improve the model predictions in the subsequent Monte Carlo simulations.

Conclusion

We present a standardised method to translate metabolic networks into dynamic models. To determine the model parameters, evidence from various experimental data is combined and weighted using Bayesian parameter estimation. The resulting posterior parameter distribution describes a statistical ensemble of parameter sets; the parameter variances and correlations can account for missing knowledge, measurement uncertainties, or biological variability. The posterior distribution can be used to sample model instances and to obtain probabilistic statements about the model's dynamic behaviour.
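The effect of posterior correlations on predictions can be illustrated with a small sketch: sample two kinetic parameters from a multivariate log-normal, with and without correlation, and propagate them through a simple rate law. The means, spreads, and the correlation value are assumptions chosen only to demonstrate the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
mu = np.log([2.0, 0.5])                     # log-space means of (Vmax, Km) -- assumed
sd = np.array([0.4, 0.4])                   # log-space standard deviations -- assumed

def sample(rho):
    cov = np.array([[sd[0]**2, rho*sd[0]*sd[1]],
                    [rho*sd[0]*sd[1], sd[1]**2]])
    return np.exp(rng.multivariate_normal(mu, cov, size=n))   # multivariate log-normal draws

S = 1.0
for rho in (0.0, 0.9):                      # without / with the correlation induced by metabolic data
    Vmax, Km = sample(rho).T
    v = Vmax * S / (Km + S)                 # propagate the ensemble through a simple rate law
    print(f"rho = {rho:3.1f}: predicted rate mean {v.mean():.3f}, SD {v.std():.3f}")
```

Even though the marginal distributions are identical in both cases, the correlated ensemble yields a visibly narrower predictive distribution, which is the point made in the Results section.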

13.
Sparse MRI has been introduced to reduce the acquisition time and raw data size by undersampling the k-space data. However, the image quality, particularly the contrast-to-noise ratio (CNR), decreases with the undersampling rate. In this work, we proposed an interpolated Compressed Sensing (iCS) method to further enhance the imaging speed or reduce data size without significant sacrifice of image quality and CNR for multi-slice two-dimensional sparse MR imaging in humans. This method utilizes the k-space data of the neighboring slice in the multi-slice acquisition. The missing k-space data of a highly undersampled slice are estimated by using the raw data of its neighboring slice multiplied by a weighting function generated from low-resolution full k-space reference images. In-vivo MR imaging in human feet has been used to investigate the feasibility and the performance of the proposed iCS method. The results show that by using the proposed iCS reconstruction method, the average image error can be reduced and the average CNR can be improved, compared with the conventional sparse MRI reconstruction at the same undersampling rate.

14.
Image denoising has a profound impact on the precision of estimated parameters in diffusion kurtosis imaging (DKI). This work first proposes an approach to constructing a DKI phantom that can be used to evaluate the performance of denoising algorithms with regard to their ability to improve the reliability of DKI parameter estimation. The phantom was constructed from a real DKI dataset of a human brain, and the pipeline used to construct the phantom consists of diffusion-weighted (DW) image filtering, diffusion and kurtosis tensor regularization, and DW image reconstruction. The phantom preserves the image structure while minimizing image noise, and thus can be used as ground truth in the evaluation. Second, we used the phantom to evaluate three representative non-local means (NLM) algorithms. Results showed that one scheme of vector-based NLM, which uses DWI data with redundant information acquired at different b-values, produced the most reliable estimation of DKI parameters in terms of Mean Square Error (MSE), Bias and standard deviation (Std). The results of the comparison based on the phantom were consistent with those based on real datasets.

15.
Global analysis of fluorescence lifetime imaging microscopy data
Global analysis techniques are described for frequency domain fluorescence lifetime imaging microscopy (FLIM) data. These algorithms exploit the prior knowledge that only a limited number of fluorescent molecule species whose lifetimes do not vary spatially are present in the sample. Two approaches to implementing the lifetime invariance constraint are described. In the lifetime invariant fit method, each image in the lifetime image sequence is spatially averaged to obtain an improved signal-to-noise ratio. The lifetime estimations from these averaged data are used to recover the fractional contribution to the steady-state fluorescence on a pixel-by-pixel basis for each species. The second, superior, approach uses a global analysis technique that simultaneously fits the fractional contributions in all pixels and the spatially invariant lifetimes. In frequency domain FLIM the maximum number of lifetimes that can be fit with the global analysis method is twice the number of lifetimes that can be fit with conventional approaches. As a result, it is possible to discern two lifetimes with a single-frequency FLIM setup. The algorithms were tested on simulated data and then applied to separate the cellular distributions of coexpressed green fluorescent proteins in living cells.
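A simplified sketch of the global-fitting idea is shown below: at a single modulation frequency, two spatially invariant lifetimes are fitted jointly with per-pixel fractional contributions, so 2P measurements support P + 2 parameters. The modulation frequency, lifetimes, pixel count, and noise level are assumed values, and the complex single-frequency response used here is a simplification of a real FLIM measurement chain.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
omega = 2 * np.pi * 0.080            # 80 MHz modulation frequency, in rad/ns (assumed)
tau1_true, tau2_true = 1.0, 3.5      # spatially invariant lifetimes, in ns (assumed)
P = 64                               # number of pixels
f_true = rng.uniform(0.2, 0.8, P)    # per-pixel fractional contribution of species 1

def response(tau1, tau2, f):
    # Complex single-frequency response of a two-component mixture.
    return f / (1 + 1j * omega * tau1) + (1 - f) / (1 + 1j * omega * tau2)

data = response(tau1_true, tau2_true, f_true)
data = data + rng.normal(0, 0.005, P) + 1j * rng.normal(0, 0.005, P)

def residuals(p):
    tau1, tau2, f = p[0], p[1], p[2:]
    r = response(tau1, tau2, f) - data
    return np.concatenate([r.real, r.imag])   # 2P data points vs P + 2 parameters

p0 = np.concatenate([[0.5, 5.0], np.full(P, 0.5)])
lb = np.concatenate([[0.05, 0.05], np.zeros(P)])
ub = np.concatenate([[20.0, 20.0], np.ones(P)])
fit = least_squares(residuals, p0, bounds=(lb, ub))
print("recovered lifetimes (ns):", np.round(np.sort(fit.x[:2]), 3))
```

Fitting two lifetimes per pixel independently would be under-determined at a single frequency; sharing the lifetimes across pixels is what makes the problem well posed, which is the core of the global analysis argument.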

16.
Digital image-based cytometry of clinical specimens labeled with fluorescent, disease-specific markers holds promise for becoming an important diagnostic and prognostic technique because the technique can make a diverse range of quantitative biochemical, morphologic, densitometric and contextual measurements on intact specimens. It has been previously shown by us, using an image cytometer (IC) consisting entirely of commercially available components, that the nuclei of individual cells in slide-supported specimens can be detected automatically using a fluorescent DNA stain and image analysis software. The purpose of this study was to determine the precision of the IC for quantifying the integrated fluorescence intensity and area of fluorescent standard beads and nuclei. Integrated intensities could be quantified to between 2.3% and 3.5% precision using a 40x objective lens and between 1.6% and 2.3% using a 20x objective. The main contribution to this uncertainty was 2% inaccuracy in determining the variations in sensitivity over the imaging area. Areas could be quantified to between 0.91% and 2.1% using a 40x objective and between 2.8% and 3.2% using a 20x objective. Significant quantification errors were introduced if the objects were not in focus or were touching each other. Overall, however, these results demonstrated that image cytometry of fluorescence-stained specimens can yield quantitative results with sufficient precision for determining DNA ploidy distributions and for making other measurements on clinical specimens.

17.
Clustering is an important research area that has practical applications in many fields. Fuzzy clustering has shown advantages over crisp and probabilistic clustering, especially when there are significant overlaps between clusters. Most analytic fuzzy clustering approaches are derived from Bezdek's fuzzy c-means algorithm. One major factor that influences the determination of appropriate clusters in these approaches is an exponent parameter, called the fuzzifier. To our knowledge, no theoretical reason leading to an optimal setting of this parameter is available. This paper presents the development of a heuristic scheme for determining the fuzzifier. This scheme creates close interactions between the fuzzifier and the data set to be clustered. Experimental results in clustering IRIS data and in codebook design for image compression demonstrate the good performance of the proposed scheme.
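For context, a plain fuzzy c-means implementation is sketched below; the fuzzifier m that appears as an argument is exactly the exponent the proposed heuristic would select. The synthetic two-blob data and the two m values compared are illustrative only, not the paper's experiments.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; m is the fuzzifier exponent."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=X.shape[0])            # initial memberships (n x c)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]        # membership-weighted centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    return centers, U

# Two well-separated 2-D blobs; compare a low and a high fuzzifier.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
for m in (1.5, 3.0):
    centers, U = fuzzy_c_means(X, c=2, m=m)
    print(f"m={m}: centers={np.round(centers, 2)}, mean max membership={U.max(axis=1).mean():.3f}")
```

Raising m drives the memberships toward uniformity while lowering it toward crisp assignments, which is why the choice of fuzzifier matters and motivates a data-driven heuristic for setting it.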

18.
With the development of medical imaging modalities and image processing algorithms, there arises a need for methods of their comprehensive quantitative evaluation. In particular, this concerns the algorithms for vessel tracking and segmentation in magnetic resonance angiography images. The problem can be approached by using synthetic images, where the true geometry of the vessels is known. This paper presents a framework for computer modeling of MRA imaging and the results of its validation. The new model incorporates blood flow simulation within the MR signal computation kernel. The proposed solution is unique, especially with respect to the interface between the flow and image formation processes. Furthermore, it utilizes the concept of particle tracing. The particles reflect the flow of the fluid they are immersed in and are assigned magnetization vectors whose temporal evolution is controlled by MR physics. Such an approach ensures flexibility, as the designed simulator is able to reconstruct flow profiles of any type. The proposed model is validated in a series of experiments with physical and digital flow phantoms. The synthesized 3D images contain various features (including artifacts) characteristic of the time-of-flight protocol and exhibit remarkable correlation with the data acquired in a real MR scanner. The obtained results support the primary goal of the conducted research, i.e., establishing a reference technique for a quantified validation of MR angiography image processing algorithms.

19.
In susceptibility-weighted imaging (SWI), the high resolution required for proper contrast generation leads to a reduced signal-to-noise ratio (SNR). The application of a denoising filter to produce images with higher SNR and still preserve small structures from excessive blurring is therefore extremely desirable. However, as the distributions of magnitude and phase noise may introduce biases during image restoration, the application of a denoising filter is non-trivial. Taking advantage of the potential multispectral nature of MR images, a multicomponent approach using a Non-Local Means (MNLM) denoising filter may perform better than a component-by-component image restoration method. Here we present a new MNLM-based method (Multicomponent-Imaginary-Real-SWI, hereafter MIR-SWI) to produce SWI images with high SNR and improved conspicuity. Both qualitative and quantitative comparisons of MIR-SWI with the original SWI scheme and previously proposed SWI restoring pipelines showed that MIR-SWI fared consistently better than the other approaches. Noise removal with MIR-SWI also provided improvement in contrast-to-noise ratio (CNR) and vessel conspicuity at higher factors of phase mask multiplications than the one suggested in the literature for SWI vessel imaging. We conclude that a proper handling of noise in the complex MR dataset may lead to improved image quality for SWI data.
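A generic sketch of the multicomponent idea, not the MIR-SWI pipeline itself, is given below: the real and imaginary parts of a synthetic complex image are stacked as channels so that non-local means patch similarity is computed jointly over both components. It assumes a recent scikit-image (the `channel_axis` keyword; older versions used `multichannel=True`), Gaussian noise on each channel, and arbitrary filter settings.

```python
import numpy as np
from skimage.restoration import denoise_nl_means

rng = np.random.default_rng(0)

# Synthetic complex-valued slice: smooth magnitude and phase, plus Gaussian noise
# on the real and imaginary channels (the noise model assumed here).
y, x = np.mgrid[0:128, 0:128] / 128.0
mag = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)
complex_img = mag * np.exp(1j * 2.0 * x)
sigma = 0.05
noisy = complex_img + rng.normal(0, sigma, complex_img.shape) \
                    + 1j * rng.normal(0, sigma, complex_img.shape)

# Multicomponent NLM: real and imaginary parts are treated as channels so that
# patch distances are computed jointly over both components.
stacked = np.stack([noisy.real, noisy.imag], axis=-1)
den = denoise_nl_means(stacked, patch_size=5, patch_distance=6,
                       h=0.8 * sigma, sigma=sigma, channel_axis=-1)
denoised = den[..., 0] + 1j * den[..., 1]

rms = lambda im: np.sqrt(np.mean(np.abs(im - complex_img) ** 2))
print(f"RMS error before / after denoising: {rms(noisy):.4f} / {rms(denoised):.4f}")
```

Denoising the complex data before computing magnitude and phase masks avoids the bias that comes from filtering Rician-distributed magnitude images directly, which is the motivation stated in the abstract.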

20.
In 2001, Krueger and Glover introduced a model describing the temporal SNR (tSNR) of an EPI time series as a function of image SNR (SNR0). This model has been used to study physiological noise in fMRI, to optimize fMRI acquisition parameters, and to estimate maximum attainable tSNR for a given set of MR image acquisition and processing parameters. In its current form, this noise model requires the accurate estimation of image SNR. For multi-channel receiver coils, this is not straightforward because it requires export and reconstruction of large amounts of k-space raw data and detailed, custom-made image reconstruction methods. Here we present a simple extension to the model that allows characterization of the temporal noise properties of EPI time series acquired with multi-channel receiver coils, and reconstructed with standard root-sum-of-squares combination, without the need for raw data or custom-made image reconstruction. The proposed extended model includes an additional parameter κ which reflects the impact of noise correlations between receiver channels on the data and scales an apparent image SNR (SNR′0) measured directly from root-sum-of-squares reconstructed magnitude images so that κ = SNR′0/SNR0 (under the condition of SNR0>50 and number of channels ≤32). Using Monte Carlo simulations we show that the extended model parameters can be estimated with high accuracy. The estimation of the parameter κ was validated using an independent measure of the actual SNR0 for non-accelerated phantom data acquired at 3T with a 32-channel receiver coil. We also demonstrate that compared to the original model the extended model results in an improved fit to human task-free non-accelerated fMRI data acquired at 7T with a 24-channel receiver coil. In particular, the extended model improves the prediction of low to medium tSNR values and so can play an important role in the optimization of high-resolution fMRI experiments at lower SNR levels.
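The commonly quoted Krueger-Glover form is tSNR = SNR0 / sqrt(1 + λ²·SNR0²), and the extension described here replaces SNR0 by SNR′0/κ. The sketch below fits both parameters to simulated tSNR measurements with curve_fit; the λ, κ, SNR range, and noise level are assumed values for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

def tsnr_extended(snr0_apparent, lam, kappa):
    """Extended model: tSNR = SNR0 / sqrt(1 + lam^2 SNR0^2) with SNR0 = SNR'0 / kappa."""
    snr0 = snr0_apparent / kappa
    return snr0 / np.sqrt(1.0 + (lam * snr0) ** 2)

# Simulated measurements over a range of apparent image SNR values (assumed lam, kappa).
lam_true, kappa_true = 0.008, 1.25
snr0_app = np.linspace(20, 400, 30)
tsnr_obs = tsnr_extended(snr0_app, lam_true, kappa_true) * rng.normal(1.0, 0.02, snr0_app.size)

(lam_hat, kappa_hat), _ = curve_fit(tsnr_extended, snr0_app, tsnr_obs, p0=[0.01, 1.0])
print(f"lambda = {lam_hat:.4f}, kappa = {kappa_hat:.3f}")
print("asymptotic tSNR limit 1/lambda =", round(1 / lam_hat, 1))
```

The fit works because the data span both regimes: at low SNR the curve is approximately linear with slope 1/κ, while at high SNR it saturates toward the physiological-noise ceiling 1/λ.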

