Similar Documents
20 similar documents retrieved.
1.
In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position is consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
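A minimal sketch of the core computation described above, zero-order Tikhonov regularization of a transfer-matrix inverse, assuming NumPy and a synthetic transfer matrix in place of the tank geometry. The CRESO criterion itself is not reproduced; the regularization parameter is simply swept and scored against a known reference solution, which is possible in the in vitro setting where the epicardial potentials were measured directly.

```python
# Zero-order Tikhonov regularization for an epicardial inverse problem.
# A is a (hypothetical) forward transfer matrix, b the body-surface data.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
    return Vt.T @ (f / s * (U.T @ b))

def best_lambda(A, b, x_true, lambdas):
    """Pick the lambda giving the smallest relative error w.r.t. a reference."""
    errs = [np.linalg.norm(tikhonov_solve(A, b, lam) - x_true)
            / np.linalg.norm(x_true) for lam in lambdas]
    return lambdas[int(np.argmin(errs))], min(errs)

# Synthetic stand-in for the tank data: 192 torso leads, 64 epicardial nodes.
rng = np.random.default_rng(0)
A = rng.standard_normal((192, 64))
x_true = rng.standard_normal(64)
b = A @ x_true + 0.05 * rng.standard_normal(192)
lam, err = best_lambda(A, b, x_true, np.logspace(-4, 1, 50))
print(f"lambda = {lam:.3g}, relative error = {err:.3f}")
```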

2.
The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.
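A sketch of the data-independent comparison described above: build a truncated-SVD inverse and a zero-order Tikhonov inverse of a transfer matrix and compare their worst-case noise amplification. The lead/region counts and the 60% truncation level follow the abstract; the random matrix and the Tikhonov parameter are purely illustrative stand-ins for a boundary-element forward model.

```python
# Compare noise amplification of truncated-SVD and Tikhonov regularized inverses.
import numpy as np

def tsvd_inverse(A, keep_fraction=0.6):
    """Pseudo-inverse keeping only the largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(round(keep_fraction * len(s))))
    return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T

def tikhonov_inverse(A, lam):
    """Regularized inverse (A^T A + lam^2 I)^-1 A^T."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T)

rng = np.random.default_rng(1)
A = rng.standard_normal((192, 74))           # 192 leads, 74 epicardial regions
for name, Ainv in [("TSVD (60% kept)", tsvd_inverse(A, 0.6)),
                   ("Tikhonov       ", tikhonov_inverse(A, 0.1))]:
    # The spectral norm of the inverse bounds worst-case noise amplification.
    print(name, "noise amplification:", np.linalg.norm(Ainv, 2))
```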

3.
This paper addresses the instability of the solution that arises when numerical methods are applied to the three-dimensional inverse problem of electrocardiography, and proposes two effective remedies: raising the conductivity values and using a damped least-squares method with a suitably chosen regularization factor. Using a combined finite-element and boundary-element approach, epicardial potentials were reconstructed in a three-dimensional human torso model containing an anisotropically conducting muscle layer. The results show that these measures are very effective in improving the numerical stability and the accuracy of the inverse electrocardiography solution.

4.
The inverse problem in electrocardiography is studied analytically using a concentric spheres model with no symmetry assumptions on the potential distribution. The mathematical formulation is presented, and existence and uniqueness of the solution are briefly discussed. The solution to the inverse problem is inherently very unstable. The magnitude of this instability is demonstrated using the derived analytical inverse solution for the spherical model. Regularization methods used to date are based on a regularization parameter that does not relate to any measurable physiological parameters. This paper presents a regularization method that is based on a parameter in the form of an a priori bound on the L2 norm of the inverse solution. Such a bound can be obtained from the theoretical estimates based on the measured values of the body surface potentials together with experimental knowledge about the magnitudes of the epicardial potentials. Based on the presented regularization, an exact form of the regularized solution and estimates of its accuracy are derived.
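A minimal sketch of regularization by an a priori bound on the L2 norm of the solution, the idea described above: minimize ||Ax - b|| subject to ||x|| <= E. The constrained problem is solved through its Tikhonov form, with the damping parameter found by bisection so that the solution norm meets the bound (assuming the unconstrained least-squares solution violates it); E would come from experimental knowledge of epicardial potential magnitudes. The data below are synthetic.

```python
import numpy as np

def bounded_norm_solve(A, b, E, lam_lo=1e-8, lam_hi=1e8, iters=100):
    """min ||A x - b|| subject to ||x|| <= E, via bisection on the Tikhonov parameter."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b

    def x_of(lam):
        return Vt.T @ (s / (s**2 + lam**2) * Utb)

    for _ in range(iters):                      # ||x_lam|| decreases as lam grows
        lam = np.sqrt(lam_lo * lam_hi)
        if np.linalg.norm(x_of(lam)) > E:
            lam_lo = lam
        else:
            lam_hi = lam
    return x_of(lam_hi)

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 40))
x_true = rng.standard_normal(40)
b = A @ x_true + 0.1 * rng.standard_normal(100)
x = bounded_norm_solve(A, b, E=0.9 * np.linalg.norm(x_true))
print("||x|| =", np.linalg.norm(x))
```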

5.
Purpose: Positron emission tomography (PET) images tend to be significantly degraded by the partial volume effect (PVE) resulting from the limited spatial resolution of the reconstructed images. Our purpose is to propose a partial volume correction (PVC) method to tackle this issue. Methods: In the present work, we explore a voxel-based PVC method under the least squares (LS) framework employing anatomical non-local means (NLMA) regularization. The well-known non-local means (NLM) filter utilizes the high degree of information redundancy that typically exists in images, and is typically used to directly reduce image noise by replacing each voxel intensity with a weighted average of its non-local neighbors. Here we explore NLM as a regularization term within an iterative-deconvolution model to perform PVC. Further, an anatomically guided version of NLM was proposed that incorporates MRI information into NLM to improve resolution and suppress image noise. The proposed approach makes subtle usage of the accompanying MRI information to define a more appropriate search space within the prior model. To optimize the regularized LS objective function, we used the Gauss-Seidel (GS) algorithm with the one-step-late (OSL) technique. Results: With NLMA, both the visual and the quantitative results improve. On visual inspection, NLMA reduces noise compared with other PVC methods. This is also borne out by the bias-noise curves relative to the non-MRI-guided PVC framework: NLMA gives a better bias-noise trade-off than the other PVC methods. Conclusions: Our method was evaluated on amyloid brain PET imaging using the BrainWeb phantom and in vivo human data, and compared with other PVC methods. Overall, we demonstrated the value of introducing subtle MRI guidance into the regularization process, with the proposed NLMA method yielding promising visual as well as quantitative performance improvements.
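A minimal sketch of anatomically guided non-local-means weights, the building block of the NLMA regularizer described above: for a given voxel, patch similarities are computed on the MRI image and the resulting weights are applied to the PET estimate (here simply as a weighted average standing in for the regularization term of the iterative deconvolution). Patch radius, search radius, and h are illustrative values, not the paper's settings.

```python
import numpy as np

def nlm_weights(mri, center, patch_r=1, search_r=3, h=0.1):
    """NLM weights around `center`, computed from MRI patch similarities."""
    ci, cj = center
    ref = mri[ci-patch_r:ci+patch_r+1, cj-patch_r:cj+patch_r+1]
    weights, coords = [], []
    for i in range(ci-search_r, ci+search_r+1):
        for j in range(cj-search_r, cj+search_r+1):
            patch = mri[i-patch_r:i+patch_r+1, j-patch_r:j+patch_r+1]
            d2 = np.mean((patch - ref) ** 2)       # patch distance on the MRI
            weights.append(np.exp(-d2 / h**2))
            coords.append((i, j))
    w = np.array(weights)
    return w / w.sum(), coords

rng = np.random.default_rng(3)
mri = rng.random((32, 32))                         # toy anatomical image
pet = rng.random((32, 32))                         # toy functional image
w, coords = nlm_weights(mri, center=(16, 16))
nlm_value = sum(wi * pet[i, j] for wi, (i, j) in zip(w, coords))
print("NLM-regularized value at (16, 16):", nlm_value)
```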

6.
One of the fundamental problems in theoretical electrocardiography can be characterized by an inverse problem. We present new methods for achieving better estimates of heart surface potential distributions in terms of torso potentials through an inverse procedure. First, we outline an automatic adaptive refinement algorithm that minimizes the spatial discretization error in the transfer matrix, increasing the accuracy of the inverse solution. Second, we introduce a new local regularization procedure, which works by partitioning the global transfer matrix into submatrices, allowing for varying amounts of smoothing. Each submatrix represents a region within the underlying geometric model in which regularization can be specifically 'tuned' using an a priori scheme based on the L-curve method. This local regularization method can provide a substantial increase in accuracy compared to global regularization schemes. Within this context of local regularization, we show that a generalized version of the singular value decomposition (GSVD) can further improve the accuracy of ECG inverse solutions compared to standard SVD and Tikhonov approaches. We conclude with specific examples of these techniques using geometric models of the human thorax derived from MRI data.
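A minimal sketch of L-curve based selection of the Tikhonov parameter, the a priori tuning scheme named above: for each candidate lambda the residual norm and solution norm are recorded, and the corner of the log-log curve is located as the point of maximum discrete curvature. The transfer matrix here is synthetic, and the regional partitioning and GSVD of the paper are not reproduced.

```python
import numpy as np

def l_curve_corner(A, b, lambdas):
    """Return the lambda at the corner (max curvature) of the L-curve."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    rho, eta = [], []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)
        x = Vt.T @ (f / s * Utb)
        rho.append(np.log(np.linalg.norm(A @ x - b)))   # log residual norm
        eta.append(np.log(np.linalg.norm(x)))           # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[int(np.argmax(np.abs(kappa)))]

rng = np.random.default_rng(4)
A = rng.standard_normal((120, 60))
b = A @ rng.standard_normal(60) + 0.05 * rng.standard_normal(120)
print("L-curve lambda:", l_curve_corner(A, b, np.logspace(-5, 2, 80)))
```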

7.
The theory of photon count histogram (PCH) analysis describes the distribution of fluorescence fluctuation amplitudes due to populations of fluorophores diffusing through a focused laser beam and provides a rigorous framework through which the brightnesses and concentrations of the fluorophores can be determined. In practice, however, the brightnesses and concentrations of only a few components can be identified. Brightnesses and concentrations are determined by a nonlinear least-squares fit of a theoretical model to the experimental PCH derived from a record of fluorescence intensity fluctuations. The χ2 hypersurface in the neighborhood of the optimum parameter set can have varying degrees of curvature, due to the intrinsic curvature of the model, the specific parameter values of the system under study, and the relative noise in the data. Because of this varying curvature, parameters estimated from the least-squares analysis have varying degrees of uncertainty associated with them. There are several methods for assigning confidence intervals to the parameters, but these methods have different efficacies for PCH data. Here, we evaluate several approaches to confidence interval estimation for PCH data, including asymptotic standard error, likelihood joint-confidence region, likelihood confidence intervals, skew-corrected and accelerated bootstrap (BCa), and Monte Carlo residual resampling methods. We study these with a model two-dimensional membrane system for simplicity, but the principles are applicable as well to fluorophores diffusing in three-dimensional solution. Using simulated fluorescence fluctuation data, we find the BCa method to be particularly well-suited for estimating confidence intervals in PCH analysis, and several other methods to be less so. Using the BCa method and additional simulated fluctuation data, we find that confidence intervals can be reduced dramatically for a specific non-Gaussian beam profile.
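A minimal sketch of BCa bootstrap confidence intervals, the method the abstract found best suited to PCH parameter estimates, using SciPy's bootstrap routine. The statistic here is simply the sample mean of simulated photon counts; in PCH analysis it would be a fitted brightness or concentration, with the nonlinear fit repeated on each resampled data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
counts = rng.poisson(lam=3.0, size=2000)      # stand-in for photon-count data

res = stats.bootstrap((counts,), np.mean,
                      confidence_level=0.95,
                      n_resamples=2000,
                      method="BCa",           # bias-corrected and accelerated
                      random_state=rng)
print("95% BCa confidence interval for the mean:", res.confidence_interval)
```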

8.
Neurodegenerative diseases such as Alzheimer's disease present subtle anatomical brain changes before the appearance of clinical symptoms. Manual structure segmentation is long and tedious and although automatic methods exist, they are often performed in a cross-sectional manner where each time-point is analyzed independently. With such analysis methods, bias, error and longitudinal noise may be introduced. Noise due to MR scanners and other physiological effects may also introduce variability in the measurement. We propose to use 4D non-linear registration with spatio-temporal regularization to correct for potential longitudinal inconsistencies in the context of structure segmentation. The major contribution of this article is the use of individual template creation with spatio-temporal regularization of the deformation fields for each subject. We validate our method with different sets of real MRI data, compare it to available longitudinal methods such as FreeSurfer, SPM12, QUARC, TBM, and KNBSI, and demonstrate that spatially local temporal regularization yields more consistent rates of change of global structures resulting in better statistical power to detect significant changes over time and between populations.

9.
This paper discusses the suitability, in terms of noise reduction, of various methods which can be applied to an image type often used in radiation therapy: the portal image. Among these methods, the analysis focuses on those operating in the wavelet domain. Wavelet-based methods tested on natural images – such as the thresholding of the wavelet coefficients, the minimization of the Stein unbiased risk estimator on a linear expansion of thresholds (SURE-LET), and the Bayes least-squares method using as a prior a Gaussian scale mixture (BLS-GSM method) – are compared with other methods that operate on the image domain – an adaptive Wiener filter and a nonlocal mean filter (NLM). For the assessment of the performance, the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), the Pearson correlation coefficient, and the Spearman rank correlation (ρ) coefficient are used. The performance of the wavelet filters and the NLM method are similar, but wavelet filters outperform the Wiener filter in terms of portal image denoising. It is shown how BLS-GSM and NLM filters produce the smoothest image, while keeping soft-tissue and bone contrast. As for the computational cost, filters using a decimated wavelet transform (decimated thresholding and SURE-LET) turn out to be the most efficient, with calculation times around 1 s.
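A minimal sketch of the simplest of the wavelet methods compared above, soft-thresholding of detail coefficients, evaluated with PSNR. PyWavelets is assumed to be available; the universal threshold with a robust noise estimate from the finest diagonal subband is an illustrative choice, not the paper's exact protocol, and the test image is synthetic rather than a portal image.

```python
import numpy as np
import pywt

def denoise_wavelet_soft(img, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(img.size))            # universal threshold
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(new_coeffs, wavelet)

def psnr(ref, est):
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

rng = np.random.default_rng(6)
clean = np.kron(rng.random((8, 8)), np.ones((16, 16)))     # piecewise-flat test image
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
print("PSNR noisy   :", psnr(clean, noisy))
print("PSNR denoised:", psnr(clean, denoise_wavelet_soft(noisy)))
```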

10.
The influence of the lungs in a human torso model on the body surface potential distribution
Based on a previously constructed three-dimensional human torso model, this paper shows how the boundary element method can be applied to solve the inhomogeneous human volume-conductor field. With the epicardial potential prescribed as a single dipole and as a double dipole, the body surface potential distributions of the corresponding inhomogeneous field were computed and compared with the body surface potential distributions of a homogeneous field under the same conditions. The results show that, although the presence of the lungs in the torso model has little effect on the magnitude and location of the extrema of the body surface potentials, it considerably alters the overall body surface potential distribution; specifically, the presence of the lungs leaves the body surface potential values with a high relative error compared with the homogeneous case.
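A minimal sketch of the elementary forward building block used in such torso models: the potential of a current dipole in an infinite homogeneous volume conductor, phi(r) = p . (r - r0) / (4 pi sigma |r - r0|^3). A full boundary-element model adds surface integrals to account for the torso boundary and internal inhomogeneities such as the lungs; the geometry and values below are purely illustrative.

```python
import numpy as np

def dipole_potential(obs, r0, p, sigma=0.2):
    """Potential (V) at observation points obs (N x 3) of a dipole p (A*m) at r0."""
    d = obs - r0
    dist = np.linalg.norm(d, axis=1)
    return (d @ p) / (4 * np.pi * sigma * dist**3)

# Sample "body surface" points on a sphere of radius 0.2 m (toy torso).
theta = np.linspace(0.0, np.pi, 20)
obs = 0.2 * np.column_stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)])
phi = dipole_potential(obs,
                       r0=np.array([0.0, 0.0, 0.05]),   # dipole offset from center
                       p=np.array([0.0, 0.0, 1e-6]))    # dipole moment
print("max |phi| on the surface:", np.abs(phi).max(), "V")
```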

11.
Two methods to improve on the accuracy of the Tikhonov regularization technique commonly used for the stable recovery of solutions to ill-posed problems are presented. These methods do not require a priori knowledge of the properties of the solution or of the error. Rather they exploit the observed properties of overregularized and underregularized Tikhonov solutions so as to impose linear constraints on the sought-after solution. The two methods were applied to the inverse problem of electrocardiography using a spherical heart-torso model and simulated inner-sphere (epicardial) and outer-sphere (body) potential distributions. It is shown that if the overregularized and underregularized Tikhonov solutions are chosen properly, the two methods yield epicardial solutions that are not only more accurate than the optimal Tikhonov solution but also provide other qualitative information, such as correct position of the extrema, not obtainable using ordinary Tikhonov regularization. A heuristic method to select the overregularized and underregularized solutions is discussed.

12.
Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criterion can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computational load and a slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the above-mentioned issues of general SIR algorithms, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed “ALM-ANAD”. The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.
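A minimal sketch of the penalized weighted least-squares (PWLS) criterion such solvers minimize, with its gradient, using a simple quadratic roughness penalty in place of the edge-preserving or sparsity-based regularizers discussed above. The ALM-ANAD solver itself is not reproduced; the toy 1-D system and weights are illustrative only.

```python
import numpy as np

def pwls_objective_and_grad(x, A, y, w, beta, D):
    """PWLS objective 0.5*(Ax-y)' W (Ax-y) + beta*0.5*||D x||^2 and its gradient."""
    r = A @ x - y
    data = 0.5 * r @ (w * r)                 # weighted data-fidelity term
    pen = 0.5 * np.sum((D @ x) ** 2)         # quadratic roughness penalty
    grad = A.T @ (w * r) + beta * (D.T @ (D @ x))
    return data + beta * pen, grad

# Toy system: 1-D "image", random projection matrix, first-difference penalty.
rng = np.random.default_rng(7)
n = 50
A = rng.random((80, n))
x_true = np.repeat(rng.random(5), 10)
w = 1.0 / (A @ x_true + 0.1)                 # statistical weights ~ 1/variance
y = A @ x_true + 0.05 * rng.standard_normal(80)
D = np.eye(n) - np.eye(n, k=1)               # finite-difference operator
f, g = pwls_objective_and_grad(np.zeros(n), A, y, w, beta=0.1, D=D)
print("objective:", f, " |grad|:", np.linalg.norm(g))
```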

13.
Schuck P, Rossmanith P. Biopolymers 2000, 54(5): 328-341.
A new method is presented for the calculation of apparent sedimentation coefficient distributions g*(s) for the size-distribution analysis of polymers in sedimentation velocity experiments. Direct linear least-squares boundary modeling by a superposition of sedimentation profiles of ideal nondiffusing particles is employed. It can be combined with algebraic noise decomposition techniques for the application to interference optical ultracentrifuge data at low loading concentrations with significant systematic noise components. Because of the use of direct boundary modeling, residuals are available for assessment of the quality of the fits and the consistency of the g*(s) distribution with the experimental data. The method can be combined with regularization techniques based on F statistics, such as used in the program CONTIN, or alternatively, the increment of s values can be adjusted empirically. The method is simple, has advantageous statistical properties, and reveals precise sedimentation coefficients. The new least-squares ls-g*(s) exhibits a very high robustness and resolution if data acquired over a large time interval are analyzed. This can result in a high resolution for large particles, and for samples with a high degree of heterogeneity. Because the method does not require a high frequency of scans, it can also be easily used in experiments with the absorbance optical scanning system. Published 2000 John Wiley & Sons, Inc.
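A minimal sketch of the ls-g*(s) idea: model the measured sedimentation boundaries as a non-negative superposition of ideal, non-diffusing species, each contributing a sharp boundary at r = r_m * exp(s * omega^2 * t) with the usual radial-dilution factor, and solve for the loading concentrations by non-negative least squares. The regularization (e.g., the F-statistics approach mentioned above) and the systematic-noise decomposition are omitted; rotor speed, radii, and s grid are illustrative values.

```python
import numpy as np
from scipy.optimize import nnls

def step_profile(r, t, s_sved, omega=40000 * 2 * np.pi / 60, r_m=6.0):
    """Unit-loading profile of an ideal non-diffusing species (s in svedbergs)."""
    s = s_sved * 1e-13
    r_b = r_m * np.exp(s * omega**2 * t)               # boundary position
    return np.where(r >= r_b, np.exp(-2 * s * omega**2 * t), 0.0)

r = np.linspace(6.0, 7.2, 200)                         # radius grid (cm)
times = np.arange(600, 7200, 600)                      # scan times (s)
s_grid = np.linspace(1.0, 10.0, 40)                    # trial s values (S)

# Synthetic data: two species at 3 S and 7 S plus noise.
rng = np.random.default_rng(8)
data = np.concatenate([0.6 * step_profile(r, t, 3.0) + 0.4 * step_profile(r, t, 7.0)
                       + 0.01 * rng.standard_normal(r.size) for t in times])
design = np.column_stack([np.concatenate([step_profile(r, t, s) for t in times])
                          for s in s_grid])
c, _ = nnls(design, data)                              # ls-g*(s) distribution
print("recovered s values (S):", s_grid[c > 0.05])
```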

14.
Methods of least squares and SIRT in reconstruction.
In this paper we show that a particular version of the Simultaneous Iterative Reconstruction Technique (SIRT) proposed by Gilbert in 1972 strongly resembles the Richardson least-squares algorithm. By adopting the adjustable parameters of the general Richardson algorithm, we have been able to produce generalized SIRT algorithms with improved convergence. A particular generalization of the SIRT algorithm, GSIRT, has an adjustable parameter σ and the starting picture ρ0 as input. A value of 1/2 for σ and a weighted back-projection for ρ0 produce a stable algorithm. We call the SIRT-like algorithms for the solution of the weighted least-squares problems LSIRT and present two such algorithms, LSIRT1 and LSIRT2, which have definite computational advantages over SIRT and GSIRT. We have tested these methods on mathematically simulated phantoms and find that the new SIRT methods converge faster than Gilbert's SIRT but are more sensitive to noise present in the data. However, the faster convergence rates allow termination before the noise contribution degrades the reconstructed image excessively.
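A minimal sketch of a generic SIRT-type iteration for a linear reconstruction problem Ax = b: each update back-projects the row-normalized residual and scales it by the inverse column sums, x <- x + relax * C A^T R (b - Ax). This is the simultaneous scheme the abstract relates to Richardson's least-squares iteration; GSIRT, LSIRT1, and LSIRT2 themselves are not reproduced, and the "projection" matrix below is a random non-negative stand-in.

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=1.0):
    """Simultaneous iterative reconstruction with row/column normalization."""
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    R = 1.0 / np.where(row_sums > 0, row_sums, 1.0)
    C = 1.0 / np.where(col_sums > 0, col_sums, 1.0)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(9)
A = rng.random((120, 60))                 # non-negative system, as in tomography
x_true = rng.random(60)
b = A @ x_true + 0.01 * rng.standard_normal(120)
x = sirt(A, b)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```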

15.
The inverse problem of electrocardiography (specifically, that part concerned with the computation of the ventricular surface activation isochrones) is shown to be formally equivalent to the problem of identification and measurement of discontinuities in derivatives of body surface potentials. This is based on the demonstration that such measurements allow localization of the relative extrema of the ventricular surface activation map (given a forward problem solution), which in turn restricts the space of admissible solution maps to a compact set. Although the inverse problem and the problem of identifying derivative discontinuities are both ill-posed, it is possible that the latter may be more easily or justifiably resolved with available information, particularly as current methods for regularizing the inverse problem typically rely on a regularization parameter chosen in an a posteriori fashion. An example of the power of the approach is the demonstration that a recent Uniform Dipole Layer Hypothesis-based method for producing the ventricular surface activation map is largely independent of that hypothesis and capable in principle of generating maps that are very similar in a precise sense to those that would result from the usual epicardial potential formulation (assuming the latter were capable of producing intrinsic deflections in computed epicardial electrograms sufficiently steep to accurately compute the activation map). This is consistent with the preliminary success of the former method, despite the significant inaccuracy of its underlying assumption.

16.
This paper presents a computationally efficient algorithm for image compressive sensing reconstruction using a second degree total variation (HDTV2) regularization. Firstly, a preferably equivalent formulation of the HDTV2 functional is derived, which can be formulated as a weighted L1-L2 mixed norm of second degree image derivatives under the spectral decomposition framework. Secondly, using the equivalent formulation of HDTV2, we introduce an efficient forward-backward splitting (FBS) scheme to solve the HDTV2-based image reconstruction model. Furthermore, from the averaged non-expansive operator point of view, we make a detailed analysis on the convergence of the proposed FBS algorithm. Experiments on medical images demonstrate that the proposed method outperforms several fast algorithms of the TV and HDTV2 reconstruction models in terms of peak signal to noise ratio (PSNR), structural similarity index (SSIM) and convergence speed.
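A minimal sketch of forward-backward splitting (proximal gradient descent) on an l1-regularized least-squares problem, illustrating the FBS scheme the abstract applies to the HDTV2 functional. The HDTV2 proximal step (weighted L1-L2 norm of second-degree derivatives) is not reproduced; plain soft-thresholding stands in for it, and the sensing matrix is a random stand-in.

```python
import numpy as np

def fbs_l1(A, b, lam, n_iter=300):
    """Solve min 0.5||Ax-b||^2 + lam*||x||_1 by forward-backward splitting (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # forward (gradient) step
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # backward (prox) step
    return x

rng = np.random.default_rng(10)
A = rng.standard_normal((80, 200))                # compressive measurements
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x = fbs_l1(A, b, lam=0.1)
print("nonzeros recovered:", np.count_nonzero(np.abs(x) > 1e-3))
```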

17.
This work considers the approximation of the cardiac bidomain equations, either isolated or coupled with the torso, via first order semi-implicit time-marching schemes involving a fully decoupled computation of the unknown fields (ionic state, transmembrane potential, extracellular and torso potentials). For the isolated bidomain system, we show that the Gauss-Seidel and Jacobi like splittings do not compromise energy stability; they simply alter the energy norm. Within the framework of the numerical simulation of electrocardiograms (ECG), these bidomain splittings are combined with an explicit Robin-Robin treatment of the heart-torso coupling conditions. We show that the resulting schemes allow a fully decoupled (energy) stable computation of the heart and torso fields, under an additional hyperbolic-CFL like condition. The accuracy and convergence rate of the considered schemes are investigated numerically with a series of numerical experiments.

18.
We continue our efforts in modeling Daphnia magna, a species of water flea, by proposing a continuously structured population model incorporating density-dependent and density-independent fecundity and mortality rates. We collected new individual-level data to parameterize the individual demographics relating food availability and individual daphnid growth. Our model is fit to experimental data using the generalized least-squares framework, and we use cross-validation and Akaike Information Criteria to select hyper-parameters. We present our confidence intervals on parameter estimates.
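A minimal sketch of Akaike Information Criterion (AIC) comparison for least-squares fits, of the kind used above for hyper-parameter selection. For a Gaussian-error least-squares fit, AIC can be written as n*log(RSS/n) + 2k, with k the number of fitted parameters and smaller values preferred. The candidate models below are illustrative polynomial fits, not the structured population model of the paper.

```python
import numpy as np

def aic_least_squares(residuals, n_params):
    """AIC for a Gaussian-error least-squares fit: n*log(RSS/n) + 2k."""
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * n_params

rng = np.random.default_rng(11)
t = np.linspace(0, 5, 60)
y = 2.0 + 1.5 * t + 0.3 * rng.standard_normal(t.size)   # truly linear data
for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(t, y, degree)
    resid = y - np.polyval(coeffs, t)
    print(f"degree {degree}: AIC = {aic_least_squares(resid, degree + 1):.1f}")
```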

19.
This study presents a comparison of semi-analytical and numerical solution techniques for solving the passive bidomain equation in simple tissue geometries containing a region of subendocardial ischaemia. When the semi-analytical solution is based on Fourier transforms, recovering the solution from the frequency domain via fast Fourier transforms imposes a periodic boundary condition on the solution of the partial differential equation. On the other hand, the numerical solution uses an insulation boundary condition. When these techniques are applied to calculate the epicardial surface potentials, both yield a three-well potential distribution which is identical if fibre rotation within the tissue is ignored. However, when fibre rotation is included, the resulting three-well distribution rotates, but through different angles, depending on the solution method. A quantitative comparison between the semi-analytical and numerical solution techniques is presented in terms of the effect fibre rotation has on the rotation of the epicardial potential distribution. It turns out that the Fourier transform approach predicts a larger rotation of the epicardial potential distribution than the numerical solution. The conclusion from this study is that it is not always possible to use analytical or semi-analytical solutions to check the accuracy of numerical solution procedures. For the problem considered here, this checking is only possible when it is assumed that there is no fibre rotation through the tissue.

20.
Community detection is a fundamental problem in the analysis of complex networks. Recently, many researchers have concentrated on the detection of overlapping communities, where a vertex may belong to more than one community. However, most current methods require the number (or the size) of the communities as a priori information, which is usually unavailable in real-world networks. Thus, a practical algorithm should not only find the overlapping community structure, but also automatically determine the number of communities. Furthermore, it is preferable if this method is able to reveal the hierarchical structure of networks as well. In this work, we first propose a generative model that employs a nonnegative matrix factorization (NMF) formulation with an l2,1-norm regularization term, balanced by a resolution parameter. The NMF naturally yields an overlapping community structure by assigning soft membership variables to each vertex; the l2,1 regularization term enforces group sparsity, which automatically determines the number of communities by penalizing superfluous nonempty communities; and the resolution parameter enables us to explore the hierarchical structure of networks. Thereafter, we derive the multiplicative update rules for learning the model parameters and prove their correctness. Finally, we test our approach on a variety of synthetic and real-world networks and compare it with several state-of-the-art algorithms. The results validate the superior performance of the new method.
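A minimal sketch of NMF with the classical multiplicative update rules (Lee-Seung, Frobenius loss), showing how soft community memberships arise from the factor matrices. The l2,1 group-sparsity term and resolution parameter of the paper's model are omitted, and the toy adjacency matrix below is purely illustrative.

```python
import numpy as np

def nmf(A, k, n_iter=500, eps=1e-9, seed=0):
    """Plain NMF A ~ W H with multiplicative updates; eps avoids division by zero."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy adjacency matrix with two obvious communities.
A = np.zeros((6, 6))
A[:3, :3] = 1
A[3:, 3:] = 1
W, H = nmf(A, k=2)
memberships = W / W.sum(axis=1, keepdims=True)   # soft community memberships per vertex
print(np.round(memberships, 2))
```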
