Similar Articles
20 similar articles found (search time: 15 ms)
1.
In vitro data from a realistic-geometry electrolytic tank were used to demonstrate the consequences of computational issues critical to the ill-posed inverse problem in electrocardiography. The boundary element method was used to discretize the relationship between the body surface potentials and epicardial cage potentials. Variants of Tikhonov regularization were used to stabilize the inversion of the body surface potentials in order to reconstruct the epicardial surface potentials. The computational issues investigated were (1) computation of the regularization parameter; (2) effects of inaccuracy in locating the position of the heart; and (3) incorporation of a priori information on the properties of epicardial potentials into the regularization methodology. Two methods were suggested by which a priori information could be incorporated into the regularization formulation: (1) use of an estimate of the epicardial potential distribution everywhere on the surface and (2) use of regional bounds on the excursion of the potential. Results indicate that the a posteriori technique called CRESO, developed by Colli Franzone and coworkers, most consistently derives the regularization parameter closest to the optimal parameter for this experimental situation. The sensitivity of the inverse computation in a realistic-geometry torso to inaccuracies in estimating heart position is consistent with results from the eccentric spheres model; errors of 1 cm are well tolerated, but errors of 2 cm or greater result in a loss of position and amplitude information. Finally, estimates and bounds based on accurate, known information successfully lower the relative error associated with the inverse solution and have the potential to significantly enhance the amplitude and feature position information obtainable from the inverse-reconstructed epicardial potential map.
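As a concrete illustration of the zero-order Tikhonov step described above, the following minimal sketch inverts a synthetic transfer matrix over a range of regularization parameters. Everything here (the matrix A, the noise level, the lambda grid) is made up for illustration, and the CRESO criterion itself is not implemented; because the tank experiment provides reference epicardial potentials, the error-optimal parameter is simply found by a direct sweep.

```python
# Minimal sketch of zero-order Tikhonov inversion of body-surface potentials.
# All quantities are synthetic/illustrative; the paper's tank data, transfer
# matrix, and the CRESO parameter-selection criterion are not reproduced here.
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
A = rng.standard_normal((192, 64))                       # torso-to-epicardium transfer matrix (synthetic)
x_true = rng.standard_normal(64)                         # "epicardial" potentials
b_torso = A @ x_true + 0.05 * rng.standard_normal(192)   # noisy body-surface potentials

lambdas = np.logspace(-4, 1, 50)
rel_err = [np.linalg.norm(tikhonov_inverse(A, b_torso, lam) - x_true)
           / np.linalg.norm(x_true) for lam in lambdas]
lam_opt = lambdas[int(np.argmin(rel_err))]               # "optimal" parameter, known only because x_true is known
print(f"error-optimal lambda = {lam_opt:.3g}, relative error = {min(rel_err):.3f}")
```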

2.
The use of several mathematical methods for estimating epicardial ECG potentials from arrays of body surface potentials has been reported in the literature; most of these methods are based on least-squares reconstruction principles and operate in the time-space domain. In this paper we introduce a general Bayesian maximum a posteriori (MAP) framework for time domain inverse solutions in the presence of noise. The two most popular previously applied least-squares methods, constrained (regularized) least-squares and low-rank approximation through the singular value decomposition, are placed in this framework, each of them requiring the a priori knowledge of a ‘regularization parameter’, which defines the degree of smoothing to be applied to the inversion. Results of simulations using these two methods are presented; they compare the ability of each method to reconstruct epicardial potentials. We used the geometric configuration of the torso and internal organs of an individual subject as reconstructed from CT scans. The accuracy of each method at each epicardial location was tested as a function of measurement noise, the size and shape of the subarray of torso sensors, and the regularization parameter. We paid particular attention to an assessment of the potential of these methods for clinical use by testing the effect of using compact, small-size subarrays of torso potentials while maintaining a high degree of resolution on the epicardium.
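The two least-squares schemes named above, Tikhonov-regularized least squares and low-rank (truncated-SVD) approximation, can both be written as filtered SVD expansions. The sketch below uses a synthetic matrix and data rather than the CT-derived torso geometry, and simply contrasts the two filter-factor choices.

```python
# Hedged sketch: truncated-SVD and Tikhonov inverses expressed as filtered SVD
# expansions x = sum_i f_i * (u_i^T b / s_i) * v_i. Matrix and data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((120, 60))
b = rng.standard_normal(120)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
coeffs = U.T @ b / s                          # unfiltered expansion coefficients

k = 20                                        # rank kept in the low-rank (TSVD) solution
f_tsvd = np.concatenate([np.ones(k), np.zeros(s.size - k)])

lam = 0.1                                     # Tikhonov regularization parameter
f_tik = s**2 / (s**2 + lam**2)                # Tikhonov filter factors

x_tsvd = Vt.T @ (f_tsvd * coeffs)             # truncated-SVD solution
x_tik = Vt.T @ (f_tik * coeffs)               # Tikhonov solution
print(np.linalg.norm(x_tsvd - x_tik))
```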

3.
One of the fundamental problems in theoretical electrocardiography can be characterized by an inverse problem. We present new methods for achieving better estimates of heart surface potential distributions in terms of torso potentials through an inverse procedure. First, we outline an automatic adaptive refinement algorithm that minimizes the spatial discretization error in the transfer matrix, increasing the accuracy of the inverse solution. Second, we introduce a new local regularization procedure, which works by partitioning the global transfer matrix into submatrices, allowing for varying amounts of smoothing. Each submatrix represents a region within the underlying geometric model in which regularization can be specifically ‘tuned’ using an a priori scheme based on the L-curve method. This local regularization method can provide a substantial increase in accuracy compared to global regularization schemes. Within this context of local regularization, we show that a generalized version of the singular value decomposition (GSVD) can further improve the accuracy of ECG inverse solutions compared to standard SVD and Tikhonov approaches. We conclude with specific examples of these techniques using geometric models of the human thorax derived from MRI data.

4.
The CW-ESR distance measurement method is extremely valuable for studying the dynamics-function relationship of biomolecules. However, extracting distance distributions from experiments has been a technically demanding procedure. It has never been conclusively identified, to our knowledge, that the problems involved in the analysis are ill-posed and are best solved using Tikhonov regularization. We treat the problems from a novel point of view. First, we identify the equations involved and show that they are in fact two linear first-kind Fredholm integral equations. They can be combined into one single linear inverse problem and solved in a Tikhonov regularization procedure. The improvement with our new treatment is significant. Our approach is a direct and reliable mathematical method capable of providing an unambiguous solution to the ill-posed problem. It does not need to perform nonlinear least-squares fitting to infer a solution from noise-contaminated data and, accordingly, substantially reduces the computation time and the difficulty of analysis. Numerical tests and experimental data of polyproline II peptides spin-labeled at various sites are provided to demonstrate our approach. The high resolution of the distance distributions obtainable with our new approach enables a detailed insight into the flexibility of dynamic structure and the identification of conformational species in the solution state.
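A minimal sketch of the core idea, discretizing two first-kind Fredholm equations, stacking them into a single linear system, and solving it with Tikhonov regularization, is given below. The Gaussian kernels, grids, and noise level are hypothetical stand-ins, not the actual CW-ESR kernels.

```python
# Illustrative sketch only: two first-kind Fredholm equations with stand-in
# Gaussian kernels K1, K2 are discretized, stacked into one linear system, and
# solved with zero-order Tikhonov regularization.
import numpy as np

r = np.linspace(1.5, 6.0, 120)               # distance grid (nm), illustrative
t = np.linspace(0.0, 1.0, 80)                # two "experimental" axes, illustrative
u = np.linspace(0.0, 1.0, 80)

K1 = np.exp(-((t[:, None] - 0.2 * r[None, :]) ** 2) / 0.02)   # stand-in kernels
K2 = np.exp(-((u[:, None] - 0.15 * r[None, :]) ** 2) / 0.05)

p_true = np.exp(-((r - 3.0) ** 2) / 0.1) + 0.5 * np.exp(-((r - 4.2) ** 2) / 0.2)
rng = np.random.default_rng(2)
d1 = K1 @ p_true + 0.01 * rng.standard_normal(t.size)         # noisy "data" set 1
d2 = K2 @ p_true + 0.01 * rng.standard_normal(u.size)         # noisy "data" set 2

K = np.vstack([K1, K2])                      # single combined linear inverse problem
d = np.concatenate([d1, d2])
lam = 1e-1
p_est = np.linalg.solve(K.T @ K + lam**2 * np.eye(r.size), K.T @ d)
```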

5.
The inverse problem in electrocardiography is studied analytically using a concentric spheres model with no symmetry assumptions on the potential distribution. The mathematical formulation is presented, and existence and uniqueness of the solution are briefly discussed. The solution to the inverse problem is inherently very unstable. The magnitude of this instability is demonstrated using the derived analytical inverse solution for the spherical model. Regularization methods used to date are based on a regularization parameter that does not relate to any measurable physiological parameters. This paper presents a regularization method that is based on a parameter in the form of an a priori bound on the L2 norm of the inverse solution. Such a bound can be obtained from theoretical estimates based on the measured values of the body surface potentials together with experimental knowledge about the magnitudes of the epicardial potentials. Based on the presented regularization, an exact form of the regularized solution and estimates of its accuracy are derived.

6.
Reconstruction of the magnetic source distribution image in magnetoencephalography (MEG) studies is an ill-posed problem: suitable a priori constraints must be introduced to turn it into a well-posed one. With a non-parametric distributed-source model, the magnetic source imaging problem becomes that of solving an ill-conditioned, underdetermined system of linear equations. The approach adopted here builds on minimum-norm estimation and Tikhonov regularization: the solution space is constrained using both the mathematical algorithm itself and relevant anatomical and neurophysiological information. A region-weighting operator is proposed and combined with depth weighting in order to obtain a physiologically reasonable neural current distribution. Simulation experiments show that satisfactory reconstructions can be obtained; the limitations of the method and directions for future work are also discussed.
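A minimal sketch of a weighted minimum-norm (Tikhonov) estimate for an underdetermined source problem is shown below. The lead field, the sensor data, and the diagonal weights standing in for the region and depth weighting discussed above are all synthetic.

```python
# Sketch of a weighted minimum-norm (Tikhonov) estimate for an underdetermined
# MEG-style source problem. All quantities are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))          # lead-field matrix
j_true = np.zeros(n_sources); j_true[100:110] = 1.0      # small active patch
b = L @ j_true + 0.05 * rng.standard_normal(n_sensors)   # noisy sensor data

w = np.linalg.norm(L, axis=0)                # depth-style weighting (column norms)
W2inv = np.diag(1.0 / w**2)                  # W^{-2}: stands in for region/depth weights

lam = 0.1
# Weighted minimum-norm solution: j = W^{-2} L^T (L W^{-2} L^T + lam I)^{-1} b
G = L @ W2inv @ L.T + lam * np.eye(n_sensors)
j_est = W2inv @ L.T @ np.linalg.solve(G, b)
```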

7.
The inverse problem of electrocardiography (specifically, that part concerned with the computation of the ventricular surface activation isochrones) is shown to be formally equivalent to the problem of identification and measurement of discontinuities in derivatives of body surface potentials. This is based on the demonstration that such measurements allow localization of the relative extrema of the ventricular surface activation map (given a forward problem solution), which in turn restricts the space of admissible solution maps to a compact set. Although the inverse problem and the problem of identifying derivative discontinuities are both ill-posed, it is possible that the latter may be more easily or justifiably resolved with available information, particularly as current methods for regularizing the inverse problem typically rely on a regularization parameter chosen in an a posteriori fashion. An example of the power of the approach is the demonstration that a recent Uniform Dipole Layer Hypothesis-based method for producing the ventricular surface activation map is largely independent of that hypothesis and capable in principle of generating maps that are very similar in a precise sense to those that would result from the usual epicardial potential formulation (assuming the latter were capable of producing intrinsic deflections in computed epicardial electrograms sufficiently steep to accurately compute the activation map). This is consistent with the preliminary success of the former method, despite the significant inaccuracy of its underlying assumption.

8.
Polysaccharides play a significant role in food systems as texture-forming agents. Their solutions possess some peculiar rheological behaviors. Information about the structure of these biopolymers and the relaxation processes occurring in their solutions may be obtained by investigating their linear viscoelastic properties. The aim of this research was to investigate the viscoelastic properties of waxy maize starch–hydrocolloid–water systems. To achieve this aim, two techniques were applied: rheological measurements in the frequency domain and structural studies by means of atomic force microscopy (AFM). The results were interpreted on the basis of the time–temperature superposition principle and the phenomenological theory of viscoelasticity. The thermal stability of the analyzed systems allowed the time–temperature superposition principle to be applied, and calculation of the shift factor aT enabled the master curves to be obtained. A continuous Maxwell model was applied to analyze phase separation in the examined systems. The relaxation spectra obtained by the Tikhonov regularization method were heterogeneous, indicating a non-homogeneous structure of the system.

9.
The inverse problem of electrocardiography, the computation of epicardial potentials from body surface potentials, is influenced by the desired resolution on the epicardium, the number of recording points on the body surface, and the method of limiting the inversion process. To examine the role of these variables in the computation of the inverse transform, Tikhonov's zero-order regularization and singular value decomposition (SVD) have been used to invert the forward transfer matrix. The inverses have been compared in a data-independent manner using the resolution and the noise amplification as endpoints. Sets of 32, 50, 192, and 384 leads were chosen as sets of body surface data, and 26, 50, 74, and 98 regions were chosen to represent the epicardium. The resolution and noise were both improved by using a greater number of electrodes on the body surface. When 60% of the singular values are retained, the results show a trade-off between noise and resolution, with typical maximal epicardial noise levels of less than 0.5% of maximum epicardial potentials for 26 epicardial regions, 2.5% for 50 epicardial regions, 7.5% for 74 epicardial regions, and 50% for 98 epicardial regions. As the number of epicardial regions is increased, the regularization technique effectively fixes the noise amplification but markedly decreases the resolution, whereas SVD results in an increase in noise and a moderate decrease in resolution. Overall the regularization technique performs slightly better than SVD in the noise-resolution relationship. There is a region at the posterior of the heart that was poorly resolved regardless of the number of regions chosen. The variance of the resolution was such as to suggest the use of variable-size epicardial regions based on the resolution.

10.
We briefly review the results of other authors concerning the analysis of systems with a time hierarchy, especially the Tikhonov theorem. A theorem recently proved by the authors, which makes rigorous analysis of systems with complex fast dynamics possible, is stated and discussed. A model example of a simple enzymatic reaction with product activation and slow (genetically driven) enzyme turnover is rigorously studied. It is shown that even in such a simple model there exist certain regions of parameters for which the fast variables oscillate. Thus the classical Tikhonov theorem is not applicable here, and we are forced to use another method, for example the theorem presented by the authors, or a purely numerical solution. These two methods are compared.

11.
Recently, regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (a weight matrix over image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K nearest neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: a basic general regression and representation classifier (B-GRR) and a robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
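A much-reduced sketch of the coding idea, solving for the representation coefficients with generalized Tikhonov regularization and classifying by class-wise reconstruction residual, is given below. It omits the KNN-learned prior and the iterative pixel re-weighting of the full GRR model, and the dictionary, labels, and Gamma matrix are all synthetic.

```python
# Simplified sketch of a coding-based classifier with generalized Tikhonov
# regularization (a reduced form of the GRR idea: no KNN-learned prior, no
# iterative pixel re-weighting). All data are synthetic.
import numpy as np

def grr_code(D, y, Gamma, lam=0.1):
    """Generalized Tikhonov coding: argmin ||y - D a||^2 + lam * ||Gamma a||^2."""
    return np.linalg.solve(D.T @ D + lam * Gamma.T @ Gamma, D.T @ y)

def classify(D, labels, y, Gamma, lam=0.1):
    """Assign y to the class whose training columns best reconstruct it."""
    a = grr_code(D, y, Gamma, lam)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ a[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(4)
D = rng.standard_normal((100, 40))           # 40 training samples as columns
labels = np.repeat(np.arange(4), 10)         # 4 classes, 10 samples each
Gamma = np.eye(40)                           # prior term; identity reduces to plain ridge/CRC
y = D[:, 7] + 0.05 * rng.standard_normal(100)   # noisy copy of a class-0 sample
print(classify(D, labels, y, Gamma))         # expected: 0
```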

12.
During hyperthermia therapy it is desirable to know the entire temperature field in the treatment region. However, accurately inferring this field from the limited number of temperature measurements available is very difficult, and thus state and parameter estimation methods have been used to attempt to solve this inherently ill-posed problem. To compensate for this ill-posedness and to improve the accuracy of this method, Tikhonov regularization of order zero has been used to significantly improve the results of the estimation procedure. It is also shown that the accuracies of the temperature estimates depend upon the value of the regularization parameter, which has an optimal value that is dependent on the perfusion pattern and magnitude. In addition, the transient power-off time sampling period (i.e., the length of time over which transient data is collected and used) influences the accuracy of the estimates, and an optimal sampling period is shown to exist. The effects of additive measurement noise are also investigated, as are the effects of the initial guess of the perfusion values, and the effects of both symmetric and asymmetric blood perfusion patterns. Random perfusion patterns with noisy data are the most difficult cases to evaluate. The cases studied are not a comprehensive set, but continue to show the feasibility of using state and parameter estimation methods to reconstruct the entire temperature field.

13.
Non-negative matrix factorization (NMF) condenses high-dimensional data into lower-dimensional models subject to the requirement that data can only be added, never subtracted. However, the NMF problem does not have a unique solution, creating a need for additional constraints (regularization constraints) to promote informative solutions. Regularized NMF problems are more complicated than conventional NMF problems, creating a need for computational methods that incorporate the extra constraints in a reliable way. We developed novel methods for regularized NMF based on block-coordinate descent with proximal point modification and a fast optimization procedure over the alpha simplex. Our framework has important advantages in that it (a) accommodates a wide range of regularization terms, including sparsity-inducing penalty terms, (b) guarantees that the solutions satisfy necessary conditions for optimality, ensuring that the results have well-defined numerical meaning, (c) allows the scale of the solution to be controlled exactly, and (d) is computationally efficient. We illustrate the use of our approach in the context of gene expression microarray data analysis. The improvements described remedy key limitations of previous proposals, strengthen the theoretical basis of regularized NMF, and facilitate the use of regularized NMF in applications.
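For illustration only, the sketch below adds an L2 penalty to NMF using classical multiplicative updates; it is not the block-coordinate proximal method or the simplex-constrained procedure described in the abstract, and the data are synthetic.

```python
# Minimal regularized-NMF sketch: classical multiplicative updates with an l2
# penalty on H. This is a plain illustration of adding a regularization term
# to NMF, not the authors' block-coordinate proximal algorithm.
import numpy as np

def nmf_ridge(V, rank, lam=0.1, n_iter=500, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam * H + eps)   # penalized H-update
        W *= (V @ H.T) / (W @ H @ H.T + eps)             # standard W-update
    return W, H

V = np.abs(np.random.default_rng(5).standard_normal((60, 40)))  # non-negative data
W, H = nmf_ridge(V, rank=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))             # relative reconstruction error
```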

14.
This paper describes a procedure, based on Tikhonov regularization, for extracting the shear stress versus shear rate relationship and yield stress of blood from capillary viscometry data. The relevant equations and the mathematical nature of the problem are briefly described. The procedure is then applied to three sets of capillary viscometry data of blood taken from the literature. From each data set the procedure computes the complete shear stress versus shear rate relationship and the yield stress. Since the procedure does not rely on any assumed constitutive equation, the computed rheological properties are model-independent. These properties are compared against one another and against independent measurements. They are found to be in good agreement for shear stress greater than 0.1 Pa but show significant deviations for shear stress below this level. A possible way of improving this situation is discussed.

15.
This paper describes a procedure, based on Tikhonov regularization, for obtaining the shear rate function or, equivalently, the viscosity function of blood from Couette viscometry data. For data sets that include points where the sample in the annulus is partially sheared, the yield stress of blood will also be obtained. For data sets that do not contain partially sheared points, provided the shear stress is sufficiently low, a different method of estimating the yield stress is proposed. Both the shear rate function and yield stress obtained in this investigation are independent of any rheological model of blood. This procedure is applied to a large set of Couette viscometer data taken from the literature. Results in the form of shear rate and viscosity functions and yield stress are presented for a wide range of hematocrits and are compared against those reported by the originators of the data and against independently measured shear properties of blood.

16.
This study presents a comparison of semi-analytical and numerical solution techniques for solving the passive bidomain equation in simple tissue geometries containing a region of subendocardial ischaemia. When the semi-analytical solution is based on Fourier transforms, recovering the solution from the frequency domain via fast Fourier transforms imposes a periodic boundary condition on the solution of the partial differential equation. On the other hand, the numerical solution uses an insulation boundary condition. When these techniques are applied to calculate the epicardial surface potentials, both yield a three-well potential distribution, which is identical if fibre rotation within the tissue is ignored. However, when fibre rotation is included, the resulting three-well distribution rotates, but through different angles, depending on the solution method. A quantitative comparison between the semi-analytical and numerical solution techniques is presented in terms of the effect fibre rotation has on the rotation of the epicardial potential distribution. It turns out that the Fourier transform approach predicts a larger rotation of the epicardial potential distribution than the numerical solution. The conclusion from this study is that it is not always possible to use analytical or semi-analytical solutions to check the accuracy of numerical solution procedures. For the problem considered here, this checking is only possible when it is assumed that there is no fibre rotation through the tissue.

17.
Kinetic experiments provide much information about protein folding mechanisms. Time-resolved signals are often best described by expressions with many exponential terms, but this hinders the extraction of rate constants by nonlinear least squares (NLS) fitting. Numerical inverse Laplace transformation, which converts a time-resolved dataset into a spectrum of amplitudes as a function of rate constant, allows easy estimation of the rate constants, amplitudes, and number of processes underlying the data. Here, we present a Tikhonov regularization-based method that converts a dataset into a rate spectrum, subject to regularization constraints, without requiring an iterative search of parameter space. This allows more rapid generation of rate spectra as well as analysis of datasets too noisy to process by existing iterative search algorithms. This method's simplicity also permits highly objective, largely automatic analysis with minimal human guidance. We show that this regularization method reproduces results previously obtained by NLS fitting and that it is effective for analyzing datasets too complex for traditional fitting methods. This method's reliability and speed, as well as its potential for objective, model-free analysis, make it extremely useful as a first step in analysis of complicated noisy datasets and an excellent guide for subsequent NLS analysis.
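A minimal sketch of a Tikhonov-regularized numerical inverse Laplace transform is given below: a noisy two-exponential decay is inverted onto a logarithmic grid of rate constants. The kernel, grids, and regularization parameter are illustrative, and non-negativity is enforced with NNLS on the augmented system, which is one common choice rather than necessarily the authors'.

```python
# Hedged sketch of a Tikhonov-regularized inverse Laplace transform (rate spectrum).
# All quantities are synthetic; the amplitudes-vs-rate-constant spectrum of a
# two-process decay is recovered from noisy data.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(1e-3, 5.0, 400)                        # time axis (s), illustrative
k = np.logspace(-1, 3, 100)                            # rate-constant grid (1/s)
K = np.exp(-np.outer(t, k))                            # Laplace kernel K[i, j] = exp(-k_j * t_i)

rng = np.random.default_rng(6)
decay = 0.7 * np.exp(-2.0 * t) + 0.3 * np.exp(-30.0 * t)
data = decay + 0.01 * rng.standard_normal(t.size)      # synthetic two-exponential signal

lam = 0.1
A_aug = np.vstack([K, np.sqrt(lam) * np.eye(k.size)])  # Tikhonov-augmented system
b_aug = np.concatenate([data, np.zeros(k.size)])
spectrum, _ = nnls(A_aug, b_aug)                       # non-negative rate spectrum
print(k[spectrum.argmax()])                            # dominant rate constant
```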

18.
To measure spatial variations in mechanical properties of biological materials, prior studies have typically performed mechanical tests on excised specimens of tissue. Less invasive measurements, however, are preferable in many applications, such as patient-specific modeling, disease diagnosis, and tracking of age- or damage-related degradation of mechanical properties. Elasticity imaging (elastography) is a nondestructive imaging method in which the distribution of elastic properties throughout a specimen can be reconstructed from measured strain or displacement fields. To date, most work in elasticity imaging has concerned incompressible, isotropic materials. This study presents an extension of elasticity imaging to three-dimensional, compressible, transversely isotropic materials. The formulation and solution of an inverse problem for an anisotropic tissue subjected to a combination of quasi-static loads are described, and an optimization and regularization strategy that indirectly obtains the solution to the inverse problem is presented. Several applications of transversely isotropic elasticity imaging to cancellous bone from the human vertebra are then considered. The feasibility of using isotropic elasticity imaging to obtain meaningful reconstructions of the distribution of material properties for vertebral cancellous bone from experiment is established. However, using simulation, it is shown that an isotropic reconstruction is not appropriate for anisotropic materials. It is further shown that the transversely isotropic method identifies a solution that predicts the measured displacements, reveals regions of low stiffness, and recovers all five elastic parameters with approximately 10% error. The recovery of a given elastic parameter is found to require the presence of its corresponding strain (e.g., a deformation that generates a given strain component is necessary to reconstruct the corresponding stiffness coefficient), and the application of regularization is shown to improve accuracy. Finally, the effects of noise on reconstruction quality are demonstrated, and a signal-to-noise ratio (SNR) of 40 dB is identified as a reasonable threshold for obtaining accurate reconstructions from experimental data. This study demonstrates that given an appropriate set of displacement fields, level of regularization, and signal strength, the transversely isotropic method can recover the relative magnitudes of all five elastic parameters without an independent measurement of stress. The quality of the reconstructions improves with increasing contrast, magnitude of deformation, and asymmetry in the distributions of material properties, indicating that elasticity imaging of cancellous bone could be a useful tool in laboratory studies to monitor the progression of damage and disease in this tissue.

19.
Survival prediction from high-dimensional genomic data is dependent on a proper regularization method. With an increasing number of such methods proposed in the literature, comparative studies are called for and some have been performed. However, there is currently no consensus on which prediction assessment criterion should be used for time-to-event data. Without firm knowledge of whether the choice of evaluation criterion may affect the conclusions made as to which regularization method performs best, these comparative studies may be of limited value. In this paper, four evaluation criteria are investigated: the log-rank test for two groups, the area under the time-dependent ROC curve (AUC), an R²-measure based on the Cox partial likelihood, and an R²-measure based on the Brier score. The criteria are compared according to how they rank six widely used regularization methods that are based on the Cox regression model, namely univariate selection, principal components regression (PCR), supervised PCR, partial least squares regression, ridge regression, and the lasso. Based on our application to three microarray gene expression data sets, we find that the results obtained from the widely used log-rank test deviate from the other three criteria studied. For future studies, where one also might want to include non-likelihood or non-model-based regularization methods, we argue in favor of AUC and the R²-measure based on the Brier score, as these neither suffer from the arbitrary splitting into two groups nor depend on the Cox partial likelihood.

20.
Biophysical Journal, 2021, 120(20): 4590–4599
Fluorescence spectroscopy at the single-molecule scale has been indispensable for studying conformational dynamics and rare states of biological macromolecules. Single-molecule two-dimensional (2D) fluorescence lifetime correlation spectroscopy is an emerging technique that holds promise for the study of protein and nucleic acid dynamics, as the technique 1) can resolve conformational dynamics using a single chromophore, 2) resolves forward and reverse transitions independently, and 3) has a dynamic window ranging from microseconds to seconds. However, the calculation of a 2D fluorescence relaxation spectrum requires an inverse Laplace transform (ILT), which is an ill-conditioned inversion that must be estimated numerically through a regularized minimization. Current methods for performing ILTs of fluorescence relaxation can be computationally inefficient, sensitive to noise corruption, and difficult to implement. Here, we adopt an approach developed for NMR spectroscopy (T1-T2 relaxometry) to perform one-dimensional (1D) and 2D-ILTs on single-molecule fluorescence spectroscopy data using singular value decomposition and Tikhonov regularization. This approach provides fast, robust, and easy-to-implement Laplace inversions of single-molecule fluorescence data. We compare this approach to the widely used maximum entropy method.
