Similar Articles
20 similar articles found
1.
Two methods to improve on the accuracy of the Tikhonov regularization technique commonly used for the stable recovery of solutions to ill-posed problems are presented. These methods do not require a priori knowledge of the properties of the solution or of the error. Rather, they exploit the observed properties of overregularized and underregularized Tikhonov solutions so as to impose linear constraints on the sought-after solution. The two methods were applied to the inverse problem of electrocardiography using a spherical heart-torso model and simulated inner-sphere (epicardial) and outer-sphere (body) potential distributions. It is shown that if the overregularized and underregularized Tikhonov solutions are chosen properly, the two methods yield epicardial solutions that are not only more accurate than the optimal Tikhonov solution but also provide other qualitative information, such as the correct position of the extrema, not obtainable using ordinary Tikhonov regularization. A heuristic method to select the overregularized and underregularized solutions is discussed.
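As a rough illustration of the raw ingredients these methods combine, the sketch below (Python/NumPy; the transfer matrix A and data b are synthetic placeholders, not from the paper) computes ordinary zero-order Tikhonov solutions at a large and a small regularization parameter, i.e. the overregularized and underregularized solutions the constraint-construction step starts from. The constraint construction itself is method-specific and not reproduced here.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov solution: x = argmin ||A x - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned transfer matrix and noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 30)) @ np.diag(0.9 ** np.arange(30))
x_true = np.sin(np.linspace(0, np.pi, 30))
b = A @ x_true + 1e-3 * rng.standard_normal(40)

x_over = tikhonov(A, b, lam=1.0)    # overregularized: smooth, damped extrema
x_under = tikhonov(A, b, lam=1e-4)  # underregularized: oscillatory
# The paper's methods derive linear constraints on the final solution from
# the observed behaviour of x_over and x_under (details are method-specific).
```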

2.
One of the fundamental problems in theoretical electrocardiography can be characterized by an inverse problem. We present new methods for achieving better estimates of heart surface potential distributions in terms of torso potentials through an inverse procedure. First, we outline an automatic adaptive refinement algorithm that minimizes the spatial discretization error in the transfer matrix, increasing the accuracy of the inverse solution. Second, we introduce a new local regularization procedure, which works by partitioning the global transfer matrix into submatrices, allowing for varying amounts of smoothing. Each submatrix represents a region within the underlying geometric model in which regularization can be specifically ‘tuned’ using an a priori scheme based on the L-curve method. This local regularization method can provide a substantial increase in accuracy compared to global regularization schemes. Within this context of local regularization, we show that a generalized version of the singular value decomposition (GSVD) can further improve the accuracy of ECG inverse solutions compared to standard SVD and Tikhonov approaches. We conclude with specific examples of these techniques using geometric models of the human thorax derived from MRI data.
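The abstract tunes each submatrix with an a priori L-curve scheme. A common way to automate the L-curve choice is to pick the parameter at the corner (point of maximum curvature) of the log-log curve of residual norm versus solution norm; the sketch below is one simple finite-difference version of that idea, not necessarily the paper's exact scheme.

```python
import numpy as np

def l_curve_lambda(A, b, lambdas):
    """Pick the Tikhonov parameter at the corner (maximum curvature) of the
    discrete log-log L-curve. A crude finite-difference approximation."""
    n = A.shape[1]
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))  # log residual norm
        eta.append(np.log(np.linalg.norm(x)))          # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    # curvature of the parametric curve (rho(lam), eta(lam))
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[np.argmax(kappa)]
```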

3.
In magnetoencephalography (MEG), reconstructing the image of the magnetic source distribution is an ill-posed problem: suitable a priori constraints must be introduced to convert it into a well-posed one. With a nonparametric distributed-source model, magnetic source imaging amounts to solving an ill-conditioned, underdetermined system of linear equations. The method adopted here builds on minimum-norm estimation and Tikhonov regularization, constraining the solution space using both the mathematics of the algorithm itself and related anatomical and neurophysiological information. A region-weighting operator is proposed and combined with depth weighting in order to obtain a physiologically reasonable neural current distribution. Simulation experiments show that satisfactory reconstructions can be obtained; the limitations of the method and directions for future work are also discussed.
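A minimal sketch of the weighted minimum-norm (Tikhonov) inverse this abstract builds on, for an underdetermined lead-field system. The specific weights here are illustrative assumptions, not the paper's operators; region weights could be folded into w in the same way as the depth weights shown.

```python
import numpy as np

def weighted_mne(L, b, w, lam):
    """Weighted minimum-norm estimate for the underdetermined system L j = b.
    Minimizes ||W j||^2 with a Tikhonov-regularized data fit:
    j = W^-2 L^T (L W^-2 L^T + lam I)^-1 b,  W = diag(w)."""
    Winv2 = np.diag(1.0 / w**2)
    m = L.shape[0]
    return Winv2 @ L.T @ np.linalg.solve(L @ Winv2 @ L.T + lam * np.eye(m), b)

# Illustrative depth weighting: deep sources have weak lead fields, so using
# the lead-field column norms as weights keeps them from being suppressed.
# w = np.linalg.norm(L, axis=0)
```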

4.
This paper describes a procedure, based on Tikhonov regularization, for extracting the shear stress versus shear rate relationship and the yield stress of blood from capillary viscometry data. The relevant equations and the mathematical nature of the problem are briefly described. The procedure is then applied to three sets of capillary viscometry data for blood taken from the literature. From each data set the procedure computes the complete shear stress versus shear rate relationship and the yield stress. Since the procedure does not rely on any assumed constitutive equation, the computed rheological properties are model-independent. These properties are compared against one another and against independent measurements. They are found to be in good agreement for shear stresses greater than 0.1 Pa but show significant deviations for shear stresses below this level. A possible way of improving this situation is discussed.

5.
Computational Approaches to Solving Equations Arising from Wound Healing
In the wound healing process, the cell movement associated with chemotaxis generally outweighs the movement associated with random motion, leading to advection-dominated mathematical models of wound healing. The equations in these models must be solved with care, but often inappropriate approaches are adopted. Two one-dimensional test problems arising from advection-dominated models of wound healing are solved using four algorithms: MATLAB’s inbuilt routine pdepe.m, the Numerical Algorithms Group routine d03pcf.f, and two finite volume methods. The first finite volume method is based on a first-order upwinding treatment of chemotaxis terms and the second on a flux limiting approach. The first test problem admits an analytic solution which can be used to validate the numerical results by analyzing two measures of the error for each method: the average absolute difference and a mass balance error. These criteria, as well as the visual comparison between the numerical methods and the exact solution, lead us to conclude that flux limiting is the best approach to solving advection-dominated wound healing problems numerically in one dimension. The second test problem is a coupled nonlinear three-species model of wound healing angiogenesis. Measurement of the mass balance error for this test problem further confirms our hypothesis that flux limiting is the most appropriate method for solving advection-dominated governing equations in wound healing models. We also consider two two-dimensional test problems arising from wound healing, one that admits an analytic solution and a more complicated problem of blood vessel growth into a devascularized wound bed. The results from the two-dimensional test problems also demonstrate that the flux limiting treatment of advective terms is ideal for an advection-dominated problem.
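For concreteness, here is the advective kernel being compared: one explicit finite-volume step of the linear advection equation u_t + a u_x = 0 (a > 0), with either pure first-order upwinding or a van Leer flux-limited correction. This is a generic stand-in on a periodic grid, not the paper's chemotaxis model.

```python
import numpy as np

def step_flux_limited(u, a, dt, dx, limiter=True):
    """One explicit step of u_t + a u_x = 0 (a > 0) on a periodic grid:
    first-order upwind flux plus an optional van Leer limited correction."""
    nu = a * dt / dx                               # CFL number; needs nu <= 1
    du = np.roll(u, -1) - u                        # u_{i+1} - u_i
    if limiter:
        eps = 1e-12
        r = (u - np.roll(u, 1)) / (du + eps)       # smoothness ratio
        phi = (r + np.abs(r)) / (1.0 + np.abs(r))  # van Leer limiter
    else:
        phi = 0.0                                  # pure first-order upwind
    # numerical flux at i+1/2: upwind plus limited anti-diffusive correction
    F = a * u + 0.5 * a * (1.0 - nu) * phi * du
    return u - (dt / dx) * (F - np.roll(F, 1))
```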

6.
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
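The filter-factor representation writes the Tikhonov solution in the SVD basis with factors f_i = s_i^2 / (s_i^2 + lam^2), so that small singular directions are damped and large ones pass through. A minimal sketch on a generic matrix (not gene expression data):

```python
import numpy as np

def filter_factor_solution(A, b, lam):
    """Tikhonov solution expressed through SVD filter factors:
    x = sum_i f_i * (u_i . b / s_i) * v_i,  f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # ~1 for large s_i, ~0 for small s_i
    coeffs = f * (U.T @ b) / s
    return Vt.T @ coeffs, f

# Inspecting f shows how many SVD directions the discriminant really uses,
# which is the kind of diagnostic the abstract refers to.
```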

7.
In this work we address the problem of the robust identification of unknown parameters of a cell population dynamics model from experimental data on the kinetics of cells labelled with a fluorescence marker defining the division age of the cell. The model is formulated by a first order hyperbolic PDE for the distribution of cells with respect to the structure variable x (or z) being the intensity level (or the log10-transformed intensity level) of the marker. The parameters of the model are the rate functions of cell division, death, label decay and the label dilution factor. We develop a computational approach to the identification of the model parameters with a particular focus on the cell birth rate α(z) as a function of the marker intensity, assuming the other model parameters are scalars to be estimated. To solve the inverse problem numerically, we parameterize α(z) and apply a maximum likelihood approach. The parameterization is based on cubic Hermite splines defined on a coarse mesh with either equally spaced a priori fixed nodes or nodes to be determined in the parameter estimation procedure. Ill-posedness of the inverse problem is indicated by multiple minima. To treat the ill-posed problem, we apply Tikhonov regularization with the regularization parameter determined by the discrepancy principle. We show that the solution of the regularized parameter estimation problem is consistent with the data set to an accuracy within the noise level of the measurements.
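A sketch of the discrepancy principle for a generic linear Tikhonov problem (the paper's estimation problem is nonlinear, so this only conveys the idea): choose the regularization parameter so the residual matches the noise level delta, exploiting the fact that the residual norm grows monotonically with the parameter.

```python
import numpy as np

def discrepancy_lambda(A, b, delta, lo=1e-8, hi=1e4, iters=60):
    """Bisection (on a log scale) for the Tikhonov parameter satisfying
    ||A x_lam - b|| = delta; the residual is monotone increasing in lam."""
    n = A.shape[1]
    def resid(lam):
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        return np.linalg.norm(A @ x - b)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if resid(mid) < delta:
            lo = mid            # residual below noise level: smooth more
        else:
            hi = mid
    return np.sqrt(lo * hi)
```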

8.
An efficient rank based approach for closest string and closest substring
Dinu LP, Ionescu R. PLoS ONE 2012;7(6):e37576.
This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results.
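One common formulation of rank distance on strings indexes repeated characters by occurrence and takes each annotated symbol's rank to be its 1-based position; symbols present in only one string contribute their own rank. The sketch below codes that formulation directly; it is a reading of the standard definition, not the authors' implementation.

```python
def rank_distance(s1, s2):
    """Rank distance between strings: sum |rank(s1) - rank(s2)| over shared
    occurrence-annotated symbols, plus the rank itself for unmatched ones."""
    def ranks(s):
        seen, r = {}, {}
        for pos, ch in enumerate(s, start=1):
            seen[ch] = seen.get(ch, 0) + 1
            r[(ch, seen[ch])] = pos      # key: (character, occurrence index)
        return r
    r1, r2 = ranks(s1), ranks(s2)
    return sum(abs(r1.get(k, 0) - r2.get(k, 0)) for k in set(r1) | set(r2))

# Example: rank_distance("abc", "bca") == |1-3| + |2-1| + |3-2| == 4
```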

9.
Large deformation analysis of orthodontic appliances
The deformations of orthodontic appliances used for space closure are large, so any mathematical analysis requires a nonlinear approach. Existing incremental finite element and finite difference numerical methods suffer from excessive computational effort when analyzing these problems. An accurate segmental technique is proposed to handle these difficulties in an extremely efficient fashion. The segmental technique starts by assuming that an orthodontic appliance is composed of a number of smaller segments, the ends of which undergo small relative rotation. With an appropriate choice of local coordinate system, the equilibrium equations for each segment are linearized and solved in a straightforward manner. The segments are then assembled using geometric and force compatibility relations similar to the transfer matrix method. Consequently, the original nonlinear boundary value problem is solved as a sequence of linear initial value problems which converge to the required boundary conditions. As only one segment need be considered at a time, the computations can be performed accurately and efficiently on a PC-type computer. Although an iterative solution is used to match the boundary conditions, the time required to solve a given problem ranges from a few seconds to a couple of minutes depending on the initial geometric complexity. The accuracy of the segmental technique is verified by comparison with an exact solution for an initially curved cantilever beam with an end load. In addition, comparisons are made with existing experimental and numerical results as well as with a new set of experimental data. In all cases the segmental technique is in excellent agreement with the results of these other studies.

10.
Equation-solving programs for microcomputers make the numerical solution of algebraic equations an easy task. It is no longer necessary to learn or to program algorithms for the solution of many different types of equations. A single equation or a set of simultaneous equations may simply be entered into the computer and numerically solved for unknowns without concern as to whether the equations are linear or non-linear. Several examples of possible applications of equation-solving programs are discussed. Solution times for these examples are given for SEQS on the Apple II and Macintosh computers. The example sets of equations, which include chemical equilibrium and enzyme kinetics problems, have been chosen to demonstrate important aspects of the uses and limitations of equation solving. The four examples discussed are: a two-compartment pharmacokinetic model, citric acid ionization in aqueous solution, an enzyme inhibition model, and an example of the application of an equation-solving program in doing a simple non-linear regression problem.
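A modern equivalent of such an equation-solving session, using scipy.optimize.fsolve on a simplified monoprotic analogue of the acid-ionization example (citric acid itself is triprotic; the constants below are illustrative, roughly those of acetic acid):

```python
import numpy as np
from scipy.optimize import fsolve

# Simplified monoprotic ionization: HA <-> H+ + A-, plus water autoionization.
# Unknowns: h = [H+], a = [A-]; C, Ka, Kw are illustrative values.
C, Ka, Kw = 0.01, 1.8e-5, 1e-14

def equations(v):
    h, a = v
    return [Ka * (C - a) - h * a,   # acid equilibrium: Ka = [H+][A-]/[HA]
            h - a - Kw / h]         # charge balance: [H+] = [A-] + [OH-]

h, a = fsolve(equations, x0=[1e-3, 1e-3])
print(f"pH = {-np.log10(h):.2f}")   # about 3.4 for these values
```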

11.
We investigate the possibility of using body surface potential maps to image the extracellular potassium concentration during regional ischemia. The problem is formulated as an inverse problem based on a linear approximation of the bidomain model, where we minimize the difference between the results of the model and observations of body surface potentials. The minimization problem is solved by a one-shot technique, where the original PDE system, an adjoint problem, and the relation describing the minimum are solved simultaneously. This formulation of the problem requires the solution of a 5 × 5 system of linear partial differential equations. The performance of the model is investigated by performing tests based on synthetic data. We find that the model will in many cases detect the correct position and approximate size of the ischemic regions, while some cases are more difficult to locate. It is observed that a simple post-processing of the results produces images that are qualitatively very similar to the true solution.

12.
Non-negative matrix factorization (NMF) condenses high-dimensional data into lower-dimensional models subject to the requirement that data can only be added, never subtracted. However, the NMF problem does not have a unique solution, creating a need for additional constraints (regularization constraints) to promote informative solutions. Regularized NMF problems are more complicated than conventional NMF problems, creating a need for computational methods that incorporate the extra constraints in a reliable way. We developed novel methods for regularized NMF based on block-coordinate descent with proximal point modification and a fast optimization procedure over the alpha simplex. Our framework has important advantages in that it (a) accommodates a wide range of regularization terms, including sparsity-inducing terms like the ℓ1 penalty, (b) guarantees that the solutions satisfy necessary conditions for optimality, ensuring that the results have well-defined numerical meaning, (c) allows the scale of the solution to be controlled exactly, and (d) is computationally efficient. We illustrate the use of our approach in the context of gene expression microarray data analysis. The improvements described remedy key limitations of previous proposals, strengthen the theoretical basis of regularized NMF, and facilitate the use of regularized NMF in applications.
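Optimization over the simplex plausibly relies on Euclidean projection onto it; the standard O(n log n) sort-and-threshold projection is sketched below as a likely building block, not necessarily the paper's exact procedure.

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto the simplex {x : x >= 0, sum(x) = z}
    via the standard sort-and-threshold (water-filling) algorithm."""
    u = np.sort(v)[::-1]                          # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - z))[0][-1]  # last index kept positive
    theta = (css[rho] - z) / (rho + 1.0)          # water-filling threshold
    return np.maximum(v - theta, 0.0)

# e.g. project_simplex(np.array([0.5, 1.2, -0.3])) -> nonnegative, sums to 1
```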

13.
The assembly system design problem (ASDP) is to prescribe the minimum-cost assignment of machines, tooling, and tasks to stations, observing task precedence relationships and cycle time requirements. The ASDP with tool changes (ASDPTC) also prescribes the optimal sequence of operations at each station, including tool changes, which are important, for example, in robotic assembly. A unique solution approach decomposes the model into a master problem, which is a minimum-cost network-flow problem that can be solved as a linear program, and subproblems, which are constrained shortest-path problems that generate station configurations. Subproblems are solved on state-operation networks, which extend earlier formulations to incorporate tooling considerations. This paper presents a specialized algorithm to solve the subproblems. Computational tests benchmark the approach on several classes of problems, and the results are promising. In particular, the tests demonstrate the importance of using engineering judgment to manage problem complexity by controlling the size of state-operation networks.

14.
Haplotype data are especially important in the study of complex diseases since they contain more information than genotype data. However, obtaining haplotype data is technically difficult and costly. Computational methods have proved to be an effective way of inferring haplotype data from genotype data. One of these methods, the haplotype inference by pure parsimony approach (HIPP), casts the problem as an optimization problem and as such has been proved to be NP-hard. We have designed and developed a new preprocessing procedure for this problem. Our proposed algorithm works with groups of haplotypes rather than individual haplotypes. It iteratively searches for and deletes haplotypes that are not helpful for finding the optimal solution. This preprocessing can be coupled with any of the current HIPP solvers that need to preprocess the genotype data. To test it, we used two state-of-the-art solvers, RTIP and GAHAP, together with simulated and real HapMap data. Due to the reductions in computational time and memory achieved by our preprocessing, problem instances that were previously intractable can now be solved efficiently.

15.
Schuck P. Biophysical Journal 2000;78(3):1606-1619.
A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and that arising from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). The method can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. It yields more detail on the size distribution than van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.
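A hedged sketch of the core regularized fit: recover a non-negative distribution c from data b = A c with a maximum-entropy penalty. In the paper, the columns of A would be finite-element Lamm-equation solutions for discrete s values; here A and b are generic placeholders, and the variance-based adjustment of alpha is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_distribution(A, b, alpha, floor=1e-10):
    """min over c >= 0 of ||A c - b||^2 + alpha * sum(c log c),
    solved with bound-constrained L-BFGS; a sketch, not the paper's solver."""
    n = A.shape[1]
    def obj(c):
        c = np.maximum(c, floor)
        r = A @ c - b
        return r @ r + alpha * np.sum(c * np.log(c))
    def grad(c):
        c = np.maximum(c, floor)
        return 2 * A.T @ (A @ c - b) + alpha * (np.log(c) + 1.0)
    c0 = np.full(n, 1.0 / n)                     # uniform starting guess
    res = minimize(obj, c0, jac=grad, method="L-BFGS-B",
                   bounds=[(floor, None)] * n)
    return res.x
```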

16.
Anticipated hand movements of amputee subjects are considered difficult to classify using only electromyogram (EMG) signals and machine learning techniques. For a long time, classifying such sEMG signals has been treated as a non-linear problem, and the issue of signal sparsity has not received detailed attention across large sets of action classes. To address these problems, this paper proposes a linear-time classifier termed Random Fourier Mapped Collaborative Representation with a distance-weighted Tikhonov regularization matrix (RFMCRT). RFMCRT tackles the non-linear problem via Random Fourier Features and the sparsity issue via collaborative representation. The projection error of the Random Fourier Features is reduced by projecting to the same dimension as the original feature space and then finding the collaborative representation, with an optional non-negative constraint (RFMNNCRT). The two proposed classifiers were tested with time-domain features computed from EMG signals in the NINAPRO databases, using a non-overlapping sliding window of 256 ms. Due to the random nature of the proposed classifiers, we report the average and worst-case performance over 50 trials and compare them with other published classifiers. The results show that RFMNNCRT (average case) outperformed state-of-the-art classifiers with an accuracy of 93.44% for intact subjects and 55.67% for amputee subjects. In the worst case, RFMCRT still achieves acceptable performance, with accuracies of 91.55% and 50.27%, respectively. The proposed classifier guarantees acceptable levels of accuracy for large classes of hand movements while maintaining good computational efficiency in comparison to LDA and SVM.
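A sketch of the two ingredients named above: a random Fourier feature map approximating an RBF kernel, and collaborative representation classification with a plain ridge (Tikhonov) term. The paper's distance-weighted Tikhonov matrix and NINAPRO time-domain features are replaced by generic placeholders; all constants are illustrative.

```python
import numpy as np

def rff(X, D, gamma, rng):
    """Random Fourier features approximating k(x,y) = exp(-gamma ||x-y||^2):
    z(x) = sqrt(2/D) * cos(x W + b), W ~ N(0, 2*gamma), b ~ U[0, 2pi]."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

def crc_classify(Ztr, ytr, z, lam):
    """Collaborative representation: code the test sample z over ALL mapped
    training samples (rows of Ztr) with a ridge penalty, then assign the
    class whose samples give the smallest reconstruction residual.
    (Plain lam*I here; the paper uses a distance-weighted Tikhonov matrix.)"""
    G = Ztr @ Ztr.T + lam * np.eye(Ztr.shape[0])
    c = np.linalg.solve(G, Ztr @ z)               # one code per training sample
    classes = np.unique(ytr)
    resid = [np.linalg.norm(z - Ztr[ytr == k].T @ c[ytr == k]) for k in classes]
    return classes[int(np.argmin(resid))]
```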

17.
ABSTRACT: BACKGROUND: The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima into which standard optimization methods may fall during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are typically classified into stochastic and deterministic. The former usually lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. RESULTS: This work presents a deterministic outer-approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, reformulates the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming (MILP) problem that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. CONCLUSION: The capabilities of our approach were tested on two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near-optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

18.
During hyperthermia therapy it is desirable to know the entire temperature field in the treatment region. However, accurately inferring this field from the limited number of temperature measurements available is very difficult, and thus state and parameter estimation methods have been used to attempt to solve this inherently ill-posed problem. To compensate for this ill-posedness, zero-order Tikhonov regularization has been used, significantly improving the results of the estimation procedure. It is also shown that the accuracy of the temperature estimates depends upon the value of the regularization parameter, which has an optimal value that is dependent on the perfusion pattern and magnitude. In addition, the transient power-off sampling period (i.e., the length of time over which transient data are collected and used) influences the accuracy of the estimates, and an optimal sampling period is shown to exist. The effects of additive measurement noise are also investigated, as are the effects of the initial guess of the perfusion values and of both symmetric and asymmetric blood perfusion patterns. Random perfusion patterns with noisy data are the most difficult cases to evaluate. The cases studied are not a comprehensive set, but they continue to show the feasibility of using state and parameter estimation methods to reconstruct the entire temperature field.

19.
This paper presents a new clique partitioning (CP) model for the Group Technology (GT) problem. The new model, based on a novel 0/1 quadratic programming formulation, addresses multiple objectives in GT problems by drawing on production relationships to assign differing weights to machine/part pairs. The use of this model, which is readily solved by a basic tabu search heuristic, is illustrated by solving 36 standard test problems from the literature. The efficiency of our new CP model is further illustrated by solving three large-scale problems whose linear programming relaxations are much too large to be solved by CPLEX. An analysis of the quality of the solutions produced, along with comparisons with other models and methods, highlights both the attractiveness and robustness of the proposed method.

20.
The spatial variation of the extracellular action potentials (EAP) of a single neuron contains information about the size and location of the dominant current source of its action potential generator, which is typically in the vicinity of the soma. Using this dependence in reverse in a three-component realistic probe + brain + source model, we solved the inverse problem of characterizing the equivalent current source of an isolated neuron from the EAP data sampled by an extracellular probe at multiple independent recording locations. We used a dipole for the model source because there is extensive evidence it accurately captures the spatial roll-off of the EAP amplitude, and because, as we show, dipole localization, beyond a minimum cell-probe distance, is a more accurate alternative to approaches based on monopole source models. Dipole characterization is separable into a linear dipole moment optimization where the dipole location is fixed, and a second, nonlinear, global optimization of the source location. We solved the linear optimization on a discrete grid via the lead fields of the probe, which can be calculated for any realistic probe + brain model by the finite element method. The global source location was optimized by means of Tikhonov regularization that jointly minimizes model error and dipole size. The particular strategy chosen reflects the fact that the dipole model is used in the near field, in contrast to the typical prior applications of dipole models to EKG and EEG source analysis. We applied dipole localization to data collected with stepped tetrodes whose detailed geometry was measured via scanning electron microscopy. The optimal dipole could account for 96% of the power in the spatial variation of the EAP amplitude. Among various model error contributions to the residual, we especially address the error in probe geometry and the extent to which it biases estimates of dipole parameters. This dipole characterization method can be applied to any recording technique that has the capabilities of taking multiple independent measurements of the same single units.
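The separation described above (a linear moment fit at each fixed location inside an outer search over locations) can be sketched with the analytic infinite-medium dipole potential standing in for the FEM lead fields of the realistic probe + brain model; the conductivity and regularization values below are illustrative assumptions.

```python
import numpy as np

def lead_field(r_elec, r0, sigma=0.3):
    """3-column lead field of a current dipole at r0 for electrode positions
    r_elec (m x 3), infinite homogeneous medium of conductivity sigma:
    V = p . (r - r0) / (4 pi sigma |r - r0|^3).  A stand-in for FEM fields."""
    d = r_elec - r0
    return d / (4 * np.pi * sigma
                * np.linalg.norm(d, axis=1, keepdims=True) ** 3)

def fit_dipole(r_elec, v, candidates, lam=1e-12):
    """Grid search over candidate locations; at each location the moment is a
    linear Tikhonov-regularized least-squares fit, jointly penalizing model
    error and dipole size, as in the abstract's strategy."""
    best = (np.inf, None, None)
    for r0 in candidates:
        L = lead_field(r_elec, r0)                      # (m, 3)
        p = np.linalg.solve(L.T @ L + lam * np.eye(3), L.T @ v)
        err = np.linalg.norm(L @ p - v) ** 2 + lam * (p @ p)
        if err < best[0]:
            best = (err, r0, p)
    return best[1], best[2]                             # location, moment
```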
