Similar Documents
20 similar documents found (search time: 922 ms)
1.
In this paper, we have applied an efficient wavelet-based approximation method for solving the Fisher-type and fractional Fisher-type equations arising in the biological sciences. To the best of our knowledge, no rigorous wavelet solution has so far been reported for the Fisher and fractional Fisher equations. The highest derivative in the differential equation is expanded into a Legendre series; this approximation is integrated while the boundary conditions are applied using integration constants. With the help of Legendre wavelet operational matrices, the Fisher equation and the fractional Fisher equation are converted into a system of algebraic equations. Block-pulse functions are used to investigate the Legendre wavelet coefficient vectors of the nonlinear terms. The convergence of the proposed methods is proved. Finally, we give some numerical examples to demonstrate the validity and applicability of the method.
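For reference, the equations named in the abstract have the following standard forms (the paper's exact normalization, coefficients and boundary conditions may differ):

```latex
\frac{\partial u}{\partial t} = D\,\frac{\partial^{2} u}{\partial x^{2}} + r\,u(1-u),
\qquad
\frac{\partial^{\alpha} u}{\partial t^{\alpha}} = D\,\frac{\partial^{2} u}{\partial x^{2}} + r\,u(1-u),
\quad 0 < \alpha \le 1,
```

where D is a diffusion coefficient, r a growth rate, and the time-fractional derivative of order α is most commonly taken in the Caputo sense.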
2.
A procedure is presented for constructing an exact confidence interval for the ratio of the two variance components in a possibly unbalanced mixed linear model that contains a single set of m random effects. This procedure can be used in animal and plant breeding problems to obtain an exact confidence interval for a heritability. The confidence interval can be defined in terms of the output of a least squares analysis. It can be computed by a graphical or iterative technique requiring the diagonalization of an m × m matrix or, alternatively, the inversion of a number of m × m matrices. Confidence intervals that are approximate can be obtained with much less computational burden, using either of two approaches. The various confidence interval procedures can be extended to some problems in which the mixed linear model contains more than one set of random effects. Corresponding to each interval procedure is a significance test and one or more estimators.
3.
The origins of Fisher information are in its use as a performance measure for parametric estimation. We augment this and show that the Fisher information can characterize the performance in several other significant signal processing operations. For processing of a weak signal in additive white noise, we demonstrate that the Fisher information determines (i) the maximum output signal-to-noise ratio for a periodic signal; (ii) the optimum asymptotic efficacy for signal detection; (iii) the best cross-correlation coefficient for signal transmission; and (iv) the minimum mean square error of an unbiased estimator. This unifying picture, via inequalities on the Fisher information, is used to establish conditions where improvement by noise through stochastic resonance is feasible or not.
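For readers less familiar with the quantity involved, the Fisher information of a scalar parameter θ and the Cramér-Rao bound referred to in item (iv) are, in their standard form:

```latex
I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial \theta}\,\ln p(x;\theta)\right)^{2}\right],
\qquad
\operatorname{Var}\!\left(\hat{\theta}\right) \ge \frac{1}{I(\theta)}
\quad \text{for any unbiased estimator } \hat{\theta}.
```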
4.
The idea that a sparse representation is the computational principle of visual systems has been supported by Olshausen and Field [Nature (1996) 381: 607–609] and many other studies. On the other hand, neurons in the inferotemporal cortex respond to moderately complex features called icon alphabets, and such neurons respond invariantly to the stimulus position. To incorporate this property into sparse representation, an algorithm is proposed that trains basis functions using sparse representations with shift invariance. Shift invariance means that basis functions are allowed to move across the image data and that the coefficients are equipped with shift invariance. The algorithm is applied to natural images. It is ascertained that moderately complex graphical features emerge that are not as simple as Gabor filters and not as complex as real objects. Shift invariance and moderately complex features correspond to the properties of icon alphabets. The results show that there is another connection between visual information processing and sparse representations. Received: 3 November 1999 / Accepted in revised form: 17 February 2000
5.
We calculate and analyze the information capacity-achieving conditions and their approximations in a simple neuronal system. The input–output properties of individual neurons are described by an empirical stimulus–response relationship and the metabolic cost of neuronal activity is taken into account. The exact (numerical) results are compared with a popular “low-noise” approximation method which employs the concepts of parameter estimation theory. We show that the approximate method gives reliable results only in the case of significantly low response variability. By employing specialized numerical procedures we demonstrate that optimal information transfer can be nearly achieved by a number of different input distributions. This implies that the precise structure of the capacity-achieving input is of lesser importance than the value of the capacity. Finally, we illustrate with an example that an innocuous-looking stimulus–response relationship may lead to a problematic interpretation of the obtained Fisher information values.
6.
Friedl H, Kauermann G. Biometrics 2000, 56(3): 761-767
A procedure is derived for computing standard errors of EM estimates in generalized linear models with random effects. Quadrature formulas are used to approximate the integrals in the EM algorithm, where two different approaches are pursued, i.e., Gauss-Hermite quadrature in the case of Gaussian random effects and nonparametric maximum likelihood estimation for an unspecified random effect distribution. An approximation of the expected Fisher information matrix is derived from an expansion of the EM estimating equations. This allows for inferential arguments based on EM estimates, as demonstrated by an example and simulations.
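As an illustration of the Gauss-Hermite step (a minimal sketch under my own choice of node count and integrand, not code from the paper), an expectation over a Gaussian random effect can be approximated as follows:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def gauss_hermite_expectation(f, sigma, n_nodes=20):
    """Approximate E[f(b)] for b ~ N(0, sigma^2) with Gauss-Hermite quadrature.

    Uses the change of variables b = sqrt(2)*sigma*x, so that
    E[f(b)] = (1/sqrt(pi)) * sum_i w_i * f(sqrt(2)*sigma*x_i).
    """
    x, w = hermgauss(n_nodes)          # nodes/weights for the exp(-x^2) weight function
    return np.sum(w * f(np.sqrt(2.0) * sigma * x)) / np.sqrt(np.pi)

# Example: E[exp(b)] for b ~ N(0, 0.5^2) should equal exp(sigma^2/2) ~ 1.1331
print(gauss_hermite_expectation(np.exp, sigma=0.5))
```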
7.
Cognitive processes such as decision-making, rate calculation and planning require an accurate estimation of durations in the supra-second range (interval timing). In addition to being accurate, interval timing is scale invariant: the time-estimation errors are proportional to the estimated duration. The origin and mechanisms of this fundamental property are unknown. We discuss the computational properties of a circuit consisting of a large number of (input) neural oscillators projecting onto a small number of (output) coincidence-detector neurons, which allows time to be coded by the pattern of coincidental activation of its inputs. We showed analytically and checked numerically that time-scale invariance emerges from the neural noise. In particular, we found that errors or noise while storing or retrieving information about the memorized criterion time produce a symmetric, Gaussian-like output whose width increases linearly with the criterion time. In contrast, frequency variability produces an asymmetric, long-tailed Gaussian-like output which also obeys the scale-invariance property. In this architecture, time-scale invariance depends neither on the details of the input population nor on the probability distribution of the noise.
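A toy simulation, under an assumed multiplicative noise model for the memorized criterion time (my own assumption, not the paper's circuit), illustrates the scalar property described above: the spread of the produced times grows linearly with the criterion while their coefficient of variation stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)
cv_memory = 0.15          # assumed coefficient of variation of the memory noise

for criterion in (5.0, 10.0, 20.0):                   # criterion times (s)
    # memory noise assumed multiplicative: retrieved value = T * (1 + eps)
    responses = criterion * (1.0 + cv_memory * rng.standard_normal(100_000))
    print(f"T = {criterion:5.1f}  mean = {responses.mean():6.2f}  "
          f"std = {responses.std():5.2f}  std/mean = {responses.std()/responses.mean():.3f}")
# std grows linearly with T while std/mean stays constant -- the scalar property
```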
8.
For some applications of the Wilcoxon-Mann-Whitney statistic its variance has to be estimated. This is the case, e.g., for the test of Potthoff (1963) to detect differences in the medians of two symmetric distributions, as well as for the computation of approximate confidence bounds for the probability P(X1 < X2), cf. Govindarajulu (1968). In the present paper an easy-to-compute variance estimator is proposed which uses only the ranks of the data as information and has the additional property of being unbiased for the finite variance. Because of its invariance under any monotone transformation of the data, its applicability is not confined to quantitative data; the estimator may be applied to ordinal data just as well. Some properties are discussed and a numerical example is given.
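The paper's estimator is not reproduced in the abstract; for comparison, the sketch below computes the rank-based point estimate of P(X1 < X2) together with a commonly used placement-based (Sen/DeLong-style) variance estimate, which is not necessarily the unbiased estimator proposed here.

```python
import numpy as np

def pxy_and_variance(x, y):
    """Estimate p = P(X < Y) from two samples, plus a placement-based variance.

    Placements: v10[i] = fraction of y's exceeding x[i]; v01[j] = fraction of
    x's below y[j]. This is the common Sen/DeLong-style estimator, shown here
    only for illustration.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    m, n = len(x), len(y)
    v10 = np.array([np.mean(y > xi) + 0.5 * np.mean(y == xi) for xi in x])
    v01 = np.array([np.mean(x < yj) + 0.5 * np.mean(x == yj) for yj in y])
    p_hat = v10.mean()                            # equals Mann-Whitney U / (m*n)
    var_hat = np.var(v10, ddof=1) / m + np.var(v01, ddof=1) / n
    return p_hat, var_hat

rng = np.random.default_rng(1)
print(pxy_and_variance(rng.normal(0, 1, 40), rng.normal(0.5, 1, 50)))
```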
9.
In order to control visually-guided voluntary movements, the central nervous system (CNS) must solve the following three computational problems at different levels: (1) determination of a desired trajectory in visual coordinates, (2) transformation of the coordinates of the desired trajectory into body coordinates and (3) generation of the motor command. In this paper, the second and third problems are treated at Marr's computational, representational and hardware levels. We first study the problems at the computational level, and then propose an iterative learning scheme as a possible algorithm. This is trial-and-error learning, akin to the repetitive practice of a golf swing. The amount of motor command needed to coordinate the activities of many muscles is not determined at once, but in a step-wise, trial-and-error fashion over the course of a set of repetitions. Specifically, the motor command in the (n+1)-th iteration is the sum of the motor command in the n-th iteration plus two modification terms which are, respectively, proportional to the acceleration and speed errors between the desired trajectory and the trajectory realized in the n-th iteration. We mathematically formulate this iterative learning control as a Newton-like method in function spaces and prove its convergence under appropriate mathematical conditions using dynamical systems theory and functional analysis. Computer simulations of this iterative learning control of a robotic manipulator in body or visual coordinates are shown. Finally, we propose that areas 2, 5 and 7 of the sensory association cortex are possible sites of this learning control. Further, we propose a neural network model which acquires the transformation matrices from acceleration or velocity to motor command that are used in these schemes.
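A minimal sketch of the update rule described (motor command corrected on each trial by acceleration- and velocity-error terms), applied to an assumed one-dimensional point-mass plant rather than the paper's manipulator model; the gains and trajectory are arbitrary choices.

```python
import numpy as np

dt, T = 0.01, 2.0
t = np.arange(0.0, T, dt)
m, b = 1.0, 0.5                                  # assumed point-mass plant: m*a = u - b*v
x_d = 0.5 * (1 - np.cos(np.pi * t / T))          # desired position (smooth reach)
v_d = np.gradient(x_d, dt)
a_d = np.gradient(v_d, dt)

def simulate(u):
    """Integrate the plant forward under the feedforward command u(t)."""
    x = np.zeros_like(t); v = np.zeros_like(t); a = np.zeros_like(t)
    for i in range(len(t) - 1):
        a[i] = (u[i] - b * v[i]) / m
        v[i + 1] = v[i] + a[i] * dt
        x[i + 1] = x[i] + v[i] * dt
    a[-1] = (u[-1] - b * v[-1]) / m
    return x, v, a

u = np.zeros_like(t)                  # start with no motor command
Ka, Kv = 0.5, 0.5                     # assumed learning gains
for k in range(30):                   # trial-and-error repetitions
    x, v, a = simulate(u)
    u = u + Ka * (a_d - a) + Kv * (v_d - v)      # iterative learning update
    if k % 10 == 0:
        print(f"trial {k:2d}  max position error = {np.abs(x_d - x).max():.4f}")
```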
10.
11.
Parameter-expanded and standard expectation maximisation algorithms are described for reduced rank estimation of covariance matrices by restricted maximum likelihood, fitting the leading principal components only. The convergence behaviour of these algorithms is examined for several examples and contrasted to that of the average information algorithm, and implications for practical analyses are discussed. It is shown that expectation maximisation type algorithms are readily adapted to reduced rank estimation and converge reliably. However, as is well known for the full rank case, the convergence is linear and thus slow. Hence, these algorithms are most useful in combination with the quadratically convergent average information algorithm, in particular in the initial stages of an iterative solution scheme.
12.
Surface electromyographic signals provide useful information about motion intentionality and are therefore a suitable reference signal for control purposes. A continuous classification scheme of five upper-limb movements applied to the myoelectric control of a robotic arm is presented. The classification is based on features extracted from the bispectrum of four EMG signal channels. Among several bispectrum estimators, this paper focuses on the arithmetic mean, median, and trimmed mean estimators, and their ensemble average versions. All bispectrum estimators have been evaluated in terms of accuracy, robustness against outliers, and computational time. The median bispectrum estimator shows low variance and high robustness. Two feature reduction methods for the complex bispectrum matrix are proposed. The first estimates the three classic means (arithmetic, harmonic, and geometric) from the modulus of the bispectrum matrix, and the second estimates the same three means from the square of the real part of the bispectrum matrix. A two-layer feedforward network for movement classification and a dedicated system for the myoelectric control of a robotic arm were used. It was found that the real-time classification performance is similar to that obtained off-line by other authors, and that all volunteers in the practical application successfully completed the control task.
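As a sketch of the underlying computation (generic parameters, not the authors' EMG pipeline), the segment-averaged direct bispectrum estimate and its median variant can be written as:

```python
import numpy as np

def bispectrum(x, seg_len=256, estimator="mean"):
    """Direct bispectrum estimate B(f1, f2) from non-overlapping segments.

    For each segment, compute X(f1) * X(f2) * conj(X(f1 + f2)) and combine
    across segments with the chosen estimator (arithmetic mean, or median of
    real and imaginary parts). Windowing and overlap are omitted in this sketch.
    """
    n_seg = len(x) // seg_len
    segs = np.reshape(x[: n_seg * seg_len], (n_seg, seg_len))
    X = np.fft.fft(segs, axis=1)
    nf = seg_len // 2
    f1 = np.arange(nf)[:, None]
    f2 = np.arange(nf)[None, :]
    B = np.empty((n_seg, nf, nf), dtype=complex)
    for k in range(n_seg):
        Xi = X[k]
        B[k] = Xi[f1] * Xi[f2] * np.conj(Xi[(f1 + f2) % seg_len])
    if estimator == "median":
        return np.median(B.real, axis=0) + 1j * np.median(B.imag, axis=0)
    return B.mean(axis=0)

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)
print(np.abs(bispectrum(x, estimator="median")).max())
```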
13.
A novel method to accomplish efficient numerical simulation of metabolic networks for flux analysis was developed. The only inputs required are the set of stoichiometric balances and the atom mapping matrices of all components of the reaction network. The latter are used to automatically calculate isotopomer mapping matrices. Using the symbolic toolbox of MATLAB, the analytical solution of the stoichiometric balance equation system, the isotopomer balances and the analytical Jacobian matrix of the total set of stoichiometric and isotopomer balances are created automatically. The number of variables in the isotopomer distribution equation system is significantly reduced by applying modified isotopomer mapping matrices, which allow several consecutive isotopomer reactions to be lumped into a single one. The solution of the complete system of equations is improved by implementing an iterative logical loop algorithm and using the analytical Jacobian matrix. This new method provided quick and robust convergence to the root of such equation systems in all cases tested. The method was applied to a network of the lysine-producing Corynebacterium glutamicum. The resulting equation system, of dimension 546 × 546, was derived directly from 12 isotopomer balance equations. The results obtained yielded labeling patterns for the metabolites identical to those from the relaxation method.
14.
In the theory of belief functions, the approximation of a basic belief assignment (BBA) aims to reduce the high computational cost, especially when a large number of focal elements is involved. In traditional BBA approximation approaches, a focal element's own characteristics, such as its mass assignment and cardinality, are usually used separately or jointly as criteria for the removal of focal elements. Besides the computational cost, the distance between the original BBA and the approximated one is also of concern, since it represents the loss of information incurred by the approximation. In this paper, an iterative approximation approach is proposed based on maximizing the closeness, i.e., minimizing the distance between the approximated BBA in the current iteration and the BBA obtained in the previous iteration, where one focal element is removed in each iteration. The iteration stops when the desired number of focal elements is reached. Approaches for evaluating BBA approximations, including the traditional time-based and closeness-based measures as well as newly proposed ones, are also discussed and used to compare traditional BBA approximations with the one proposed here. Experimental results and related analyses are provided to show the rationality and efficiency of the proposed BBA approximation.
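The abstract does not spell out the algorithmic details; the following sketch is one possible reading of the scheme: at each iteration, remove the focal element whose deletion (here with its mass redistributed proportionally over the remaining focal elements, which is my own assumption) minimizes the Jousselme distance to the BBA of the previous iteration, until the target number of focal elements is reached.

```python
import numpy as np

def jousselme_distance(m1, m2):
    """Jousselme distance between two BBAs given as {frozenset: mass} dicts."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    D = np.array([[len(A & B) / len(A | B) if A | B else 1.0
                   for B in focals] for A in focals])
    d = np.array([m1.get(A, 0.0) - m2.get(A, 0.0) for A in focals])
    return np.sqrt(0.5 * d @ D @ d)

def approximate_bba(bba, k_target):
    """Iteratively remove focal elements until only k_target remain."""
    current = dict(bba)
    while len(current) > k_target:
        best = None
        for victim in current:
            reduced = {A: m for A, m in current.items() if A != victim}
            scale = sum(reduced.values())      # assumed: proportional mass redistribution
            candidate = {A: m / scale for A, m in reduced.items()}
            dist = jousselme_distance(current, candidate)
            if best is None or dist < best[0]:
                best = (dist, candidate)
        current = best[1]                      # keep the closest reduced BBA
    return current

bba = {frozenset("a"): 0.30, frozenset("b"): 0.25, frozenset("ab"): 0.05,
       frozenset("abc"): 0.25, frozenset("c"): 0.15}
print(approximate_bba(bba, 3))
```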
15.
Understanding the mechanics of the aortic valve has been a focus of attention for many years in the biomechanics literature, with the aim of improving the longevity of prosthetic replacements. Finite element models have been extensively used to investigate stresses and deformations in the valve in considerable detail. However, the effect of uncertainties in loading, material properties and model dimensions has remained uninvestigated. This paper presents a formal statistical consideration of a selected set of uncertainties on a fluid-driven finite element model of the aortic valve and examines the magnitudes of the resulting output uncertainties. Furthermore, the importance of each parameter is investigated by means of a global sensitivity analysis. To reduce computational cost, a Bayesian emulator-based approach is adopted whereby a Gaussian process is fitted to a small set of training data and then used to infer detailed sensitivity analysis information. From the set of uncertain parameters considered, it was found that output standard deviations were as high as 44% of the mean. It was also found that the material properties of the sinus and aorta were considerably more important in determining leaflet stress than the material properties of the leaflets themselves.
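To illustrate the emulator-based workflow (not the authors' valve model), the sketch below fits a Gaussian-process surrogate to a small training set generated from a cheap stand-in function and then propagates input uncertainty through the emulator by Monte Carlo; the input names, ranges and test function are all assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Stand-in for an expensive FE model: output depends on 3 uncertain inputs (hypothetical)
def expensive_model(theta):
    e_sinus, e_leaflet, pressure = theta.T
    return pressure * (2.0 / e_sinus + 0.3 / e_leaflet)

rng = np.random.default_rng(3)
lo, hi = [0.5, 0.5, 0.8], [2.0, 2.0, 1.2]
X_train = rng.uniform(lo, hi, size=(30, 3))            # small training design
y_train = expensive_model(X_train)

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=[1.0] * 3),
                              normalize_y=True).fit(X_train, y_train)

# Monte Carlo on the cheap emulator instead of the expensive model
X_mc = rng.uniform(lo, hi, size=(20_000, 3))
y_mc = gp.predict(X_mc)
print(f"output std = {100 * y_mc.std() / y_mc.mean():.1f}% of the mean")
```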
16.
Low-frequency collective motions in proteins are generally very important for their biological functions. To study such motions, harmonic dynamics has proved most useful since it is a straightforward method; it consists of the diagonalization of the Hessian matrix of the potential energy, yielding the vibrational spectrum and the directions of internal motions. Unfortunately, the diagonalization of this matrix requires a large amount of computer memory, which is a limiting factor when the protein contains several thousand atoms. To circumvent this limitation we have developed three methods that enable us to diagonalize large matrices using much less computer memory than the usual harmonic dynamics. The first method is approximate; it consists of diagonalizing small blocks of the Hessian matrix, followed by the coupling of the low-frequency modes obtained for each block. It yields the low-frequency vibrational spectrum with a maximum error of 20%. The second method consists, after diagonalizing the small blocks, of coupling the high- and low-frequency modes using an iterative procedure. It yields the exact low-frequency normal modes, but requires a long computational time and suffers from convergence problems. The third method, DIMB (Diagonalization In a Mixed Basis), which has the best performance, consists of coupling the approximate low-frequency modes with the mass-weighted Cartesian coordinates, also using an iterative procedure. It significantly reduces the required computer memory and converges rapidly. The eigenvalues and eigenvectors obtained by this method are without significant error in the chosen frequency range. Moreover, it is a general method applicable to any problem of diagonalization of a large matrix. We report the application of these methods to a deca-alanine helix, trypsin inhibitor, a neurotoxin, and lysozyme. © 1993 John Wiley & Sons, Inc.
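A rough illustration of the first (approximate) scheme: diagonalize small diagonal blocks of the matrix, keep their lowest modes, and couple them by projecting the full matrix onto that reduced basis. The block sizes and test matrix below are arbitrary assumptions, and this is not the DIMB code.

```python
import numpy as np

def low_modes_from_blocks(H, block_size, modes_per_block):
    """Approximate low-frequency eigenpairs of a large symmetric matrix H.

    1) diagonalize each diagonal block, 2) keep its lowest modes,
    3) project the full H onto that reduced basis and rediagonalize
    (a Ritz/subspace step that couples the block modes).
    """
    n = H.shape[0]
    basis = []
    for start in range(0, n, block_size):
        stop = min(start + block_size, n)
        w, v = np.linalg.eigh(H[start:stop, start:stop])
        for col in range(min(modes_per_block, stop - start)):
            e = np.zeros(n)
            e[start:stop] = v[:, col]
            basis.append(e)
    Q = np.array(basis).T                      # n x (reduced dimension)
    w_red, c = np.linalg.eigh(Q.T @ H @ Q)     # coupling of the block modes
    return w_red, Q @ c

rng = np.random.default_rng(4)
A = rng.standard_normal((200, 200))
H = A @ A.T + 200 * np.eye(200)                # symmetric positive definite test matrix
w_approx, _ = low_modes_from_blocks(H, block_size=50, modes_per_block=10)
w_exact = np.linalg.eigvalsh(H)[:10]
print(np.round(w_approx[:10] / w_exact, 3))    # approximation quality of the lowest modes
```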
17.
The emerging field of computational morphodynamics aims to understand the changes that occur in space and time during development by combining three technical strategies: live imaging to observe development as it happens; image processing and analysis to extract quantitative information; and computational modelling to express and test time-dependent hypotheses. The strength of the field comes from the iterative and combined use of these techniques, which has provided important insights into plant development.
18.
A model of population growth is studied in which the Leslie matrix for each time interval is chosen according to a Markov process. It is shown analytically that the distribution of total population number is lognormal at long times. Measures of population growth are compared and it is shown that a mean logarithmic growth rate and a logarithmic variance effectively describe growth and extinction at long times. Numerical simulations are used to explore the convergence to lognormality and the effects of environmental variance and autocorrelation. The results given apply to other geometric growth models which involve nonnegative growth matrices.
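A toy simulation of the model class described (a two-age-class Leslie matrix chosen by a two-state Markov environment; all numbers are invented) showing that the logarithm of total population size behaves approximately Gaussian at long times, i.e. N(t) is approximately lognormal.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two assumed environments (good / bad) with 2-age-class Leslie matrices
L_good = np.array([[1.2, 1.8], [0.6, 0.0]])
L_bad = np.array([[0.3, 0.9], [0.4, 0.0]])
P_stay = 0.7                                   # Markov probability of staying in a state

def log_population(t_final=200):
    n = np.array([10.0, 10.0])
    env = 0
    for _ in range(t_final):
        env = env if rng.random() < P_stay else 1 - env
        n = (L_good if env == 0 else L_bad) @ n
    return np.log(n.sum())

samples = np.array([log_population() for _ in range(2000)])
print(f"log N(t): mean = {samples.mean():.2f}, std = {samples.std():.2f}")
# A histogram of `samples` is close to Gaussian, i.e. N(t) is approximately lognormal.
```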
19.
A desirable genotype is a genotype performing well in a chosen set of environments. Three methods for the identification of desirable genotypes were assessed in two cabbage data sets: regression analysis, multidimensional scaling of dissimilarity matrices, and biplot of deviation matrices. Using the regression approach is not recommended, mainly for two reasons: (1) it is difficult to identify the desirable genotypes since one has to unify three parameters into one decision; (2) the regression method failed to identify the most desirable genotypes in one of the data sets. Multidimensional scaling and the biplot method were in accordance with each other and with the tables of means when different subsets were compared. Consequently, they were considered more adequate for identifying desirable genotypes. In cases where a rank-2 approximation of the analysed matrix was justified, the biplot revealed more information in one display and was therefore considered particularly useful in plant breeding for larger target areas.
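To make the biplot step concrete, the sketch below centres an invented genotype-by-environment table within each environment (the deviation matrix) and reports its rank-2 SVD coordinates, which are what a biplot displays.

```python
import numpy as np

# Invented genotype-by-environment table of mean yields (4 genotypes, 5 environments)
Y = np.array([[52., 60., 47., 55., 64.],
              [50., 58., 49., 54., 61.],
              [45., 66., 40., 58., 70.],
              [55., 57., 52., 53., 60.]])

D = Y - Y.mean(axis=0)                    # deviation from each environment's mean
U, s, Vt = np.linalg.svd(D, full_matrices=False)

genotype_scores = U[:, :2] * s[:2]        # rank-2 biplot coordinates for genotypes
environment_scores = Vt[:2].T             # rank-2 biplot coordinates for environments
explained = (s[:2] ** 2).sum() / (s ** 2).sum()

print(f"variance explained by the rank-2 approximation: {explained:.3f}")
print("genotype scores:\n", genotype_scores.round(2))
print("environment scores:\n", environment_scores.round(2))
```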
20.
MOTIVATION: Geometric representations of proteins and ligands, including atom volumes, atom-atom contacts and solvent accessible surfaces, can be used to characterize interactions between and within proteins, ligands and solvent. Voronoi algorithms permit quantification of these properties by dividing structures into cells with a one-to-one correspondence with constituent atoms. As there is no generally accepted measure of atom-atom contacts, a continuous analytical representation of inter-atomic contacts will be useful. Improved geometric algorithms will also be helpful in increasing the speed and accuracy of iterative modeling algorithms. RESULTS: We present computational methods based on the Voronoi procedure that provide rapid and exact solutions to solvent accessible surfaces, volumes, and atom contacts within macromolecules. Furthermore, we define a measure of atom-atom contact that is consistent with the calculation of solvent accessible surfaces, allowing the integration of solvent accessibility and inter-atomic contacts into a continuous measure. The speed and accuracy of the algorithm is compared to existing methods for calculating solvent accessible surfaces and volumes. The presented algorithm has a reduced execution time and greater accuracy compared to numerical and approximate analytical surface calculation algorithms, and a reduced execution time and similar accuracy to existing Voronoi procedures for calculating atomic surfaces and volumes.