Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Pulsed-laser photoacoustics is a technique which measures photoinduced enthalpic and volumetric changes on the nano- and microsecond timescales. Analysis of photoacoustic data generally requires deconvolution for a sum of exponentials, a procedure which has been developed extensively in the field of time-resolved fluorescence decay. Initial efforts to adapt an iterative nonlinear least squares computer program, utilizing the Marquardt algorithm, from the fluorescence field to photoacoustics indicated that significant modifications were needed. The major problem arises from the wide range of transient decay times which must be addressed by the photoacoustic technique. We describe an alternative approach to numerical convolution with exponential decays, developed to overcome these problems. Instead of using an approximation method (Simpson's rule) for evaluating the convolution integral, we construct a continuous instrumental response function by quadratic fitting of the discrete data and evaluate the convolution integral directly, without approximations. The success and limitations of this quadratic-fit convolution program are then demonstrated using simulated data. Finally, the program is applied to the analysis of experimental data to compare the resolution capabilities of two commercially available transducers. The advantages of a broadband, heavily damped transducer are shown for a standard organic photochemical system, the quenching of the triplet state of benzophenone by 2,5-dimethyl-2,4-hexadiene.
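To make the quadratic-fit idea concrete: since integrals of t^n * exp(t/tau) have closed-form antiderivatives, a piecewise-quadratic instrument response can be convolved with an exponential decay exactly, with no quadrature rule such as Simpson's. The sketch below is a hypothetical minimal implementation, not the authors' program; the time axis, IRF shape, and lifetime are all assumed for illustration.

```python
import numpy as np

def seg_integrals(t0, t1, tau, T):
    """Exact integrals of {1, t, t^2} * exp((t - T)/tau) over [t0, t1]."""
    def F(t):
        e = tau * np.exp((t - T) / tau)            # exponent <= 0 for t <= T
        return np.array([e, e * (t - tau), e * (t*t - 2*tau*t + 2*tau*tau)])
    return F(t1) - F(t0)

def convolve_exact(t, irf, tau):
    """C(T) = int_0^T I(s) exp(-(T - s)/tau) ds with I piecewise quadratic."""
    out = np.zeros_like(t)
    for k in range(0, len(t) - 2, 2):              # one parabola per sample triple
        p2, p1, p0 = np.polyfit(t[k:k+3], irf[k:k+3], 2)
        for j, T in enumerate(t):
            hi = min(t[k + 2], T)
            if hi <= t[k]:
                continue                           # segment not yet inside [0, T]
            i0, i1, i2 = seg_integrals(t[k], hi, tau, T)
            out[j] += p0*i0 + p1*i1 + p2*i2
    return out

# Example: Gaussian-shaped instrument response convolved with a 50 ns decay
t = np.linspace(0.0, 200.0, 201)                   # ns (all values assumed)
irf = np.exp(-0.5 * ((t - 30.0) / 5.0) ** 2)
signal = convolve_exact(t, irf, tau=50.0)
```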

2.
In this paper we consider cell cycle models for which the transition operator for the evolution of the birth mass density is a simple linear dynamical system with a stochastic perturbation. The convolution model for a birth mass distribution is presented. Density functions of birth mass and tail probabilities in the n-th generation are calculated by a saddle-point approximation method. These tail probabilities, which represent the probability of exceeding an acceptable mass value, give more control over pathological growth. A computer simulation is presented for cell proliferation in the age-dependent cell cycle model. The simulation exploits the fact that the age-dependent model with linear growth is a simple linear dynamical system with an additive stochastic perturbation. The simulated data, as well as experimental data (generation times for mouse L), are fitted by the proposed convolution model.
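The abstract gives no formulas, so the following is only a generic saddle-point density approximation, f_hat(x) = exp(K(s) - s*x) / sqrt(2*pi*K''(s)) with the saddle point s solving K'(s) = x, demonstrated on a Gamma surrogate whose exact density is known. The shape parameter and test point are assumed for illustration.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import gamma

# Cumulant generating function of Gamma(a, scale=1): K(s) = -a*log(1 - s), s < 1
a = 5.0
K  = lambda s: -a * np.log1p(-s)
K1 = lambda s: a / (1.0 - s)           # K'(s)
K2 = lambda s: a / (1.0 - s)**2        # K''(s)

def saddlepoint_pdf(x):
    s = brentq(lambda s: K1(s) - x, -50.0, 1.0 - 1e-10)   # solve K'(s) = x
    return np.exp(K(s) - s*x) / np.sqrt(2.0*np.pi*K2(s))

x = 7.0
print(saddlepoint_pdf(x), gamma.pdf(x, a))   # ~0.0928 vs exact 0.0912: close
```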

3.
Likelihood analysis for regression models with measurement errors in explanatory variables typically involves integrals that do not have a closed-form solution. In this case, numerical methods such as Gaussian quadrature are generally employed. However, when the dimension of the integral is large, these methods become computationally demanding or even infeasible. This paper proposes the use of the Laplace approximation to deal with measurement error problems when the likelihood function involves high-dimensional integrals. The cases considered are generalized linear models with multiple covariates measured with error and generalized linear mixed models with measurement error in the covariates. The asymptotic order of the approximation and the asymptotic properties of the Laplace-based estimator for these models are derived. The method is illustrated using simulations and real-data analysis.
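The core device is standard: log of int exp(h(u)) du is approximated by h(u_hat) + (d/2)*log(2*pi) - 0.5*log|det(-H)|, where u_hat maximizes h and H is its Hessian. A minimal sketch (not the paper's estimator), checked on a Gaussian integrand where the approximation is exact:

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_integral(h, grad_h, u0, eps=1e-5):
    """log of int exp(h(u)) du over R^d by the Laplace approximation."""
    res = minimize(lambda u: -h(u), u0, jac=lambda u: -grad_h(u), method="BFGS")
    u_hat, d = res.x, len(u0)
    H = np.zeros((d, d))                       # Hessian by central differences
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        H[:, i] = (grad_h(u_hat + e) - grad_h(u_hat - e)) / (2.0 * eps)
    _, logdet = np.linalg.slogdet(-H)
    return h(u_hat) + 0.5*d*np.log(2.0*np.pi) - 0.5*logdet

# Sanity check: h(u) = -0.5 u'Au  =>  integral is exactly 2*pi/sqrt(det A)
A = np.diag([2.0, 3.0])
h      = lambda u: -0.5 * u @ A @ u
grad_h = lambda u: -A @ u
approx = np.exp(laplace_log_integral(h, grad_h, np.array([1.0, 1.0])))
exact  = 2.0*np.pi / np.sqrt(np.linalg.det(A))
print(approx, exact)                           # both ~2.565
```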

4.
An in-vivo experimental technique was employed to determine the linear and nonlinear viscoelastic properties of the spinal cord of anesthetized cats. The stress relaxation and recovery curves were reproducible across a group of cat experiments. The linear viscoelastic data were used to develop a power-law model based on Boltzmann's convolution (hereditary) integral. The model was capable of predicting prolonged stress relaxation and recovery curves. For larger deformations, the results were quantified using a nonlinear analysis of the viscoelastic response of the spinal cord under uniaxial loading.
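What makes power-law kernels convenient is that Boltzmann's hereditary integral is analytic for simple loading histories. A minimal sketch for a ramp-and-hold strain history; the material parameters are assumed, not taken from the paper:

```python
import numpy as np

def ramp_hold_stress(t, G0, n, rate, t1):
    """Stress from Boltzmann's hereditary integral,
        sigma(t) = int_0^t G(t - s) deps/ds ds,
    for a power-law kernel G(t) = G0 * t**(-n), 0 < n < 1, and a strain ramp
    at constant `rate` up to t1 followed by a hold. For this history the
    convolution integral has a closed form."""
    t = np.asarray(t, dtype=float)
    up = np.minimum(t, t1)                 # strain only changes on [0, t1]
    return G0 * rate * (t**(1 - n) - (t - up)**(1 - n)) / (1 - n)

# Hypothetical parameters: a 0.5 s ramp, then prolonged relaxation
t = np.linspace(0.01, 100.0, 500)
sigma = ramp_hold_stress(t, G0=1.0, n=0.1, rate=0.02, t1=0.5)
```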

5.
Considerable effort in instrument development has made possible the detection of picosecond fluorescence lifetimes by time-correlated single-photon counting. In particular, efforts have been made to markedly narrow the instrument response function (IRF). Less attention has been paid to analytical methods, especially to the problem of discretization of the convolution integral, on which the detection and quantification of short lifetimes critically depend. We show that better discretization methods can yield acceptable results for short lifetimes even with an IRF several times wider than necessary for the standard discretization based on linear approximation (LA). A general approach to discretization, also suitable for nonexponential models, is developed. The zero-time shift is explicitly included. Using simulations, we compared LA, quadratic, and cubic approximations. The latter two proved much better for the detection of short lifetimes and in that respect differ little, except when the zero-time shift exceeds two channels, where the cubic approximation is advantageous. We show that for LA, narrowing the IRF beyond FWHM = 150 ps is in some cases actually counterproductive. This is not so for the quadratic and cubic approximations, which we recommend for general use.

6.
A Fourier method for the analysis of exponential decay curves.
A method based on the Fourier convolution theorem is developed for the analysis of data composed of random noise, plus an unknown constant "base line," plus a sum of (or an integral over a continuous spectrum of) exponential decay functions. The Fourier method's usual serious practical limitation of needing high-accuracy data over a very wide range is eliminated by the introduction of convergence parameters and a Gaussian taper window. A computer program is described for the analysis of discrete spectra, where the data consist only of a sum of exponentials. The program is completely automatic in that the only necessary input is the raw data (not necessarily in equal intervals of time); no potentially biased initial guesses concerning either the number or the values of the components are needed. The outputs include the number of components, the amplitudes and time constants together with their estimated errors, and a spectral plot of the solution. The limiting resolving power of the method is studied by analyzing a wide range of simulated two-, three-, and four-component data. The results indicate that the method is applicable over a considerably wider range of conditions than nonlinear least squares or the method of moments.
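The paper's full algorithm is more elaborate than the abstract states; the sketch below shows only the two named ingredients in generic form, deconvolution via the convolution theorem stabilized by a Gaussian taper window. Signal shapes, the kernel, and the taper width are all assumed, and circular-convolution edge effects are ignored.

```python
import numpy as np

def fourier_deconvolve(d, k, dt, f_cut):
    """Estimate f from d = (k * f)(t) via the convolution theorem; the
    spectral division is regularized and tapered by a Gaussian window
    (a 'convergence parameter' in the spirit of the abstract)."""
    n = len(d)
    D, K = np.fft.rfft(d), np.fft.rfft(k)
    freq = np.fft.rfftfreq(n, dt)
    taper = np.exp(-0.5 * (freq / f_cut) ** 2)
    F = D * np.conj(K) / (np.abs(K)**2 + 1e-12) * taper
    return np.fft.irfft(F, n) / dt

# Example: two-exponential decay blurred by a narrow Gaussian kernel
dt = 0.01
t = np.arange(0.0, 10.0, dt)
f = np.exp(-t/0.5) + 0.3*np.exp(-t/3.0)
k = np.exp(-0.5*((t - 1.0)/0.05)**2)
d = np.convolve(f, k)[:len(t)] * dt               # forward blur (edges ignored)
f_hat = fourier_deconvolve(d, k, dt, f_cut=5.0)   # ~f, low-pass filtered by taper
```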

7.
Stress relaxation (or, equivalently, creep) allows a large range of the relaxation (retardation) spectrum of materials to be examined, particularly at lower frequencies. However, higher-frequency components of the relaxation curves (typically of the order of hertz) are attenuated because of the finite time taken to strain the specimen. This higher-frequency information can be recovered by deconvolution of the stress and strain during the loading period. This paper examines three separate deconvolution techniques: numerical (Fourier) deconvolution, semi-analytical deconvolution using a theoretical form of the strain, and deconvolution by a linear approximation method. Both theoretical data (where the exact form of the relaxation function is known) and experimental data were used to assess the accuracy and applicability of the methods. All of the techniques produced a consistent improvement in the higher-frequency data up to frequencies of the order of hertz, with the linear approximation method showing better resolution in the high-frequency analysis of the theoretical data. When applied to experimental data, all three techniques gave similar results. Deconvolution of the stress and strain during loading is a simple and practical method for recovering higher-frequency data from stress-relaxation experiments.

8.
In many bioanalytical measurements, the volume of data to be processed, the complexity of the signal waveforms, or the sheer signal speed can overwhelm microcontrollers, analog electronic circuits, or even PCs. One way to obtain results in real time is to apply a digital signal processor (DSP) to the analysis or processing of measurement data. In this paper we show how DSP-supported multiply-and-accumulate (MAC) operations, such as time/frequency transformations, pattern recognition by correlation, convolution, or filter algorithms, can optimize the processing of bioanalytical data. Discrete integral calculations are applied to the acquisition of impedance values as part of multi-parametric sensor chips, to pH monitoring using light-addressable potentiometric sensors (LAPS), and to the analysis of rapidly changing signal shapes, such as action potentials of cultured neuronal networks, as examples of DSP capability.
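The MAC operation is the primitive behind every algorithm listed above. As a pure-Python illustration (not DSP firmware), here is an FIR filter written as the explicit accumulate loop a DSP would execute in hardware:

```python
import numpy as np

def fir_mac(x, h):
    """FIR filter as the multiply-accumulate (MAC) loop a DSP executes:
    y[n] = sum_k h[k] * x[n-k], one MAC per tap per sample."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0                          # DSP accumulator register
        for k in range(min(n + 1, len(h))):
            acc += h[k] * x[n - k]         # a single-cycle MAC on a DSP
        y[n] = acc
    return y

# 8-tap moving average; the inner loop is what maps onto hardware MAC units
x = np.random.randn(1000)
h = np.ones(8) / 8.0
y = fir_mac(x, h)
assert np.allclose(y, np.convolve(x, h)[:len(x)])
```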

9.
Z. Chen, A. Basarab, D. Kouamé. IRBM, 2018, 39(1):26-34
The recently proposed framework of ultrasound compressive deconvolution offers the possibility of reducing the amount of acquired data while improving image spatial resolution. By combining compressive sampling and image deconvolution, the direct model of compressive deconvolution couples random projections with 2D convolution by a spatially invariant point spread function. Assuming the point spread function is known, existing algorithms have shown the ability of this framework to reconstruct enhanced ultrasound images from compressed measurements by inverting the forward linear model. In this paper, we propose an extension of the previous approach to compressive blind deconvolution, whose aim is to jointly estimate the ultrasound image and the system point spread function. The performance of the method is evaluated on both simulated and in vivo ultrasound data.
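The paper's contribution is the blind extension; the sketch below covers only the known-PSF baseline it builds on, reduced to 1D with hypothetical sizes and PSF, and solved by plain ISTA rather than the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 96                                    # signal length, #measurements

# Sparse reflectivity, modulated Gaussian PSF, forward model y = Phi (h * x)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
tt = np.arange(-16, 17)
h = np.exp(-0.5*(tt/3.0)**2) * np.cos(0.8*tt)     # hypothetical, known PSF
conv = lambda v: np.convolve(v, h, mode="same")
corr = lambda v: np.convolve(v, h[::-1], mode="same")   # adjoint of conv
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random projections
y = Phi @ conv(x_true) + 0.01*rng.standard_normal(m)

A  = lambda v: Phi @ conv(v)                      # forward operator
At = lambda r: corr(Phi.T @ r)                    # its adjoint

# Largest eigenvalue of A'A by power iteration gives a safe ISTA step size
v = rng.standard_normal(n)
for _ in range(50):
    v = At(A(v)); v /= np.linalg.norm(v)
L = np.linalg.norm(At(A(v)))

step, lam = 1.0/L, 0.01
x = np.zeros(n)
for _ in range(500):                              # ISTA: gradient + soft threshold
    z = x - step * At(A(x) - y)
    x = np.sign(z) * np.maximum(np.abs(z) - step*lam, 0.0)
```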

10.
I. Fontaine, M. Bertrand, G. Cloutier. Biophysical Journal, 1999, 77(5):2387-2399
A system-based model is proposed to describe and simulate the ultrasound signal backscattered by red blood cells (RBCs). The model is a space-invariant linear system that takes into account the stochastic scattering properties of biological tissue as well as the characteristics of the ultrasound system. The formation of the ultrasound signal is described by a convolution integral involving a transducer transfer function, a scatterer prototype function, and a function representing the spatial arrangement of the scatterers. The RBCs are modeled as nonaggregating spherical scatterers, and the spatial distribution of the RBCs is determined using the Percus-Yevick packing factor. Computer simulations of the model are used to study the power backscattered by RBCs as a function of the hematocrit, the volume of the scatterers, and the frequency of the incident wave (2-500 MHz). Good agreement is obtained between the simulations and theoretical and experimental data for both Rayleigh and non-Rayleigh scattering conditions. In addition, renewal process theory is proposed to model the spatial arrangement of the scatterers. The study demonstrates that the system-based model accurately predicts important characteristics of the ultrasound signal backscattered by blood. The model is simple and flexible, and it appears superior to previous one- and two-dimensional simulation studies.
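A stripped-down 1D version of such a convolution model is easy to write down: a pulse, a random point-scatterer arrangement, and the hard-sphere Percus-Yevick packing factor W(H) = (1 - H)^4 / (1 + 2H)^2. Every numerical value below is assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f0 = 500e6, 10e6                    # sampling rate and pulse frequency (assumed)
t = np.arange(0.0, 4e-6, 1.0/fs)

# Transducer transfer function: a Gaussian-enveloped sinusoidal pulse
tp = np.arange(-0.4e-6, 0.4e-6, 1.0/fs)
pulse = np.exp(-0.5*(tp/0.08e-6)**2) * np.sin(2*np.pi*f0*tp)

# Spatial arrangement of point scatterers along the beam (as arrival times)
pos = rng.uniform(0.0, t[-1], 2000)
arrangement = np.zeros_like(t)
np.add.at(arrangement, np.searchsorted(t, pos), 1.0)

rf = np.convolve(arrangement, pulse, mode="same")   # simulated RF backscatter

# Percus-Yevick packing factor for identical hard spheres: relative power
H = np.linspace(0.01, 0.6, 60)                      # hematocrit
W = (1 - H)**4 / (1 + 2*H)**2
power = H * W                                       # peaks near H ~ 0.13
```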

11.
Maps depicting cancer incidence rates have become useful tools in public health research, giving valuable information about the spatial variation in rates of disease. Typically, these maps are generated using count data aggregated over areas such as counties or census blocks. However, with the proliferation of geographic information systems and related databases, it is becoming easier to obtain exact spatial locations for the cancer cases and suitable control subjects. The use of such point data allows us to adjust for individual-level covariates, such as age and smoking status, when estimating the spatial variation in disease risk. Unfortunately, such covariate information is often subject to missingness. We propose a method for mapping cancer risk when covariates are not completely observed. We model these data using a logistic generalized additive model. Estimates of the linear and non-linear effects are obtained using a mixed effects model representation. We develop an EM algorithm to account for missing data and the random effects. Since the expectation step involves an intractable integral, we estimate the E-step with a Laplace approximation. This framework provides a general method for handling missing covariate values when fitting generalized additive models. We illustrate our method through an analysis of cancer incidence data from Cape Cod, Massachusetts. These analyses demonstrate that standard complete-case methods can yield biased estimates of the spatial variation of cancer risk.

12.
This study analyzes data available in the literature, together with the author's findings, concerning the shape of the dose-effect curve for stochastic effects in the range of low-level radiation (LLR). Data from radioepidemiological and experimental investigations are used. The arguments for and against approximating these curves by a linear function (the linear no-threshold concept) or by a quasi-plateau (the threshold concept) are also considered. The analysis suggests that the threshold concept is more reliable than the no-threshold one from the standpoint of its postulates, theoretical paradigms, the mechanisms of radiobiological effects, and the epidemiological and experimental data. It is suggested that separate radiogenic cancer risk estimates should be used for LLR and for high-level radiation, instead of a single overall estimate from the linear no-threshold model.

13.
Two methods of deriving linear selection indices for nonlinear profit functions have been proposed. One is linear approximation of profit; the other is the graphical method of Moav and Hill (1966). When profit is defined as a function of population means, the graphical method is optimal. In this paper, profit is defined as a function of the phenotypic values of individual animals; it is then shown that the graphical method is not generally optimal. We propose new methods for constructing selection indices. First, a numerical method equivalent to the graphical method is proposed. We then propose two further methods using quadratic approximation of profit: one based on a Taylor series about the means before selection, and the other based on a Taylor series about the means after selection. Among these methods, the one using a quadratic approximation based on the Taylor series about means after selection is shown to be the most efficient.

14.
Distribution-free regression analysis of grouped survival data
Methods based on regression models for logarithmic hazard functions (Cox models) are given for the analysis of grouped and censored survival data. By making an approximation, it is possible to obtain explicitly a maximum likelihood function involving only the regression parameters. This likelihood function is a convenient analog to Cox's partial likelihood for ungrouped data. The method is applied to data from a toxicological experiment.
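One standard way to fit a grouped proportional-hazards model of this kind (though not necessarily the authors' exact approximation) is as a binomial GLM with a complementary log-log link, one record per subject per interval at risk. A sketch on synthetic toxicology-style data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Grouped proportional hazards: conditional on being at risk in interval j,
# P(event) = 1 - exp(-exp(alpha_j + x'beta))  ->  binomial GLM, cloglog link.
rng = np.random.default_rng(5)
n = 300
dose = rng.uniform(0.0, 2.0, n)
t_event = rng.exponential(1.0 / np.exp(0.8 * dose))   # true beta = 0.8
cuts = [0.0, 0.5, 1.0, 2.0, np.inf]                   # grouping intervals

rows = []
for i in range(n):
    for j in range(4):
        if t_event[i] <= cuts[j]:
            break                                      # no longer at risk
        died = cuts[j] < t_event[i] <= cuts[j + 1]
        rows.append({"interval": j, "dose": dose[i], "died": int(died)})
df = pd.DataFrame(rows)

fit = smf.glm("died ~ C(interval) + dose", data=df,
              family=sm.families.Binomial(link=sm.families.links.CLogLog())
              ).fit()
print(fit.params["dose"])          # close to the true log-hazard ratio 0.8
```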

15.
The vestibular evoked myogenic potential (VEMP) can be modeled (scaling factors aside) as a convolution of the motor unit action potential (MUAP) of a representative motor unit, h(t), with the temporal modulation of the MUAP rate of all contributing motor units, r(t). Accordingly, the variance modulation associated with the VEMP can be modeled as a convolution of r(t) with the square of h(t). To get a deeper theoretical understanding of the VEMP phenomenon, a specific realization of this general model is investigated here. Both r(t) and h(t) were derived from a Gaussian probability density function (in the latter case taking the first derivative). The resulting model turned out to be simple enough to be evaluated analytically in the time and in the frequency domain, while still being realistic enough to account for the basic aspects of the VEMP generation. Perhaps the most significant conclusion of this study is that, in the case of noisy data, it may be difficult to falsify the hypothesis of a rate modulation of infinitesimal duration. Thus, certain aspects of the data (particularly the peak amplitudes) can be interpreted using a short-modulation approximation rather than the general model. The importance of this realization arises from the fact that the approximation offers an exceptionally simple and convenient way for a model-based interpretation of experimental data, whereas any attempt to use the general model for that purpose would result in an ill-posed inverse problem that is far from easy to solve.
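The stated model is directly computable: with h(t) the first derivative of a Gaussian and r(t) a Gaussian rate modulation, the mean waveform is r * h and the variance modulation is r * h^2. A numerical sketch; the widths, latencies, and rate amplitude are assumed, not the paper's values:

```python
import numpy as np

def gauss(t, mu, sig):
    return np.exp(-0.5*((t - mu)/sig)**2) / (sig*np.sqrt(2.0*np.pi))

dt = 1e-4
t = np.arange(0.0, 0.06, dt)                   # 60 ms record

# h(t): MUAP as a Gaussian first derivative; r(t): Gaussian rate modulation
sig_h, sig_r = 1.5e-3, 2.0e-3                  # widths (assumed)
h = -(t - 0.01)/sig_h**2 * gauss(t, 0.01, sig_h)
r = 80.0 * gauss(t, 0.015, sig_r)              # modulation of the MUAP rate

vemp     = np.convolve(r, h,    mode="full")[:len(t)] * dt   # mean waveform
variance = np.convolve(r, h**2, mode="full")[:len(t)] * dt   # variance modulation
```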

16.
The usefulness of fluorescence techniques for the study of macromolecular structure and dynamics depends on the accuracy and sensitivity of the methods used for data analysis. Many methods for data analysis have been proposed and used, but little attention has been paid to the maximum likelihood method, generally regarded as the most powerful statistical method for parameter estimation. In this paper we study the properties and behavior of maximum likelihood estimates using simulated fluorescence intensity decay data. We show that the maximum likelihood method generally provides more accurate estimates of lifetimes and fractions than the standard least-squares approach, especially when the lifetime ratios between individual components are small. Three novelties for the field of fluorescence decay analysis are also introduced and studied: (a) discretization of the convolution integral based on the generalized integral mean value theorem; (b) the likelihood ratio test as a tool to determine the number of exponential decay components in a given decay profile; and (c) separability and detectability indices, which provide measures of how accurately a particular decay component can be detected. Based on the experience gained from this and from our previous study of the Padé-Laplace method, we make some recommendations on how the complex problem of deconvolution and parameter estimation of multiexponential functions might be approached in an experimental setting.
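Photon counts are Poisson distributed, which is what separates the maximum likelihood fit from least squares. A minimal sketch for a two-exponential decay, omitting the IRF convolution; all amplitudes, lifetimes, and channel settings are assumed for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
t = np.arange(0.0, 25.0, 0.05)                     # channel times in ns (assumed)

def model(logp):
    a1, tau1, a2, tau2 = np.exp(logp)              # log-params stay positive
    return a1*np.exp(-t/tau1) + a2*np.exp(-t/tau2) + 1.0   # +1 background

counts = rng.poisson(model(np.log([800.0, 0.8, 200.0, 3.0])))

def nll(logp):                                     # Poisson negative log-likelihood
    mu = model(logp)
    return np.sum(mu - counts*np.log(mu))

res = minimize(nll, x0=np.log([500.0, 0.5, 500.0, 2.0]), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-9, "xatol": 1e-9})
a1, tau1, a2, tau2 = np.exp(res.x)                 # ~800, 0.8, 200, 3.0
# The likelihood-ratio test for the number of components compares
# 2*(nll of best one-exponential fit - nll of best two-exponential fit)
# against a chi-square reference.
```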

17.
The Ornstein-Uhlenbeck process with a constant forcing function has often been used as a model for the subthreshold membrane potential of a neuron. The mean, variance, and coefficient of variation of the first passage time to a constant threshold are examined for this model in the limit of small synaptic noise and low thresholds. A comparison is made between the asymptotic results of Wan & Tuckwell, who used perturbation analysis, and several computationally simpler approximation methods. A generalization of Stein's method gives an overestimate of the mean interval, while an approximation by a Wiener process with linear drift gives an underestimate. These bounds are simple to calculate and can be used as a prelude to a more detailed perturbation analysis.
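The Wiener lower bound is easy to check against direct simulation. A Monte Carlo sketch with assumed parameters in the low-threshold regime (the Stein-type upper bound is omitted here):

```python
import numpy as np

rng = np.random.default_rng(3)
tau, mu, sigma, theta = 20.0, 0.6, 0.4, 10.0   # ms / mV scale, all assumed
dt, n_trials = 0.01, 2000

def first_passage():
    """Euler-Maruyama simulation of dV = (-V/tau + mu) dt + sigma dW
    until V first reaches the threshold theta."""
    v, t = 0.0, 0.0
    while v < theta:
        v += (-v/tau + mu)*dt + sigma*np.sqrt(dt)*rng.standard_normal()
        t += dt
    return t

T = np.array([first_passage() for _ in range(n_trials)])
print(T.mean(), T.std()/T.mean())              # mean interval and CV

# A Wiener process with the same drift mu (leak ignored) crosses sooner on
# average, illustrating why it underestimates the mean interval:
print(theta / mu)                              # ~16.7 vs a larger OU mean
```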

18.
The need to solve linear and nonlinear integral equations arises, e.g., in recovering plasma parameters from the data of multichannel diagnostics. The paper presents an iterative method for solving integral equations with a singularity at the upper limit of integration. The method consists in constructing successive approximations and evaluating the integral by quadrature formulas on each integration interval. An example is presented in which the iterative algorithm is applied to the numerical solution of an integral equation similar to those arising in recovering the plasma density profile from reflectometry data.
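To illustrate the same two ingredients, successive approximations plus a quadrature that handles the (x - t)^(-1/2) singularity exactly on each panel, here is a sketch for a generic second-kind equation with a known solution. The equation and its parameters are assumed, not the paper's:

```python
import numpy as np

def picard_singular(f, a, x, n_iter=40):
    """Successive approximations for u(x) = f(x) + a*int_0^x u(t)/sqrt(x-t) dt.
    Each panel integral of (x - t)^(-1/2) is evaluated in closed form
    (product integration), so the integrable singularity at t = x is harmless."""
    u = f(x).copy()
    for _ in range(n_iter):
        v = f(x).copy()
        for i in range(1, len(x)):
            # exact panel weights: 2*(sqrt(x_i - x_j) - sqrt(x_i - x_{j+1}))
            w = 2.0*(np.sqrt(x[i] - x[:i]) - np.sqrt(x[i] - x[1:i+1]))
            v[i] += a * np.sum(w * 0.5*(u[:i] + u[1:i+1]))
        u = v
    return u

# Test problem with known solution u(x) = 1: choose f(x) = 1 - 2a*sqrt(x)
a = 0.5
x = np.linspace(0.0, 1.0, 201)
f = lambda s: 1.0 - 2.0*a*np.sqrt(s)
u = picard_singular(f, a, x)      # converges to ~1 (within discretization error)
```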

19.
The leaky integrate-and-fire model for neuronal spiking driven by a periodic stimulus is studied using the Fokker-Planck formulation. To this end, essential use is made of the asymptotic behavior of the first-passage-time probability density function of a time-homogeneous diffusion process through an asymptotically periodic threshold. Numerical comparisons with some recently published results derived by a different approach are performed. A new asymptotic approximation is then used to design a numerical algorithm of predictor-corrector type that solves the integral equation for the unknown first-passage-time probability density function. This algorithm, characterized by reduced (linear) computation time, provides high accuracy. Finally, it is shown that the approach yields excellent approximations to the firing probability density function over a wide range of parameters, including high stimulus frequencies.

20.
Fitting piecewise linear regression functions to biological responses
An iterative approach was developed for fitting piecewise linear functions to nonrectilinear responses of biological variables. The algorithm estimates the parameters of the two (or more) regression functions and the separation point(s) (thresholds, sensitivities) by statistical approximation. Because it is often unknown whether the response of a biological variable is adequately described by one rectilinear regression function or by piecewise linear regression function(s) with separation point(s), an F test is proposed to determine whether one regression line is the optimal fitted function. A FORTRAN-77 program has been developed for estimating the optimal parameters and the coordinates of the separation point(s). A few data sets illustrating this kind of problem in the analysis of thermoregulation, osmoregulation, and neuronal responses are discussed.
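The same idea in compact form: grid-search the separation point, fit a continuous two-segment line at each candidate, and compare against a single line with an F test. A Python sketch rather than the FORTRAN-77 program, on synthetic data; note the F reference distribution is only approximate when the breakpoint is estimated:

```python
import numpy as np
from scipy import stats

def fit_piecewise(x, y):
    """Grid-search the separation point c; fit the continuous two-segment
    model y = b0 + b1*x + b2*max(x - c, 0) by least squares at each c."""
    best = None
    for c in x[2:-2]:                              # keep points on both sides
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(np.sum((y - X @ beta)**2))
        if best is None or rss < best[0]:
            best = (rss, c, beta)
    return best

def f_test_one_line(x, y):
    """F test of one straight line against the best two-segment fit. The
    reference distribution is approximate because c itself is estimated."""
    X1 = np.column_stack([np.ones_like(x), x])
    b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss1 = float(np.sum((y - X1 @ b1)**2))
    rss2, c, _ = fit_piecewise(x, y)
    df2 = len(x) - 4                               # b0, b1, b2 and c estimated
    F = ((rss1 - rss2)/2.0) / (rss2/df2)
    return F, stats.f.sf(F, 2, df2), c

# Synthetic threshold response (true separation point at x = 25)
rng = np.random.default_rng(4)
x = np.linspace(0.0, 50.0, 40)
y = 1.0 + 0.02*x + 0.15*np.maximum(x - 25.0, 0.0) + rng.normal(0.0, 0.1, x.size)
F, p, c_hat = f_test_one_line(x, y)
```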
