Similar Articles (20 results)
1.
Several researchers propose that event-related potentials (ERPs) can be explained by a superposition of transient oscillations in certain frequency bands in response to external or internal events. The transient nature of the ERP makes it more suitable to model as a sum of damped sinusoids. These damped sinusoids can be completely characterized by four sets of parameters, namely the amplitude, the damping coefficient, the phase and the frequency. The Prony method is used to estimate these parameters. In this study, the long-latency auditory-evoked potentials (AEP) and the auditory oddball responses (P300) of 10 healthy subjects are analysed by this method. It is shown that the original waveforms can be reconstructed by summing a small number of damped sinusoids, which allows a parsimonious representation of the ERPs. Furthermore, the method shows that the oddball target responses contain higher-amplitude, slower delta and slower damped theta components than the AEPs. With this technique, we show that the differentiation of sensory and cognitive potentials is not inherent in their overall frequency content but in their frequency components in certain bands. Received: 6 October 1997 / Accepted in revised form: 26 February 1998
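A minimal numpy sketch of the damped-sinusoid (Prony) decomposition described above; the model order p, the sampling step dt and the least-squares variant are illustrative assumptions, not the exact implementation used in the study:

```python
import numpy as np

def prony(x, p, dt):
    """Fit x[n] ~ sum_k h_k * z_k**n with p damped complex exponentials (Prony's method).
    Returns the amplitude, damping (1/s), frequency (Hz) and phase (rad) of each component."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # 1) linear-prediction coefficients from a least-squares fit
    A = np.column_stack([x[p - 1 - k : N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, -x[p:N], rcond=None)
    # 2) the poles are the roots of the prediction polynomial
    z = np.roots(np.concatenate(([1.0], a)))
    # 3) complex amplitudes from a Vandermonde least-squares fit
    V = np.vander(z, N, increasing=True).T          # V[n, k] = z_k**n
    h, *_ = np.linalg.lstsq(V, x.astype(complex), rcond=None)
    return np.abs(h), np.log(np.abs(z)) / dt, np.angle(z) / (2 * np.pi * dt), np.angle(h)
```

Summing the fitted components (the real part of the Vandermonde matrix times h) reconstructs the waveform from a small number of damped sinusoids, which is the parsimonious representation referred to above.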

2.
Two related procedures for estimating the parameters of steady-state evoked potentials (SSEPs) are introduced. The first procedure involves an initial stage of digital bandpass filtering followed by a discrete Fourier transform (DFT) analysis. In the second, a high-resolution method based on parametric modelling is applied to the filtered data. The digital pre-filter is a non-phase-shifting Chebyshev bandpass filter. The parametric modelling method treats the evoked-response-plus-noise distribution as a set of exponentially damped sinusoids; the frequency, amplitude, phase and damping factors of these components are estimated by averaging the forward and backward prediction filters and applying linear regression. We compared the signal-to-noise ratio (SNR) of the new procedures with that of the conventional DFT method in Monte Carlo simulations using known sinusoids buried in white noise, known sinusoids buried in human EEG noise, and a sample of visual evoked potential (VEP) data. Both of the new methods produce substantially more accurate and less variable estimates of test sinusoid amplitude. For VEP recording, the EEG background noise level is reduced by 5–6 dB relative to the DFT. The new methods also provide approximately 5 dB better SNR than the DFT for detection of sinusoids based on the Rayleigh statistic. The parametric modelling approach is particularly suited to the analysis of very short data records, including cycle-by-cycle analysis of the SSEP.
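A hedged sketch of the first procedure only (zero-phase Chebyshev band-pass followed by a DFT amplitude/phase estimate at the stimulation frequency); the filter order, ripple and bandwidth are assumed values, and the parametric forward-backward prediction step is not shown:

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def ssep_amp_phase(x, fs, f_stim, band=2.0, order=4, ripple=0.5):
    """Zero-phase (forward-backward) Chebyshev band-pass around f_stim, then the
    amplitude and phase of the DFT bin closest to the stimulation frequency."""
    b, a = cheby1(order, ripple, [f_stim - band / 2, f_stim + band / 2],
                  btype="bandpass", fs=fs)
    y = filtfilt(b, a, x)                        # forward-backward filtering -> no phase shift
    X = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(y), d=1 / fs)
    k = np.argmin(np.abs(freqs - f_stim))
    return 2 * np.abs(X[k]) / len(y), np.angle(X[k])   # single-sided amplitude, phase
```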

3.
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation; the log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded, achieves the minimum variance for an unbiased estimator, and we can compute calibrated estimates of the variance. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
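A minimal sketch of the IBS estimator described above: for each trial, draw from the simulator until the synthetic response matches the observed one, and accumulate the unbiased per-trial estimate of log p. The cap on the number of draws is a practical assumption added here, not part of the basic estimator:

```python
import random

def ibs_loglik(simulate, responses, stimuli, max_samples=10_000):
    """Inverse binomial sampling estimate of the log-likelihood of a data set.
    `simulate(stim)` draws one synthetic response from the model.  If K draws are
    needed until the first match, the unbiased per-trial estimate of log p is
    -sum_{j=1}^{K-1} 1/j."""
    total = 0.0
    for stim, resp in zip(stimuli, responses):
        k = 1
        while simulate(stim) != resp:
            k += 1
            if k > max_samples:              # guard against near-zero likelihoods
                break
        total += -sum(1.0 / j for j in range(1, k))
    return total

# toy usage: a biased-coin "model" with P(heads) = 0.7 and three observed trials
# loglik = ibs_loglik(lambda s: random.random() < 0.7, [True, False, True], [None] * 3)
```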

4.
We have developed a method to estimate foveal visual acuity (VA) through analysis of VEPs. It consists of determining the smallest check size in a pattern-reversal stimulus that elicits a significant cortical response; the VEP is regarded as significant if the P100 amplitude reaches a pre-established signal-to-noise ratio. A criterion for normal VEP-VA emerged from testing 84 emmetropic and ametropic eyes: within our stimulation and recording conditions, a significant VEP response to the reversal of seven-minute checks corresponds to normal foveal acuity. This criterion also proved able to discriminate between normal VAs of 20/20 and decreased VAs (20/40 or less) in four other groups of subjects: 14 adult eyes with 20/20 VA reduced by Bangerter occlusion foils, 32 emmetropic and ametropic eyes of five-year-old children, and 28 emmetropic and ametropic eyes of twelve-year-olds. To guarantee the validity of our results we carried out a double-blind study with ophthalmologists. The relevance of the proposed method is compared with that of extrapolating the regression line between VEP amplitude and pattern element size. Finally, we aimed to establish VEP norms for the maturation of VA, collecting data from the following subjects: 5 infants tested monthly between 1 and 6 months of age, 31 infants ranging in age from 1 to 16 months, 10 five-year-old children, 13 twelve-year-olds, and 11 subjects aged 20. Within our stimulation and recording conditions, a significant evoked response to the reversal of seven-minute checks can be observed from 8 months onward. In an eight-month-old infant, however, this response cannot be equated with the adult response: the latency of the major positive component is longer, and the evoked response consists of fewer components.

5.
A computational method is described that takes an initial estimate of the chemical shifts, line widths and scalar coupling constants for the protons in a molecule, and refines this estimate so as to improve the least-squares fit between an experimental COSY spectrum and the spectrum simulated from these parameters in the weak-coupling approximation. In order to evaluate the potential of such refinements for estimating these parameters from COSY experiments, the method has been applied to a large number of sample problems which were themselves simulated from standard conformations of the amino acids, along with 25 near-native conformations of the protein bovine pancreatic trypsin inhibitor. The results of this evaluation show that: (i) if the chemical shifts are known to within ca. 0.01 ppm and no noise or artifacts are present in the data, the method is capable of recovering the correct coupling constants, starting from essentially arbitrary values, to within 0.1 Hz in almost all cases. (ii) Although the precision of these estimates of the coupling constants is degraded by the limited resolution, noise and artifacts present in most experimental spectra, the large majority of coupling constants can still be recovered to within 1.0 Hz; the local minimum problem is not made significantly worse by such defects in the data. (iii) The method assigns an effective line width to all the resonances, and in the process can resolve overlapping cross peaks. (iv) The method is not capable of determining the chemical shifts a priori, due to the presence of numerous local minima in the least-squares residual as a function of these parameters.

6.
Anderson EC. Genetics 2005, 170(2):955-967
This article presents an efficient importance-sampling method for computing the likelihood of the effective size of a population under the coalescent model of Berthier et al. Previous computational approaches, using Markov chain Monte Carlo, required many minutes to several hours to analyze small data sets. The approach presented here is orders of magnitude faster and can provide an approximation to the likelihood curve, even for large data sets, in a matter of seconds. Additionally, confidence intervals on the estimated likelihood curve provide a useful estimate of the Monte Carlo error. Simulations show the importance sampling to be stable across a wide range of scenarios and show that the Ne estimator itself performs well. Further simulations show that the 95% confidence intervals around the Ne estimate are accurate. User-friendly software implementing the algorithm for Mac, Windows, and Unix/Linux is available for download. Applications of this computational framework to other problems are discussed.

7.
MR fingerprinting (MRF) is an innovative approach to quantitative MRI. A typical disadvantage of dictionary-based MRF is the explosive growth of the dictionary as a function of the number of reconstructed parameters, an instance of the curse of dimensionality that drives a corresponding explosion of resource requirements. In this work, we describe a deep learning approach to MRF parameter-map reconstruction using a fully connected architecture. Employing simulations, we investigated how the performance of the neural network (NN) approach scales with the number of parameters to be retrieved, compared with the standard dictionary approach. We also studied optimal training procedures by comparing different strategies for noise addition and parameter-space sampling, to achieve better accuracy and robustness to noise. Four MRF sequences were considered: IR-FISP, bSSFP, IR-FISP-B1, and IR-bSSFP-B1. A comparison between the NN and dictionary approaches in reconstructing parameter maps as a function of the number of parameters to be retrieved was performed using a numerical brain phantom. Results demonstrated that training with random sampling and different levels of noise variance yielded the best performance. NN performance was at least as good as the dictionary-based approach in reconstructing parameter maps with Gaussian noise as the source of artifacts; the difference in performance increased with the number of estimated parameters because the dictionary method suffers from the coarse resolution of the parameter-space sampling. The NN proved more efficient in memory usage and computational burden, and has great potential for solving large-scale MRF problems.
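For reference, the dictionary baseline that the NN replaces is essentially template matching by normalized inner product. A hedged numpy sketch (array names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, params):
    """Baseline dictionary matching for MR fingerprinting: assign to each measured
    fingerprint the parameters of the dictionary atom with the highest normalized
    inner product.  signals: (n_voxels, n_timepoints), dictionary: (n_atoms,
    n_timepoints), params: (n_atoms, n_params)."""
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    S = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    best = np.argmax(np.abs(S @ D.conj().T), axis=1)    # complex fingerprints allowed
    return params[best]
```

The curse of dimensionality is visible here: the number of dictionary atoms grows as the product of the grid sizes of every parameter, whereas a trained network maps each fingerprint to its parameters at roughly constant cost.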

8.
Pre-exposure prophylaxis (PrEP) is an important pillar of HIV prevention. Because of experimental and clinical shortcomings, mathematical models that integrate pharmacological, viral and host factors are frequently used to quantify the clinical efficacy of PrEP. Stochastic simulation of these models provides sample statistics from which the clinical efficacy is approximated; however, many stochastic simulations are needed to reduce the associated sampling error. To remedy these shortcomings, we developed a numerical method that predicts the efficacy of an arbitrary prophylactic regimen directly from a viral dynamics model, without sampling. We apply the method to various hypothetical dolutegravir (DTG) prophylaxis scenarios and verify the approach against state-of-the-art stochastic simulation. The method is not only more accurate than stochastic simulation but also superior in computational performance: for example, a continuous 6-month prophylactic profile is computed within a few seconds on a laptop computer. The method's computational performance therefore substantially expands the horizon of feasible analysis in the context of PrEP, and possibly other applications.

9.
An important issue in the phylogenetic analysis of nucleotide sequence data using the maximum likelihood (ML) method is the underlying evolutionary model employed. We consider the problem of simultaneously estimating the tree topology and the parameters in the underlying substitution model and of obtaining estimates of the standard errors of these parameter estimates. Given a fixed tree topology and corresponding set of branch lengths, the ML estimates of standard evolutionary model parameters are asymptotically efficient, in the sense that their joint distribution is asymptotically normal with the variance–covariance matrix given by the inverse of the Fisher information matrix. We propose a new estimate of this conditional variance based on estimation of the expected information using a Monte Carlo sampling (MCS) method. Simulations are used to compare this conditional variance estimate to the standard technique of using the observed information under a variety of experimental conditions. In the case in which one wishes to estimate simultaneously the tree and parameters, we provide a bootstrapping approach that can be used in conjunction with the MCS method to estimate the unconditional standard error. The methods developed are applied to a real data set consisting of 30 papillomavirus sequences. This overall method is easily incorporated into standard bootstrapping procedures to allow for proper variance estimation.
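A generic sketch of the two variance estimates contrasted above: the observed information is the negative Hessian of the log-likelihood at the MLE, and the expected (Fisher) information can be approximated by averaging the observed information over data sets simulated at the fitted parameters. The interfaces simulate_data and loglik_for are hypothetical placeholders, and this is not the authors' MCS implementation:

```python
import numpy as np

def observed_information(loglik, theta_hat, eps=1e-4):
    """Negative Hessian of the log-likelihood at theta_hat by central finite differences."""
    p = len(theta_hat)
    H = np.zeros((p, p))
    t0 = np.asarray(theta_hat, dtype=float)
    for i in range(p):
        for j in range(p):
            def f(di, dj):
                t = t0.copy(); t[i] += di; t[j] += dj
                return loglik(t)
            H[i, j] = (f(eps, eps) - f(eps, -eps) - f(-eps, eps) + f(-eps, -eps)) / (4 * eps**2)
    return -H

def mc_expected_information(simulate_data, loglik_for, theta_hat, n_rep=100):
    """Monte Carlo estimate of the expected information: average the observed information
    over data sets simulated at the fitted parameters, then invert for standard errors."""
    infos = [observed_information(loglik_for(simulate_data(theta_hat)), theta_hat)
             for _ in range(n_rep)]
    I = np.mean(infos, axis=0)
    se = np.sqrt(np.diag(np.linalg.inv(I)))      # conditional standard errors
    return I, se
```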

10.
A two-point maximum entropy method (TPMEM) was investigated for post-acquisition signal recovery in magnetoencephalography (MEG) data, as a potential replacement for the low-pass (LP) filtering technique currently in use. We first applied TPMEM and the LP filter to signal recovery of synthetically noise-corrupted MEG "phantom" data sets in which the true underlying signal was known. Results were quantified with visual plots, percent-error histograms, and two statistical parameters: the root-mean-squared error and Pearson's correlation coefficient. Synthetically noise-corrupted data from a simulated magnetic dipole were used to quantify the improvement gained by using TPMEM instead of the LP filter in reconstructing known dipole parameters such as position, orientation, and magnitude. Finally, we applied TPMEM and LP filters to a sample MEG patient data set. Our results show that TPMEM has better noise-reduction and signal-recovery capabilities than the LP filter, and data processed with TPMEM show less error in the reconstructed dipole parameters. We propose that TPMEM can be used for MEG signal processing, resulting in improved MEG source characterization.

11.
We present a new algorithm to estimate the hemodynamic response function (HRF) and drift components of fMRI data in the wavelet domain. The HRF is modeled by both parametric and nonparametric models, and the functional magnetic resonance imaging (fMRI) noise is modeled as fractional Brownian motion (fBm). The HRF parameters are estimated in the wavelet domain by exploiting the property that wavelet transforms with a sufficient number of vanishing moments decorrelate an fBm process. Using this property, the noise covariance matrix in the wavelet domain can be assumed to be diagonal, with entries estimated by the sample variance estimator at each scale. We study the influence of the sampling rate of the fMRI time series and of the assumed HRF shape on estimation performance. Results are presented by adding synthetic HRFs to simulated and null fMRI data. We also compare these methods with an existing method [1], in which correlated fMRI noise is modeled by second-order polynomial functions.
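A small sketch of the diagonal-covariance step described above, using PyWavelets; the wavelet family and decomposition depth are illustrative choices, not those of the paper:

```python
import numpy as np
import pywt

def scale_variances(ts, wavelet="db4", level=4):
    """Estimate the (assumed diagonal) noise covariance in the wavelet domain: wavelets
    with enough vanishing moments approximately decorrelate fractional Brownian motion,
    so one sample variance per detail level suffices."""
    coeffs = pywt.wavedec(ts, wavelet, level=level)
    return [np.var(d, ddof=1) for d in coeffs[1:]]   # detail coefficients, coarse -> fine
```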

12.
Little is known about the impact of the total cavopulmonary connection (TCPC) on resting and exercise hemodynamics in a single-ventricle (SV) circulation. The aim of this study was to elucidate this mechanism using a lumped parameter model of the SV circulation. Pulmonary vascular resistance (1.96 ± 0.80 WU) and systemic vascular resistance (18.4 ± 7.2 WU) were obtained from catheterization data on 40 patients with a TCPC. TCPC resistances (0.39 ± 0.26 WU) were established using computational fluid dynamics simulations conducted on anatomically accurate three-dimensional models reconstructed from MRI (n = 16). These parameters were used in a lumped parameter model of the SV circulation to investigate the impact of TCPC resistance on SV hemodynamics under resting and exercise conditions; a biventricular model was used for comparison. For a biventricular circulation, the dependence of cardiac output (CO) on TCPC resistance was negligible (sensitivity = -0.064 l·min^-1·WU^-1), but not for the SV circulation (sensitivity = -0.88 l·min^-1·WU^-1). The capacity to increase CO with heart rate was also severely reduced for the SV. At a simulated heart rate of 150 beats/min, the SV patient with the highest resistance (1.08 WU) had a significantly lower increase in CO (20.5%) than the SV patient with the lowest resistance (50%) and the normal circulation (119%). This was due to the increased afterload (+35%) and decreased preload (-12%) associated with the SV circulation. In conclusion, TCPC resistance has a significant impact on the resting hemodynamics and exercise capacity of patients with SV physiology.
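A deliberately crude illustration of why TCPC resistance matters in the single-ventricle circulation: with a single pump, the systemic, pulmonary and TCPC resistances lie in series, so cardiac output falls with every Wood unit added. This neglects compliances, venous return and heart-rate effects and is not the authors' lumped parameter model; the driving pressure is an assumed value:

```python
def fontan_cardiac_output(p_drive_mmHg, svr_wu, pvr_wu, r_tcpc_wu):
    """Series-resistance caricature of the Fontan circulation:
    CO (l/min) = driving pressure / (SVR + PVR + R_TCPC), resistances in Wood units."""
    return p_drive_mmHg / (svr_wu + pvr_wu + r_tcpc_wu)

# finite-difference sensitivity of CO to TCPC resistance at the reported mean resistances
base = fontan_cardiac_output(85.0, 18.4, 1.96, 0.39)        # assumed 85 mmHg driving pressure
dco_dr = (fontan_cardiac_output(85.0, 18.4, 1.96, 0.49) - base) / 0.1   # l/min per WU (negative)
```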

13.
14.
Michael Lynch. Genetics 2009, 182(1):295-301
A new generation of high-throughput sequencing strategies will soon lead to the acquisition of high-coverage genomic profiles of hundreds to thousands of individuals within species, generating unprecedented levels of information on the frequencies of nucleotides segregating at individual sites. However, because these new technologies are error prone and yield uneven coverage of alleles in diploid individuals, they also introduce the need for novel methods for analyzing the raw read data. A maximum-likelihood method for the estimation of allele frequencies is developed, eliminating both the need to arbitrarily discard individuals with low coverage and the requirement for an extrinsic measure of the sequence error rate. The resultant estimates are nearly unbiased with asymptotically minimal sampling variance, thereby defining the limits to our ability to estimate population-genetic parameters and providing a logical basis for the optimal design of population-genomic surveys.
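A hedged sketch of maximum-likelihood allele-frequency estimation from error-prone reads in diploids. Genotypes are weighted by Hardy-Weinberg proportions and a single symmetric error rate is assumed; these simplifications are mine and the mixture is not necessarily the exact likelihood of the original method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def allele_freq_mle(k, n):
    """Joint ML estimate of the major-allele frequency p and the per-read error rate e
    from read counts in diploid individuals (k = major-allele reads, n = total reads)."""
    k, n = np.asarray(k), np.asarray(n)

    def neg_loglik(x):
        p, e = x
        g_prob = np.array([p**2, 2 * p * (1 - p), (1 - p)**2])   # MM, Mm, mm (HWE assumed)
        read_p = np.array([1 - e, 0.5, e])                       # P(major-allele read | genotype)
        like = sum(g_prob[g] * binom.pmf(k, n, read_p[g]) for g in range(3))
        return -np.sum(np.log(like + 1e-300))

    res = minimize(neg_loglik, x0=[0.8, 0.01], method="L-BFGS-B",
                   bounds=[(1e-4, 1 - 1e-4), (1e-6, 0.2)])
    return res.x    # (p_hat, e_hat)
```

Because the error rate is estimated jointly with the frequency, no extrinsic error measure is needed and low-coverage individuals still contribute information instead of being discarded.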

15.
Quantitative predictions in the computational life sciences are often based on regression models. The advent of machine learning has led to highly accurate regression models that have gained widespread acceptance. While statistical methods are available to estimate the global performance of regression models on a test or training dataset, it is often not clear how well this performance transfers to other datasets or how reliable an individual prediction is, a fact that often reduces a user's trust in a computational method. In analogy to the concept of an experimental error, we sketch how estimators of individual prediction errors can be used to provide confidence intervals for individual predictions. Two novel statistical methods, named CONFINE and CONFIVE, estimate the reliability of an individual prediction from the local properties of nearby training data. The methods can be applied equally to linear and non-linear regression methods with very little computational overhead. We compare our confidence estimators with other existing confidence and applicability-domain estimators on two biologically relevant problems (MHC-peptide binding prediction and quantitative structure-activity relationship (QSAR) modelling). Our results suggest that the proposed confidence estimators perform comparably to or better than previously proposed estimation methods. Given a sufficient amount of training data, the estimators exhibit error estimates of high quality, and we observed that the quality of the estimated confidence intervals is predictable. We discuss how confidence estimation is influenced by noise, the number of features, and the dataset size. Estimating the confidence in an individual prediction in terms of error intervals is an important step from plain, non-informative predictions towards transparent and interpretable predictions that will help to improve the acceptance of computational methods in the biological community.
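A neighbour-based sketch of the general idea (estimate an individual prediction's error from the model's residuals on nearby training points). This is an illustrative variant, not the published CONFINE or CONFIVE estimators; k and the quantile are assumed values:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_error_interval(x_query, X_train, y_train, predict, k=10, q=0.9):
    """Confidence interval for one prediction from local training-data properties:
    take the k nearest training points, compute the model's absolute residuals there,
    and use their q-quantile as the interval half-width.  X_train, y_train are numpy
    arrays; predict is the fitted model's prediction function."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.atleast_2d(x_query))
    residuals = np.abs(y_train[idx[0]] - predict(X_train[idx[0]]))
    half_width = np.quantile(residuals, q)
    y_hat = predict(np.atleast_2d(x_query))[0]
    return y_hat - half_width, y_hat + half_width
```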

16.
We provide a novel method, DRISEE (duplicate read inferred sequencing error estimation), to assess sequencing quality (alternatively referred to as "noise" or "error") within and/or between sequencing samples. DRISEE provides positional error estimates that can be used to inform read trimming within a sample. It also provides global (whole-sample) error estimates that can be used to identify samples with high or varying levels of sequencing error that may confound downstream analyses, particularly in studies that use data from multiple sequencing samples. For shotgun metagenomic data, we believe that DRISEE provides estimates of sequencing error that are more accurate and less constrained by technical limitations than existing methods that rely on reference genomes or on quality scores (e.g., Phred). Here, DRISEE is applied to non-amplicon data sets from both the 454 and Illumina platforms. The DRISEE error estimate is obtained by analyzing sets of artifactual duplicate reads (ADRs), a known by-product of both sequencing platforms. We present DRISEE as an open-source, platform-independent method to assess sequencing error in shotgun metagenomic data, and use it to reveal previously uncharacterized error in de novo sequence data from the 454 and Illumina sequencing platforms.
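A toy sketch of the duplicate-read idea: reads sharing an identical leading prefix are treated as artifactual duplicates of one template, a per-position consensus is taken as truth, and disagreements give a positional error rate. The prefix length and the simple majority consensus are simplifications of mine, not the published DRISEE pipeline:

```python
from collections import defaultdict, Counter

def duplicate_read_error_profile(reads, prefix_len=20):
    """Positional error estimate from artifactual duplicate reads: group reads by an
    identical leading prefix, take the per-position majority base as consensus, and
    report the fraction of disagreeing bases at each position beyond the prefix."""
    bins = defaultdict(list)
    for r in reads:
        if len(r) > prefix_len:
            bins[r[:prefix_len]].append(r)

    mismatches, totals = Counter(), Counter()
    for dup in bins.values():
        if len(dup) < 2:
            continue                                    # need at least two duplicates
        for pos in range(prefix_len, max(len(r) for r in dup)):
            column = [r[pos] for r in dup if len(r) > pos]
            if len(column) < 2:
                continue
            consensus = Counter(column).most_common(1)[0][0]
            mismatches[pos] += sum(b != consensus for b in column)
            totals[pos] += len(column)

    return {pos: mismatches[pos] / totals[pos] for pos in totals}   # error rate per position
```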

17.
Heiden T, Auer G, Tribukait B. Cytometry 2000, 42(3):196-208
Three major parameters in DNA histograms that contribute to the reliability of S-phase analysis were evaluated: (1) the extent of background relative to the number of S-phase cells (and the validity of its subtraction), (2) the size of the "free" S-phase range (S_free), and (3) the sampling error of cell counting. Tests on histograms obtained from surgical biopsies by flow cytometry (FCM) showed that the background subtraction is reliable if the measured S-phase fraction is higher than the fraction of background events in the histogram range of the cell population. The size of S_free was determined in computer-generated test histograms as a function of variables such as the coefficient of variation (CV) and the DNA index (DI). To calculate the sampling error of cell counting above background and in S_free, a model was developed and validated against experimental data; this error can serve as an indicator of the uncertainty in S-phase analysis. The poor correlation found between %S values measured by image cytometry (ICM) and FCM in surgical biopsies was attributed to the high uncertainty caused by low cell numbers in ICM histograms. A method is proposed to estimate quantitatively the reliability of S-phase analysis, which can facilitate the interpretation of results.
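A minimal sketch of the counting-error component only: after subtracting the background expected in the S-phase window, the uncertainty of the S-phase fraction follows from binomial counting statistics. The numbers in the usage comment are invented for illustration:

```python
import math

def s_phase_counting_error(n_s, n_total, n_background=0):
    """Binomial sampling error of an S-phase fraction estimated by event counting:
    subtract the background expected in the S-phase window, then propagate the
    counting uncertainty.  Illustrates why low cell numbers make %S unreliable."""
    n_signal = max(n_s - n_background, 0)
    frac = n_signal / n_total
    se = math.sqrt(frac * (1 - frac) / n_total)       # binomial standard error
    return frac, se

# e.g. 400 S-phase events out of 10,000 cells with 100 expected background events
# frac, se = s_phase_counting_error(400, 10_000, 100)   # ~3.0% +/- 0.17%
```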

18.
Calculation of mechanical stresses and strains in the left ventricular (LV) myocardium by the finite element (FE) method relies on adequate knowledge of the material properties of myocardial tissue. In this paper, we present a model-based estimation procedure to characterize the stress-strain relationship in passive LV myocardium. A 3D FE model of the LV myocardium was used, which included morphological fiber and sheet structure and a nonlinear orthotropic constitutive law with different stiffness in the fiber, sheet, and sheet-normal directions. The estimation method was based on measured wall strains. We analyzed the method's ability to estimate the material parameters by generating a set of synthetic strain data by simulating the LV inflation phase with known material parameters. In this way we were able to verify the correctness of the solution and to analyze the effects of measurement and model error on the solution accuracy and stability. A sensitivity analysis was performed to investigate the observability of the material parameters and to determine which parameters to estimate. The results showed a high degree of coupling between the parameters governing the stiffness in each direction. Thus, only one parameter in each of the three directions was estimated. For the tested magnitudes of added noise and introduced model errors, the resulting estimated stress-strain characteristics in the fiber and sheet directions converged with good accuracy to the known relationship. The sheet-normal stress-strain relationship had a higher degree of uncertainty as more noise was added and model error was introduced.

19.
To extract full information from samples of DNA sequence data, it is necessary to use sophisticated model-based techniques such as importance sampling under the coalescent. However, these are limited in the size of the datasets they can handle efficiently. Chen and Liu (2000) introduced the idea of stopping-time resampling and showed that it can dramatically improve the efficiency of importance sampling methods under a finite-alleles coalescent model. In this paper, a new framework is developed for designing stopping-time resampling schemes under more general models. It is implemented on data from both infinite-sites and stepwise models of mutation, and extended to incorporate crossover recombination. A simulation study shows that this new framework offers a substantial improvement in the accuracy of likelihood estimation over a range of parameters, whereas a direct application of the scheme of Chen and Liu (2000) can actually degrade the estimate. The method imposes no additional computational burden and is robust to the choice of parameters.

20.
Multiple-copy sampling and the bond scaling-relaxation technique are combined to generate three-dimensional conformations of protein loop segments. The computational efficiency and the sensitivity to the initial loop-copy dispersion are analyzed. The multicopy loop modeling method requires approximately 20-50% of the computational time of the single-copy method for the various protein segments tested. An analytical formula is proposed to estimate the computational gain prior to carrying out a multicopy simulation. When 7-residue loops within flexible proteins are modeled, each multicopy simulation can sample a set of loop conformations with initial dispersions of up to ±15° for backbone and ±30° for side-chain rotatable dihedral angles; the dispersions are larger for shorter loops and smaller for longer and/or surface loops. The degree of convergence of the loop copies during a simulation can be used to complement commonly used target functions (such as the potential energy) for distinguishing between native and misfolded conformations. Furthermore, this convergence also reflects the conformational flexibility of the modeled protein segment. Application to simultaneously building all six hypervariable loops of an antibody is discussed.
