Similar Articles
20 similar articles found.
1.
Wiener filtering of averaged evoked potentials yields the estimate of the evoked response with the least mean-square error. However, this error depends on the choice of interstimulus intervals, a dependence that earlier applications of the Wiener filter did not take into account. In this paper, the Wiener filter is derived with respect to the concrete choice of interstimulus intervals.
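For orientation, here is a minimal sketch of the classical a posteriori Wiener filter for an average of n sweeps, with signal and noise spectra estimated from the data themselves; the interval-dependent correction derived in the paper is not reproduced here.

```python
import numpy as np

def a_posteriori_wiener(sweeps):
    # sweeps: (n, T) array of single-trial evoked responses (signal + noise)
    n, T = sweeps.shape
    avg = sweeps.mean(axis=0)
    # Periodogram of the average ~ S(f) + N(f)/n; mean single-trial
    # periodogram ~ S(f) + N(f). Solve for the two spectra.
    P_avg = np.abs(np.fft.rfft(avg)) ** 2 / T
    P_one = np.mean(np.abs(np.fft.rfft(sweeps, axis=1)) ** 2, axis=0) / T
    N_hat = np.clip((P_one - P_avg) * n / (n - 1), 0.0, None)
    S_hat = np.clip(P_avg - N_hat / n, 0.0, None)
    H = S_hat / (S_hat + N_hat / n + 1e-12)      # Wiener transfer function
    return np.fft.irfft(np.fft.rfft(avg) * H, n=T)
```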

2.
The visualization of volume maps obtained by electron tomographic reconstruction is severely hampered by noise. As electron tomography is usually applied to individual, non-repeatable structures, e.g., cell sections or cell organelles, the noise cannot be removed by averaging, as is done implicitly in electron crystallography or explicitly in single-particle analysis. In this paper, an approach to noise reduction is presented, based on a multiscale transformation, e.g., the wavelet transformation, in conjunction with nonlinear filtering of the transform coefficients. After a brief introduction to the theoretical background, the effect of this type of noise reduction is demonstrated by test calculations as well as by applications to tomographic reconstructions of ice-embedded specimens. Regarding noise reduction and structure preservation, the method turns out to be superior to conventional filter techniques such as the median filter or the Wiener filter. Results obtained with different types of multiscale transformations are compared, and the choice of suitable filter parameters is discussed.
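As an illustration of the general approach (in one dimension for brevity, rather than on a 3-D tomogram), here is a minimal wavelet soft-thresholding sketch using the PyWavelets package; the wavelet choice, decomposition level, and universal threshold are generic assumptions, not the filter parameters discussed in the paper.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale from the finest detail coefficients (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(signal.size))   # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: signal.size]
```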

3.
A method is described for testing the predictability of impulse responses from responses to Gaussian-distributed random stimulation by means of reverse-correlation analysis. In addition, this analysis is tested as to whether it can handle responses of nonlinear systems to random inputs of strongly limited frequency content, as is often the case in data from physiological experiments. The basis for all computation is a simple backward averaging (peri-spike averaging, 1st-order PSA) of the noise input triggered from the output pulsatile events, which was extended to two-dimensional peri-spike averaging (2nd-order PSA). These functions were shown to represent the 1st- and 2nd-order Wiener kernels and were used to calculate the 1st- and 2nd-order response predictions to a given short random test sequence. Different models of impulse-initiating mechanisms were tested for how nonlinearities are expressed in these PSAs. The output impulse densities of the test sequence (the observed response) could be fairly well approximated by the result of the computations (the predicted response). The difference between observation and prediction was evaluated and expressed as the mean least-squares error. In some of the data the 2nd-order kernel seems sufficient to account for the major nonlinear component; in others, kernels of order higher than two are needed.
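A minimal sketch of the 1st-order PSA and the corresponding 1st-order prediction, assuming a discretized stimulus and known spike sample indices; the scaling of the kernel (division by the input noise power) is omitted, and the toy spike generator is a hypothetical stand-in for a physiological model.

```python
import numpy as np

def peri_spike_average(stimulus, spike_idx, lags=100):
    # 1st-order PSA: average the stimulus segments preceding each spike.
    # Up to a scale factor (the input noise power, omitted here), this
    # approximates the time-reversed 1st-order Wiener kernel.
    segs = [stimulus[i - lags:i] for i in spike_idx if i >= lags]
    return np.asarray(segs).mean(axis=0)

def first_order_prediction(stimulus, psa):
    # 1st-order response prediction: convolve the kernel with the input.
    return np.convolve(stimulus, psa[::-1], mode="same")

# Toy usage: white-noise input, spikes drawn from a rectified filtered drive
rng = np.random.default_rng(0)
x = rng.normal(0, 1, 10_000)
drive = np.convolve(x, np.exp(-np.arange(50) / 10.0), mode="same")
spikes = np.flatnonzero(rng.random(x.size) < 0.01 * np.clip(drive, 0, None))
kernel = peri_spike_average(x, spikes)
y_pred = first_order_prediction(x, kernel)
```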

4.
The ability to estimate motor-unit propagation velocity correctly using different two-channel methods for delay estimation and different non-invasive spatial filters was analysed by simulation. It was established that longitudinal double-difference electrodes are not necessarily a better choice than simple bipolar parallel electrodes. Spatial filtering with a new multi-electrode (taking the difference between signals detected by two transversal double-difference electrodes positioned along the muscle fibres) promises to give the best estimate. Delay estimation between reference points is preferable to estimation based on the cross-correlation technique, which is considerably sensitive to the fundamental properties of the muscle-fibre extracellular fields. Preliminary averaging and approximation of the appropriate parts of the signals around chosen reference points can reduce the greater noise sensitivity and the effects of local tissue inhomogeneities, as well as eliminate the sampling problem. A correct estimate of the propagation velocity can be impossible even for motor units that are not very deep (15 or 10 mm, depending on the spatial filter used) with relatively long (about 120 mm) muscle fibres. For fibres whose end-plates are located asymmetrically with respect to the fibre ends, the propagation-velocity estimates can be additionally biased above the longer semi-length of the motor-unit fibres.
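For reference, a sketch of the cross-correlation delay estimator that the study uses as a baseline, with parabolic sub-sample refinement; the sampling rate, electrode spacing, and sign convention are assumptions of this toy version.

```python
import numpy as np

def propagation_velocity(ch1, ch2, fs_hz, spacing_mm):
    # Delay between two detection channels via cross-correlation, refined by
    # parabolic interpolation around the peak. Positive delay means ch1 lags
    # ch2; the sign convention depends on electrode order along the fibre.
    a = ch1 - ch1.mean()
    b = ch2 - ch2.mean()
    xc = np.correlate(a, b, mode="full")
    k = int(np.argmax(xc))
    if 0 < k < xc.size - 1:                      # sub-sample refinement
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    delay_s = (k - (len(b) - 1)) / fs_hz         # assumes a nonzero delay
    return (spacing_mm / 1000.0) / delay_s       # metres per second
```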

5.
Is it better to design a classifier and estimate its error on the full sample or to design a classifier on a training subset and estimate its error on the holdout test subset? Full-sample design provides the better classifier; nevertheless, one might choose holdout with the hope of better error estimation. A conservative criterion to decide the best course is to aim at a classifier whose error is less than a given bound. Then the choice between full-sample and holdout designs depends on which possesses the smaller expected bound. Using this criterion, we examine the choice between holdout and several full-sample error estimators using covariance models and a patient-data model. Full-sample design consistently outperforms holdout design. The relation between the two designs is revealed via a decomposition of the expected bound into the sum of the expected true error and the expected conditional standard deviation of the true error.
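A small Monte Carlo sketch of the comparison, using linear discriminant analysis on a hypothetical Gaussian covariance model; cross-validation stands in for the full-sample error estimators examined in the paper, and a very large sample serves as a proxy for the true error.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)

def sample(n):
    # Two Gaussian classes, unit variance, mean shift 1 (hypothetical model)
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, 5)) + y[:, None] * 1.0
    return X, y

X, y = sample(60)
X_big, y_big = sample(100_000)               # proxy for the true error

# Full-sample design: train on all points, estimate error by cross-validation
clf_full = LinearDiscriminantAnalysis().fit(X, y)
true_err_full = 1 - clf_full.score(X_big, y_big)
cv_est = 1 - cross_val_score(clf_full, X, y, cv=5).mean()

# Holdout design: train on half, estimate error on the held-out half
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf_hold = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
true_err_hold = 1 - clf_hold.score(X_big, y_big)
holdout_est = 1 - clf_hold.score(X_te, y_te)

print(f"full-sample: true={true_err_full:.3f}  cv estimate={cv_est:.3f}")
print(f"holdout:     true={true_err_hold:.3f}  holdout estimate={holdout_est:.3f}")
```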

6.
Modeling vital rates improves estimation of population projection matrices
Population projection matrices are commonly used by ecologists and managers to analyze the dynamics of stage-structured populations. Building projection matrices from data requires estimating transition rates among stages, a task that often entails estimating many parameters with few data. Consequently, large sampling variability in the estimated transition rates increases the uncertainty in the estimated matrix and in quantities derived from it, such as the population multiplication rate and the sensitivities of matrix elements. Here, we propose a strategy to avoid overparameterized matrix models. This strategy involves fitting models to the vital rates that determine the matrix elements, evaluating both these models and ones that estimate matrix elements individually via information-criterion model selection, and combining competing models by multimodel averaging. We illustrate this idea with data from a population of Silene acaulis (Caryophyllaceae), and conduct a simulation to investigate the statistical properties of the matrices estimated in this way. The simulation shows that, compared with estimating matrix elements individually, building population projection matrices by fitting and averaging models of vital-rate estimates can reduce the statistical error in the population projection matrix and in quantities derived from it.
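A minimal sketch of the basic object involved: building a stage-structured projection matrix from (hypothetical) vital rates and extracting the population multiplication rate as the dominant eigenvalue; the model-selection and averaging machinery of the paper is not reproduced.

```python
import numpy as np

# Hypothetical 3-stage matrix from vital rates: survival s, growth g, fecundity f
s = np.array([0.5, 0.8, 0.95])   # stage-specific survival
g = np.array([0.3, 0.1])         # probability of advancing a stage, given survival
f = 4.0                          # fecundity of the final stage

A = np.array([
    [s[0] * (1 - g[0]), 0.0,                f],
    [s[0] * g[0],       s[1] * (1 - g[1]),  0.0],
    [0.0,               s[1] * g[1],        s[2]],
])

eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmax(eigvals.real)].real   # population multiplication rate
print(f"lambda = {lam:.3f}")
```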

7.
Saccadic averaging is the phenomenon whereby two simultaneously presented retinal inputs result in a saccade whose endpoint lies at an intermediate position between the two stimuli. Recordings from neurons in the deeper layers of the superior colliculus have revealed neural correlates of saccade averaging, indicating that it takes place at this level or upstream. Recently, we proposed a neural network for internal feedback in saccades. This model differs from others in that it suggests the possibility that averaging takes place in a stage upstream of the colliculus. The network consists of output units representing the neural map of the deeper layers of the superior colliculus and hidden layers imitating areas in the posterior parietal cortex. The deeper layers of the superior colliculus represent the motor error of a desired saccade, e.g., an eye movement to a visual target. In this article we show that averaging is an emergent property of the proposed network. When two retinal targets with different intensities are presented simultaneously to the network, the activity in the output layer represents a single motor error with a weighted-average value. Our goal is to understand the mechanism of weighted averaging in this neural network. It appears that averaging in the model is caused by the linear dependence of the net input received by the hidden units on retinal error, independent of its retinal coding format. For non-normalized retinal error inputs, the nonlinearity between the net input and the activity of the hidden units also plays a role in the averaging process. The averaging properties of the model agree with physiological experiments if the hypothetical retinal error input map is normalized. The neural network predicts that if this normalization is overruled by electrical stimulation, averaging still takes place. However, in this case, as a consequence of the feedback task, the location of the resulting saccade depends on the initial eye position and on the total intensity/current applied at the two locations. This could be a way to verify the neural network model. If the assumptions of the model are valid, a physiological implication of this paper is that averaging of saccades takes place upstream of the superior colliculus.

8.
The spike interval histogram, a commonly used tool for the analysis of neuronal spike trains, is evaluated as a statistical estimator of the probability density function (pdf) of interspike intervals. Using a mean-square-error criterion, it is concluded that a Parzen convolution estimate of the pdf is superior to the conventional histogram procedure. The Parzen estimate using a Gaussian weighting function reduces the number of intervals required to achieve a given error by a factor of 5–10. The Parzen estimation procedure has been implemented in the sequential interval histogram (SQIH) procedure for the analysis of non-stationary spike trains. Segments of the spike train are defined using a moving window and the pdf for each segment is estimated sequentially. The procedure we have found most practical is interactive with the user and utilizes the theoretical results of the error analysis as guidelines for the evolution of an estimation strategy. The SQIH procedure appears useful both as a criterion for stationarity and as a means to characterize non-stationary activity. Portions of this work were presented at the Symposium on Computer Technology in Neuroscience Research, West Virginia University Medical Center, Morgantown, West Virginia, USA, April 1975.
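A minimal sketch of the Parzen estimate with a Gaussian weighting function, applied to synthetic interspike intervals; the bandwidth here is an arbitrary assumption, whereas the paper derives guidelines for it from the error analysis.

```python
import numpy as np

def parzen_isi_pdf(intervals, grid, bandwidth):
    # Parzen estimate with a Gaussian weighting function: a smoothed
    # alternative to the interval histogram.
    d = (grid[:, None] - intervals[None, :]) / bandwidth
    w = np.exp(-0.5 * d**2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return w.mean(axis=1)

# Synthetic interspike intervals (seconds) and an evaluation grid
isi = np.random.default_rng(1).exponential(0.02, size=500)
t = np.linspace(0.0, 0.1, 200)
pdf = parzen_isi_pdf(isi, t, bandwidth=0.003)   # bandwidth chosen ad hoc here
```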

9.
We introduce a method for processing visual evoked potentials based on a Wiener filter algorithm applied to a small number of consecutive responses. The transfer function of the filter is obtained by taking into account both the average of 99 sweeps (as an estimate of the true signal) and the EEG signal just before stimulus onset (as an estimate of the noise superimposed on each individual response). The process acts as a sweep-by-sweep filter (optimal in the mean-square-error sense) that takes account of possible non-stationarities of the EEG signal over a complete clinical procedure. The average of a small number of consecutive filtered sweeps reveals variations in the morphology of the evoked responses that produce a change in the principal latencies. Applications are foreseen in neurophysiological studies of visual evoked potential responses, and in the clinic, where it is important to evaluate adaptive mechanisms, dynamic changes in single groups of visual evoked potentials, and cognitive responses.

10.
A package of methods and an experimental set-up for the analysis of the dynamics of electrical signals from the brain is presented. The methods described and discussed in this study allow detailed and multipurpose analysis of brain potentials in both the time and frequency domains. Special emphasis is given to a new computer method introduced in this study: a posteriori selective averaging. The selective averaging method is compared with Wiener-filter estimation of evoked potentials.

11.

Background

An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system, and the estimation of the risk caused by the measurement error.

Methodology/Principal Findings

Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error, and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. The optimal sampling time intervals can then be defined as those that minimize a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure-time and measurement-error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals.
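A toy Monte Carlo sketch of the idea for a single critical-failure mode, with hypothetical failure-time, detection, and error parameters; the paper's state-vector and transition-matrix formulation is far more general, and counting a whole cycle as failed is a deliberately coarse approximation.

```python
import numpy as np

# Hypothetical parameters for one critical-failure mode
mtbf_h = 500.0        # mean time between failures, hours
p_detect = 0.9        # probability the QC rule detects the failure state
p_unacceptable = 0.8  # P(error > allowable limit | failure state)

def residual_risk(interval_h, n_runs=5000, rng=np.random.default_rng(0)):
    bad = total = 0
    for _ in range(n_runs):
        t_fail = rng.exponential(mtbf_h)
        undetected = False
        t = 0.0
        for _ in range(10):                 # ten QC cycles per run
            t += interval_h
            if undetected or t_fail < t:
                bad += p_unacceptable       # coarse: whole cycle counted as failed
                undetected = rng.random() > p_detect
                if not undetected:          # detected -> repair, new failure clock
                    t_fail = t + rng.exponential(mtbf_h)
            total += 1
    return bad / total

for h in (8, 24, 72):
    print(f"interval {h:3d} h  ->  rr = {residual_risk(h):.4f}")
```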

Conclusions/Significance

It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system so as to sustain an acceptable residual risk at the minimum QC-related cost. The optimization requires reliability analysis of the analytical system and risk analysis of the measurement error.

12.
The routine assignment of error rates (confidence intervals) to Poisson-distribution estimates of plankton abundance should be rejected. Besides being pseudoreplicative, the interval estimation procedure is not robust to common violations of its assumptions. Because the spatial dispersion of organisms in sampling units, from the counting chamber to the field, is rarely random, and because counting protocols are usually terminated once a count threshold has been equalled or exceeded, Poisson-based estimates are usually derived from sampling non-Poisson distributions. Computer simulation was used to investigate the quantitative consequences of such estimates. The expected mean error rate of 95% confidence intervals is inflated from 5% to 15% as contagion increases, i.e., as the parametric variance-to-mean ratio increases from 1 to 2. Moreover, count-threshold termination of the counting protocol both biases the estimate of the parametric mean (or total) and alters expected mean error rates, especially if the total count is low (<100 organisms) and the mean density in the sampling unit is low.
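A small simulation in the spirit of the paper: negative binomial counts with a variance-to-mean ratio of 2 stand in for contagiously dispersed organisms, and the realized error rate of the nominal 95% Poisson interval (normal approximation) is measured.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, vm_ratio, n_sim = 50.0, 2.0, 20_000   # true mean count, variance-to-mean ratio

# Negative binomial counts with var = vm_ratio * mu mimic contagious dispersion
r = mu / (vm_ratio - 1.0)                 # NB size parameter: var = mu + mu**2 / r
p = r / (r + mu)
counts = rng.negative_binomial(r, p, n_sim)

# Nominal 95% Poisson interval for a single count x (normal approximation)
lo = counts - 1.96 * np.sqrt(counts)
hi = counts + 1.96 * np.sqrt(counts)
error_rate = np.mean((mu < lo) | (mu > hi))
print(f"nominal error rate 5%, realized {100 * error_rate:.1f}%")
```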

13.
Quantitative predictions in computational life sciences are often based on regression models. The advent of machine learning has led to highly accurate regression models that have gained widespread acceptance. While statistical methods are available to estimate the global performance of regression models on a test or training dataset, it is often unclear how well this performance transfers to other datasets or how reliable an individual prediction is, a fact that often reduces a user's trust in a computational method. In analogy to the concept of an experimental error, we sketch how estimators of individual prediction errors can be used to provide confidence intervals for individual predictions. Two novel statistical methods, named CONFINE and CONFIVE, can estimate the reliability of an individual prediction based on the local properties of nearby training data. The methods can be applied equally to linear and non-linear regression methods with very little computational overhead. We compare our confidence estimators with other existing confidence and applicability-domain estimators on two biologically relevant problems (MHC-peptide binding prediction and quantitative structure-activity relationship (QSAR)). Our results suggest that the proposed confidence estimators perform comparably to, or better than, previously proposed estimation methods. Given a sufficient amount of training data, the estimators yield error estimates of high quality. In addition, we observed that the quality of the estimated confidence intervals is predictable. We discuss how confidence estimation is influenced by noise, the number of features, and the dataset size. Estimating the confidence in individual predictions in terms of error intervals represents an important step from plain, non-informative predictions towards transparent and interpretable predictions that will help to improve the acceptance of computational methods in the biological community.
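The published CONFINE/CONFIVE definitions are not given in the abstract; the following is an illustrative analogue of confidence estimation from the local properties of nearby training data, using the residuals of the k nearest neighbours as an empirical error distribution.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def local_error_interval(model, X_train, y_train, x_query, k=15, alpha=0.1):
    # Empirical error interval from residuals of the k nearest training points
    # (an illustrative analogue, not the published CONFINE/CONFIVE estimators).
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.atleast_2d(x_query))
    resid = y_train[idx[0]] - model.predict(X_train[idx[0]])
    y_hat = model.predict(np.atleast_2d(x_query))[0]
    lo, hi = np.quantile(resid, [alpha / 2, 1 - alpha / 2])
    return y_hat + lo, y_hat + hi

# Toy usage on synthetic linear data with noise
rng = np.random.default_rng(4)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.3, 200)
model = LinearRegression().fit(X, y)
print(local_error_interval(model, X, y, X[0]))
```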

14.
In order for cells to respond to environmental cues, they must be able to measure ambient ligand concentration. Concentrations fluctuate, however, because of thermal noise, and one can readily show that estimates based on concentration values at a particular moment will be subject to substantial error. Cells are therefore expected to average their estimates over some limited time period. In this paper we assume that a cell uses fractional receptor occupancy as a measure of ambient ligand concentration and develop general expressions for the error a cell makes because the length of the averaging period is necessarily limited. Our analysis is general, relaxing many of the assumptions underlying the seminal work of Berg and Purcell. The most important formal difference is our inclusion of occupancy-dependent dissociation, a phenomenon that has been well documented for many systems. In addition, our formulation permits signal averaging to begin before chemical equilibrium has been established, and it allows binding kinetics to be nonlinear (i.e., bimolecular rather than pseudo-first-order). The results are applied to spatial and temporal concentration gradients. In particular, we estimate the minimum averaging times required for cells to detect such gradients under typical in vitro conditions. These estimates involve assigning numerical values to receptor-ligand rate constants. If the rate constants are at their maximum possible values (limited only by center-of-mass diffusion), then either temporal or spatial gradients can be detected in minutes or less. If, however, as suggested by experiments, the rate constants are several orders of magnitude below their diffusion-limited values, then under typical constant-gradient conditions the time required to detect a spatial gradient is prohibitively long, whereas temporal gradients can still be detected in reasonable lengths of time. This result was obtained for large cells such as lymphocytes, as well as for the smaller bacterial cells. The ratio of averaging times for the two mechanisms, amounting to several orders of magnitude, is well beyond what could be reconciled by limitations of the calculation, and strongly suggests heavy reliance on temporal sensing mechanisms under typical in vitro conditions with constant spatial gradients.
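A back-of-envelope sketch of the diffusion-limited bound in the Berg–Purcell spirit, with hypothetical numbers; the occupancy-dependent and kinetics-limited cases treated in the paper require the full analysis.

```python
import numpy as np

# Diffusion-limited Berg-Purcell scaling: after averaging for time T, the
# fractional RMS error of a concentration estimate scales as (D*a*c*T)**-0.5,
# so resolving a fractional difference `delta` needs roughly
# T > 1 / (D * a * c * delta**2). All numbers below are hypothetical.
D = 1e-9                       # ligand diffusion coefficient, m^2/s
a = 1e-6                       # cell (receptor array) radius, m
c = 1e-6 * 6.022e23 * 1e3      # 1 uM converted to molecules per m^3
delta = 0.1                    # 10% concentration difference to resolve

T_min = 1.0 / (D * a * c * delta**2)
print(f"diffusion-limited minimum averaging time ~ {T_min:.2e} s")
```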

15.
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable-energy project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. The method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of the power production or the payback time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.
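A minimal sketch of the propagation step, using linear interpolation on a hypothetical power curve instead of the paper's Lagrange fits. Note that the propagated error depends strongly on the operating point (steep mid-curve versus the flat rated region), so a single aggregate figure such as the reported 5% reflects averaging over the wind-speed distribution.

```python
import numpy as np

# Hypothetical turbine power curve: wind speed (m/s) -> power (kW)
v_pts = np.array([3, 5, 7, 9, 11, 13, 15], dtype=float)
p_pts = np.array([0, 120, 480, 1100, 1800, 2200, 2300], dtype=float)

def power(v):
    return np.interp(v, v_pts, p_pts)

def power_error(v, rel_speed_err=0.10, dv=0.01):
    # First-order propagation: sigma_P ~ |dP/dv| * sigma_v
    dPdv = (power(v + dv) - power(v - dv)) / (2 * dv)   # numerical slope
    return dPdv * rel_speed_err * v                     # absolute error, kW

v = 8.0
print(f"P({v}) = {power(v):.0f} kW, error ~ {power_error(v):.0f} kW "
      f"({100 * power_error(v) / power(v):.1f}% for a 10% speed error)")
```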

16.
The filtering of averaged potentials in the case of periodic stimulation is discussed in this paper. Considerations proving the correctness of the Wiener filtering process suggested by Doyle are presented. The basic assumption of the proof is an approximation of the noise power density by means of a step function.

17.
A statistical analysis of digital a posteriori Wiener filtering (a.p.w.f.) as applied to time-averaging techniques for biological signals is presented. The authors show that when a.p.w.f. is applied to the averaged signal hardly any effect can be expected, whereas when applied to the individual responses, a.p.w.f. improves the signal-to-noise ratio. The analysis leads to a simple test to check whether a prescribed frequency component is present.

18.
The Wiener filter is a standard means of optimizing the signal in sums of aligned, noisy images obtained by electron cryo-microscopy (cryo-EM). However, estimation of the resolution-dependent (“spectral”) signal-to-noise ratio (SSNR) from the input data has remained problematic, and error reduction due to specific application of the SSNR term within a Wiener filter has not been reported. Here we describe an adjustment to the Wiener filter for optimal summation of images of isolated particles surrounded by large regions of featureless background, as is typically the case in single-particle cryo-EM applications. We show that the density within the particle area can be optimized, in the least-squares sense, by scaling the SSNR term found in the conventional Wiener filter by a factor that reflects the fraction of the image field occupied by the particle. We also give related expressions that allow the SSNR to be computed for application in this new filter, by incorporating a masking step into a Fourier Ring Correlation (FRC), a standard resolution measure. Furthermore, we show that this masked FRC estimation scheme substantially improves on the accuracy of conventional SSNR estimation methods. We demonstrate the validity of our new approach in numeric tests with simulated data corresponding to realistic cryo-EM imaging conditions. This variation of the Wiener filter and accompanying derivation should prove useful for a variety of single-particle cryo-EM applications, including 3D reconstruction.
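A sketch of the two ingredients, under stated assumptions: the SSNR is derived from a half-set FRC via the standard relation SSNR = 2·FRC/(1 − FRC) (the masking step is omitted here), and the Wiener-style weight scales the SSNR term by the particle fraction f, which is one plausible reading of the adjustment described above, not the paper's exact derivation.

```python
import numpy as np

def radial_index(shape):
    # Integer radial spatial-frequency index for every pixel of a 2-D FFT
    ny, nx = shape
    fy = np.fft.fftfreq(ny)[:, None] * ny
    fx = np.fft.fftfreq(nx)[None, :] * nx
    return np.round(np.hypot(fy, fx)).astype(int)

def frc(img1, img2):
    # Fourier Ring Correlation between two (pre-masked) half-set averages
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    r = radial_index(img1.shape).ravel()
    num = np.bincount(r, (F1 * np.conj(F2)).real.ravel())
    d1 = np.bincount(r, (np.abs(F1) ** 2).ravel())
    d2 = np.bincount(r, (np.abs(F2) ** 2).ravel())
    return num / np.sqrt(d1 * d2 + 1e-30)

def wiener_weights(frc_curve, particle_fraction):
    # SSNR of the half-set average from the FRC, then a Wiener-style weight
    # with the SSNR term scaled by the particle fraction f (assumed form)
    ssnr = np.clip(2.0 * frc_curve / (1.0 - frc_curve + 1e-9), 0.0, None)
    s = particle_fraction * ssnr
    return s / (s + 1.0)
```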

19.
This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link-choice proportions, and the dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model incorporating a dynamic dispersion parameter is developed, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are applied iteratively until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows is used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and to produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of the methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior.
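A toy version of the iteration for a single OD pair and two routes, with hypothetical costs and a fixed dispersion parameter; the paper estimates theta jointly and uses a full GLS/SUE formulation over the whole network.

```python
import numpy as np

# Hypothetical two-route, single-OD-pair toy problem
counts = np.array([300.0, 150.0])      # observed link counts (one link per route)
free_cost = np.array([10.0, 12.0])     # free-flow route costs
theta = 0.1                            # dispersion parameter (fixed in this sketch)

p = np.array([0.5, 0.5])               # initial route-choice proportions
for _ in range(100):
    # GLS step (scalar demand, identity weights): x minimizing ||counts - p*x||^2
    x = (p @ counts) / (p @ p)
    # SUE-style step: congested costs from the assigned flows update proportions
    cost = free_cost + 0.01 * p * x
    p_new = np.exp(-theta * cost)
    p_new /= p_new.sum()
    if np.max(np.abs(p_new - p)) < 1e-10:
        break
    p = p_new

print(f"estimated OD demand = {x:.1f}, route proportions = {np.round(p, 3)}")
```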

20.
Error propagation and scaling for tropical forest biomass estimates
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha; 1 ha = 10⁴ m²) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass.
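A Monte Carlo sketch of propagating the first two error terms (tree measurement and allometric-model error) to the plot level; the allometric coefficients, error magnitudes, and diameter distribution are hypothetical placeholders, not the values estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical log-log allometry ln(AGB) = a + b*ln(D) and its residual SD
a, b, resid_sd = -2.0, 2.5, 0.3
D = rng.uniform(10, 80, 500)          # measured diameters (cm) in one plot

def plot_agb(D, n_draw=2000):
    d_err = rng.normal(0, 0.02, (n_draw, D.size))       # ~2% measurement error
    model_err = rng.normal(0, resid_sd, (n_draw, D.size))
    agb = np.exp(a + b * np.log(D * (1 + d_err)) + model_err)  # kg per tree
    totals = agb.sum(axis=1) / 1000.0                   # plot total, Mg
    return totals.mean(), totals.std()

mean, sd = plot_agb(D)
print(f"plot AGB = {mean:.0f} +/- {sd:.0f} Mg ({100 * sd / mean:.1f}%)")
```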
