Similar Articles
20 similar articles found.
1.
Traditional approaches to the problem of parameter estimation in biophysical models of neurons and neural networks usually adopt a global search algorithm (for example, an evolutionary algorithm), often in combination with a local search method (such as gradient descent), in order to minimize the value of a cost function, which measures the discrepancy between various features of the available experimental data and model output. In this study, we approach the problem of parameter estimation in conductance-based models of single neurons from a different perspective. By adopting a hidden-dynamical-systems formalism, we expressed parameter estimation as an inference problem in these systems, which can then be tackled using a range of well-established statistical inference methods. The particular method we used was Kitagawa's self-organizing state-space model, which we applied to a number of Hodgkin-Huxley-type models using simulated or actual electrophysiological data. We showed that the algorithm can be used to estimate a large number of parameters, including maximal conductances, reversal potentials, kinetics of ionic currents, and measurement and intrinsic noise, based on low-dimensional experimental data and sufficiently informative priors in the form of pre-defined constraints imposed on model parameters. The algorithm remained operational even when very noisy experimental data were used. Importantly, by combining the self-organizing state-space model with an adaptive sampling algorithm akin to the Covariance Matrix Adaptation Evolution Strategy, we achieved a significant reduction in the variance of parameter estimates. The algorithm did not require the explicit formulation of a cost function, and it was straightforward to apply to compartmental models and multiple data sets. Overall, the proposed methodology is particularly suitable for resolving high-dimensional inference problems based on noisy electrophysiological data and is therefore a potentially useful tool in the construction of biophysical neuron models.
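The core idea of the self-organizing state-space approach — augmenting the hidden dynamical state with the unknown parameters and letting a sequential Monte Carlo filter infer both jointly — can be illustrated on a toy one-dimensional system. The sketch below is not the authors' Hodgkin-Huxley implementation; the model, noise levels, and particle count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_t = theta * x_{t-1} + process noise,  y_t = x_t + measurement noise.
theta_true, n_steps = 0.8, 200
x = np.zeros(n_steps)
for t in range(1, n_steps):
    x[t] = theta_true * x[t - 1] + rng.normal(0, 0.5)
y = x + rng.normal(0, 1.0, n_steps)            # noisy observations

# Self-organizing state-space idea: augment the state with the unknown parameter
# and let a bootstrap particle filter estimate both jointly.
n_particles = 5000
particles_x = rng.normal(0, 1, n_particles)
particles_theta = rng.uniform(0.0, 1.0, n_particles)   # prior constraint on theta

for t in range(1, n_steps):
    # Artificial parameter evolution (small jitter keeps the parameter cloud alive).
    particles_theta += rng.normal(0, 0.01, n_particles)
    # Propagate the dynamical state.
    particles_x = particles_theta * particles_x + rng.normal(0, 0.5, n_particles)
    # Weight by the likelihood of the current observation and resample.
    w = np.exp(-0.5 * ((y[t] - particles_x) / 1.0) ** 2)
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    particles_x, particles_theta = particles_x[idx], particles_theta[idx]

print(f"true theta = {theta_true}, estimate = {particles_theta.mean():.3f}")
```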

2.
Kernel density smoothing techniques have been used in classification or supervised learning of gene expression profile (GEP) data, but their applications to clustering or unsupervised learning of those data have not been explored and assessed. Here we report a kernel density clustering method for analysing GEP data and compare its performance with the three most widely used clustering methods: hierarchical clustering, K-means clustering, and multivariate mixture model-based clustering. Using several methods to measure agreement, between-cluster isolation, and within-cluster coherence, such as the Adjusted Rand Index, the Pseudo F test, the R² test, and the profile plot, we have assessed the effectiveness of kernel density clustering for recovering clusters, and its robustness against noise, when clustering both simulated and real GEP data. Our results show that the kernel density clustering method has excellent performance in recovering clusters from simulated data and in grouping large real expression profile data sets into compact and well-isolated clusters, and that it is the most robust clustering method for analysing noisy expression profile data compared to the other three methods assessed.
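The abstract does not specify the kernel density clustering algorithm itself, but the general idea — assigning points to modes of a kernel density estimate — can be sketched with scikit-learn's MeanShift (a kernel-based mode-seeking clusterer) and compared with K-means using the Adjusted Rand Index; the simulated blobs and bandwidth settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import MeanShift, KMeans, estimate_bandwidth
from sklearn.metrics import adjusted_rand_score

# Simulated "expression profiles": three noisy clusters in a low-dimensional space.
X, labels_true = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=0)

# Kernel-density-style clustering: MeanShift assigns points to density modes.
bandwidth = estimate_bandwidth(X, quantile=0.2)
ms_labels = MeanShift(bandwidth=bandwidth).fit_predict(X)

# Reference method for comparison.
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("Adjusted Rand Index, mean shift:", adjusted_rand_score(labels_true, ms_labels))
print("Adjusted Rand Index, k-means:  ", adjusted_rand_score(labels_true, km_labels))
```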

3.
As plankton biologists ask more detailed questions of necessarily sparse and noisy spatial data, the need for well-founded methods for statistical analysis of such data grows. This note examines the utility of constrained thin-plate smoothing splines as a tool for inferring underlying spatial distribution functions from sparse noisy data. Constrained thin-plate splines are described in a straightforward manner. An economical method of calculation is suggested, which sacrifices mathematical optimality for ease of computation. Using simulated data, several methods for choosing the complexity of the inferred distribution function are compared and robustness to large-amplitude noise is examined. Confidence intervals are calculated and tested. The method is applied to egg data from Dover sole (Solea solea) in the Bristol Channel.
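As a rough illustration of the idea (without the constraints discussed in the paper), a smoothed thin-plate spline surface can be fitted to sparse, noisy spatial samples with SciPy's RBFInterpolator; the sample layout and smoothing value below are assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Sparse, noisy samples of an underlying spatial density (e.g. egg counts over an area).
pts = rng.uniform(0, 10, size=(60, 2))
true_vals = np.exp(-((pts[:, 0] - 5) ** 2 + (pts[:, 1] - 5) ** 2) / 8.0)
obs = true_vals + rng.normal(0, 0.1, len(pts))

# Thin-plate spline with a smoothing penalty; larger `smoothing` gives a simpler surface.
spline = RBFInterpolator(pts, obs, kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the inferred distribution function on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = spline(grid).reshape(gx.shape)
print(surface.shape)  # (50, 50) smoothed spatial field
```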

4.
MOTIVATION: Mass spectrometry (MS) is increasingly being used for biomedical research. The typical analysis of MS data consists of several steps. Feature extraction is a crucial step, since subsequent analyses are performed only on the detected features. Current methodologies applied to low-resolution MS, in which features are peaks or wavelet functions, are parameter-sensitive and inaccurate in the sense that peaks and wavelet functions do not directly correspond to the underlying molecules under observation. In high-resolution MS, the model-based approach is more appealing, as it can provide a better representation of the MS signals by incorporating information about peak shapes and isotopic distributions. Current model-based techniques are computationally expensive; various algorithms have been proposed to improve the computational efficiency of this paradigm. However, these methods cannot deal well with overlapping features, especially when they are merged into one broad peak. In addition, no method has been shown to perform well across different MS platforms. RESULTS: We propose a new model-based approach to feature extraction in which spectra are decomposed into a mixture of distributions derived from peptide models. By incorporating kernel-based smoothing and perceptual similarity for matching distributions, our statistical framework improves existing methodologies in terms of computational efficiency and the accuracy of the results. Our model is parameterized by physical properties and is therefore applicable to different MS instruments and settings. We validate our approach on simulated data and show that it outperforms commonly used tools on real high- and low-resolution MS and MS/MS data sets.
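A stripped-down example of the decomposition idea — fitting a spectrum segment as a mixture of peak-shaped distributions — is given below; it uses plain Gaussian peaks and SciPy's curve_fit rather than the peptide isotope models and perceptual matching described in the abstract.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian peak shapes (a stand-in for peptide isotope envelopes)."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

rng = np.random.default_rng(2)
mz = np.linspace(500, 510, 400)
signal = two_peaks(mz, 1.0, 503.0, 0.4, 0.6, 504.0, 0.4)   # two overlapping features
noisy = signal + rng.normal(0, 0.03, mz.size)

# Fit the mixture; initial guesses matter when the peaks overlap into one broad bump.
p0 = [1.0, 502.5, 0.5, 0.5, 504.5, 0.5]
params, _ = curve_fit(two_peaks, mz, noisy, p0=p0)
print("fitted peak centers:", params[1], params[4])
```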

5.
In biomechanical joint-motion analyses, the continuous motion to be studied is often approximated by a sequence of finite displacements, and the Finite Helical Axis (FHA) or "screw axis" for each displacement is estimated from position measurements on a number of anatomical or artificial landmarks. When FHA parameters are determined directly from raw (noisy) displacement data, both the position and the direction of the FHA are ill-determined, in particular when the sequential displacement steps are small. This implies that, under certain conditions, the continuous pathways of joint motions cannot be adequately described. The purpose of the present experimental study is to investigate the applicability of smoothing (or filtering) techniques in those cases where FHA parameters are ill-determined. Two different quintic-spline smoothing methods were used to analyze the motion data obtained with Roentgenstereophotogrammetry in two experiments: one concerning carpal motions in a wrist-joint specimen, and one involving a kinematic laboratory model in which the axis positions are known a priori. The smoothed and non-smoothed FHA parameter errors were compared. The influences of the number of samples and the size of the sampling interval (displacement step) were investigated, as were the effects of equidistant and non-equidistant sampling conditions and noise invariance.

6.
Bioluminescence techniques allow accurate monitoring of the circadian clock in single cells. We have analyzed bioluminescence data of Per gene expression in mouse SCN neurons and fibroblasts. From these data, we extracted parameters such as damping rate and noise intensity using two simple mathematical models, one describing a damped oscillator driven by noise, and one describing a self-sustained noisy oscillator. Both models describe the data well and enabled us to quantitatively characterize both wild-type cells and several mutants. It has been suggested that the circadian clock is self-sustained at the single-cell level, but we conclude that present data are not sufficient to determine whether the circadian clock of single SCN neurons and fibroblasts is a damped or a self-sustained oscillator. We show how to settle this question, however, by testing the models' predictions of different phases and amplitudes in response to a periodic entrainment signal (zeitgeber).
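To make the "damped oscillator driven by noise" model concrete, the sketch below simulates such an oscillator and reads the damping rate off the decay of its autocorrelation peaks; the parameter values and this estimation shortcut are illustrative assumptions, not the paper's fitting procedure.

```python
import numpy as np
from scipy.signal import correlate, find_peaks

rng = np.random.default_rng(3)

# Noise-driven damped oscillator:  dx = v dt,  dv = (-2*gamma*v - w0^2*x) dt + sigma dW
gamma, w0, sigma = 0.05, 2 * np.pi / 24.0, 0.1     # ~24 h period, weak damping
dt, n = 0.1, 100_000
x = np.zeros(n)
v = 0.0
for t in range(1, n):
    v += (-2 * gamma * v - w0**2 * x[t - 1]) * dt + sigma * np.sqrt(dt) * rng.normal()
    x[t] = x[t - 1] + v * dt

# For a weakly damped oscillator the autocorrelation decays roughly as exp(-gamma*tau),
# so the damping rate can be read from the height of the first autocorrelation peak.
xc = x - x.mean()
ac = correlate(xc, xc, mode="full", method="fft")[n - 1:]
ac /= ac[0]
peaks, _ = find_peaks(ac, distance=int(12 / dt))   # at least half a period apart
tau, height = peaks[0] * dt, ac[peaks[0]]
print(f"damping rate: true {gamma:.3f}, estimated {-np.log(height) / tau:.3f}")
```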

7.

Background  

Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one such quantification technology. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, as a better alternative for MS-based proteomic quantification.
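WaveletQuant itself is not reproduced here, but the underlying idea of wavelet-based denoising — thresholding detail coefficients instead of moving-average or Savitzky-Golay smoothing — can be sketched with PyWavelets; the synthetic peak, wavelet choice, and threshold rule below are assumptions.

```python
import numpy as np
import pywt

rng = np.random.default_rng(4)

# Synthetic chromatographic peak with additive noise.
t = np.linspace(0, 1, 1024)
clean = np.exp(-0.5 * ((t - 0.5) / 0.03) ** 2)
noisy = clean + rng.normal(0, 0.1, t.size)

# Wavelet shrinkage: decompose, soft-threshold the detail coefficients, reconstruct.
coeffs = pywt.wavedec(noisy, "sym8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # robust noise estimate
thresh = sigma * np.sqrt(2 * np.log(t.size))              # universal threshold
coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "sym8")[: t.size]

print("RMSE noisy:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))
```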

8.
Phase separation is a thermodynamic process leading to the formation of compositionally distinct phases. Over the past few years, numerous works have shown that biomolecular phase separation serves as a biogenesis mechanism for diverse intracellular condensates, and that aberrant phase transitions are associated with disease states such as neurodegenerative diseases and cancers. Condensates exhibit rich phase behaviors, including multiphase internal structuring, noise buffering, and compositional tunability. Recent studies have begun to uncover how a network of intermolecular interactions can give rise to the various biophysical features of condensates. Here, we review the phase behaviors of biomolecules, particularly with regard to regular solution models of binary and ternary mixtures. We discuss how these theoretical frameworks explain many aspects of the assembly, composition, and miscibility of diverse biomolecular phases, and highlight how a model-based approach can help elucidate the detailed thermodynamic principles underlying multicomponent intracellular phase separation.
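For orientation, the symmetric binary regular solution free energy that such reviews build on, together with the spinodal condition that follows from it, can be written compactly (in units of k_BT per lattice site; this standard textbook form is stated here for reference, not taken from the abstract):

```latex
\[
  f(\phi) \;=\; \phi \ln \phi \;+\; (1-\phi)\ln(1-\phi) \;+\; \chi\,\phi(1-\phi),
\]
\[
  f''(\phi) = 0 \;\Longrightarrow\; \chi_{\mathrm{s}}(\phi) \;=\; \frac{1}{2\,\phi(1-\phi)},
  \qquad \chi_{\mathrm{c}} = 2 \ \text{at}\ \phi = \tfrac{1}{2}.
\]
```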

9.
Improvements to particle tracking algorithms are required to effectively analyze the motility of biological molecules in complex or noisy systems. A typical single particle tracking (SPT) algorithm detects particle coordinates for trajectory assembly. However, particle detection filters fail for data sets with low signal-to-noise levels. When tracking molecular motors in complex systems, standard techniques often fail to separate the fluorescent signatures of moving particles from background signal. We developed an approach to analyze the motility of kinesin motor proteins moving along the microtubule cytoskeleton of extracted neurons, using the Kullback-Leibler divergence to identify regions where there are significant differences between models of moving particles and background signal. We tested our software on both simulated and experimental data and found a noticeable improvement in SPT capability and a higher identification rate of motors compared with current methods. This algorithm, called Cega, for “find the object,” produces data amenable to conventional blob detection techniques, which can then be used to obtain coordinates for downstream SPT processing. We anticipate that this algorithm will be useful for those interested in tracking moving particles in complex in vitro or in vivo environments.
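Cega's full pipeline is not reproduced here, but the central quantity — the Kullback-Leibler divergence between a local intensity model and a background model — can be computed directly; the synthetic frame, window size, and histogram binning below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(5)

# Synthetic frame: Poisson background plus one dim particle (Gaussian spot).
frame = rng.poisson(10, size=(64, 64)).astype(float)
yy, xx = np.mgrid[:64, :64]
frame += 6.0 * np.exp(-((xx - 40) ** 2 + (yy - 22) ** 2) / (2 * 1.5 ** 2))

# Background intensity model estimated from the whole frame.
bins = np.arange(0, 40)
bg_hist, _ = np.histogram(frame, bins=bins, density=True)
bg_hist += 1e-9                                    # avoid zero bins

def kl_map(img, win=7):
    """KL divergence of each local patch's intensity histogram from the background model."""
    half = win // 2
    out = np.zeros_like(img)
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            p, _ = np.histogram(patch, bins=bins, density=True)
            out[i, j] = entropy(p + 1e-9, bg_hist)  # D_KL(patch || background)
    return out

divergence = kl_map(frame)
i, j = np.unravel_index(np.argmax(divergence), divergence.shape)
print("most particle-like region near:", (i, j))   # should be close to (22, 40)
```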

10.
This paper reviews data acquisition and signal processing issues relevant to producing an amplitude estimate of surface EMG. The paper covers two principal areas. First, methods for reducing noise, artefact and interference in recorded EMG are described. Wherever possible, noise should be reduced at the source via appropriate skin preparation and the use of well-designed active electrodes and signal recording instrumentation. Despite these efforts, some noise will always accompany the desired signal; thus, signal processing techniques for noise reduction (e.g. band-pass filtering, adaptive noise cancellation filters and filters based on the wavelet transform) are discussed. Second, methods for estimating the amplitude of the EMG are reviewed. Most advanced, high-fidelity methods consist of six sequential stages: noise rejection/filtering, whitening, multiple-channel combination, amplitude demodulation, smoothing and relinearization. Theoretical and experimental research related to each of the above topics is reviewed and current recommended practices are described.
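A compressed sketch of the amplitude-estimation chain described above (band-pass noise rejection, amplitude demodulation, smoothing) is given below; whitening and multiple-channel combination are omitted, and the surrogate signal and filter settings are assumptions rather than recommended values.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(6)
fs = 2000.0                                    # sampling rate, Hz
t = np.arange(0, 5, 1 / fs)

# Surrogate surface EMG: band-limited noise whose amplitude follows a slow envelope,
# plus a low-frequency motion artefact.
envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 0.5 * t))
raw = envelope_true * rng.normal(0, 1, t.size) + 0.3 * np.sin(2 * np.pi * 1.0 * t)

# 1) Noise rejection: band-pass 20-450 Hz (a typical surface-EMG band).
b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, raw)

# 2) Amplitude demodulation: squaring (RMS-type detector).
demodulated = filtered ** 2

# 3) Smoothing: low-pass the demodulated signal, then relinearize with a square root.
b_lp, a_lp = butter(2, 3 / (fs / 2), btype="low")
amplitude = np.sqrt(np.clip(filtfilt(b_lp, a_lp, demodulated), 0, None))

print("correlation with true envelope:", np.corrcoef(amplitude, envelope_true)[0, 1])
```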

11.
The fidelity of the trajectories obtained from video-based particle tracking determines the success of a variety of biophysical techniques, including in situ single cell particle tracking and in vitro motility assays. However, the image acquisition process is complicated by system noise, which causes positioning error in the trajectories derived from image analysis. Here, we explore the possibility of reducing the positioning error by applying a Kalman filter, a powerful algorithm for estimating the state of a linear dynamic system from noisy measurements. We show that the optimal Kalman filter parameters can be determined in an appropriate experimental setting, and that the Kalman filter can markedly reduce the positioning error while retaining the intrinsic fluctuations of the dynamic process. We believe the Kalman filter can potentially serve as a powerful tool to infer a trajectory of ultra-high fidelity from noisy images, revealing the details of dynamic cellular processes.
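A minimal one-dimensional constant-velocity Kalman filter illustrates how measurement noise is traded against a model of the underlying dynamics; the process and measurement noise values below are illustrative and would in practice be determined experimentally, as the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(7)

# True trajectory: slow drift plus intrinsic fluctuations; measurements add localization noise.
dt, n = 0.05, 400
true_pos = np.cumsum(rng.normal(0.2 * dt, 0.02, n))
measured = true_pos + rng.normal(0, 0.15, n)

# Constant-velocity state-space model: state = [position, velocity].
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
H = np.array([[1.0, 0.0]])                 # only position is observed
Q = np.diag([1e-4, 1e-3])                  # process noise (intrinsic dynamics)
R = np.array([[0.15 ** 2]])                # measurement noise

x = np.array([measured[0], 0.0])
P = np.eye(2)
filtered = np.empty(n)
for k in range(n):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the noisy measurement.
    innov = measured[k] - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ innov).ravel()
    P = (np.eye(2) - K @ H) @ P
    filtered[k] = x[0]

print("RMS error raw:     ", np.sqrt(np.mean((measured - true_pos) ** 2)))
print("RMS error filtered:", np.sqrt(np.mean((filtered - true_pos) ** 2)))
```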

12.
A data processing method is described which reduces the effects of t1 noise artifacts and improves the presentation of 2D NMR spectral data. A t1 noise profile is produced by measuring the average noise in each column. This profile is then used to determine weighting coefficients for a sliding weighted smoothing filter that is applied to each row, such that the amount of smoothing each point receives is proportional to both its estimated t1 noise level and the level of t1 noise of neighbouring points. Thus, points in the worst t1 noise bands receive the greatest smoothing, whereas points in low-noise regions remain relatively unaffected. In addition, weighted smoothing allows points in low-noise regions to influence neighbouring points in noisy regions. This method is also effective in reducing the noise artifacts associated with the solvent resonance in spectra of biopolymers in aqueous solution. Although developed primarily to improve the quality of 2D NMR spectra of biopolymers prior to automated analysis, this approach should enhance processing of spectra of a wide range of compounds and can be used whenever noise occurs in discrete bands in one dimension of a multi-dimensional spectrum.

13.
Cao J, Fussmann GF, Ramsay JO. Biometrics 2008, 64(3):959-967.
Ordinary differential equations (ODEs) are widely used in ecology to describe the dynamical behavior of systems of interacting populations. However, systems of ODEs rarely provide quantitative solutions that are close to real field observations or experimental data, because natural systems are subject to environmental and demographic noise and ecologists are often uncertain about the correct parameterization. In this article we introduce "parameter cascades" as an improved method to estimate ODE parameters such that the corresponding ODE solutions fit the real data well. The method is based on modified penalized smoothing, with the penalty defined by the ODEs, and on a generalization of profiled estimation, which leads to fast estimation and good precision for ODE parameters from noisy data. The method is applied to a set of ODEs originally developed to describe an experimental predator–prey system that undergoes oscillatory dynamics. The new parameterization considerably improves the fit of the ODE model to the experimental data sets. At the same time, our method reveals that important structural assumptions that underlie the original ODE model are essentially correct. The mathematical formulations of the two nonlinear interaction terms (functional responses) that link the ODEs in the predator–prey model are validated by estimating the functional responses nonparametrically from the real data. We suggest two major applications of "parameter cascades" to ecological modeling: it can be used to estimate parameters when the original data are noisy or missing, or when no reliable a priori estimates are available; and it can help to validate the structural soundness of the mathematical modeling approach.
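The parameter-cascades machinery itself is not shown here; as a point of comparison, the sketch below fits parameters of a generic predator–prey ODE by plain trajectory matching (solve_ivp plus nonlinear least squares), the simpler approach that parameter cascades improves upon. The model, initial conditions, and noise level are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def lotka_volterra(t, z, a, b, c, d):
    """Classic predator-prey ODEs (a stand-in for the experimental system in the paper)."""
    prey, pred = z
    return [a * prey - b * prey * pred, c * prey * pred - d * pred]

# Simulate "experimental" data from known parameters, then add observation noise.
rng = np.random.default_rng(8)
true_params = (1.0, 0.5, 0.3, 0.8)
t_obs = np.linspace(0, 20, 60)
sol = solve_ivp(lotka_volterra, (0, 20), [2.0, 1.0], t_eval=t_obs, args=true_params)
data = sol.y + rng.normal(0, 0.1, sol.y.shape)

def residuals(theta):
    # Solve the ODE for candidate parameters and compare with the noisy observations.
    fit = solve_ivp(lotka_volterra, (0, 20), [2.0, 1.0], t_eval=t_obs, args=tuple(theta))
    return (fit.y - data).ravel()

est = least_squares(residuals, x0=[0.8, 0.4, 0.4, 0.6], bounds=(0, 5))
print("true parameters:", true_params)
print("fitted:         ", np.round(est.x, 3))
```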

14.
Biophysical Journal 2021, 120(20):4472-4483.
Single-molecule (SM) approaches have provided valuable mechanistic information on many biophysical systems. As technological advances lead to ever-larger data sets, tools for rapid analysis and identification of molecules exhibiting the behavior of interest are increasingly important. In many cases the underlying mechanism is unknown, making unsupervised techniques desirable. The divisive segmentation and clustering (DISC) algorithm is one such unsupervised method that idealizes noisy SM time series much faster than computationally intensive approaches without sacrificing accuracy. However, DISC relies on a user-selected objective criterion (OC) to guide its estimation of the ideal time series. Here, we explore how different OCs affect DISC’s performance for data typical of SM fluorescence imaging experiments. We find that OCs differing in their penalty for model complexity each optimize DISC’s performance for time series with different properties, such as signal-to-noise ratio and number of sample points. Using a machine learning approach, we generate a decision boundary that allows unsupervised selection of OCs based on the input time series to maximize performance for different types of data. This is particularly relevant for SM fluorescence data sets, which often have signal-to-noise ratios near the derived decision boundary and include time series of nonuniform length because of stochastic bleaching. Our approach, AutoDISC, allows unsupervised per-molecule optimization of DISC, which will substantially assist in the rapid analysis of high-throughput SM data sets with noisy samples and nonuniform time windows.

15.
Functional neuroimaging techniques such as functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS) can be used to isolate an evoked response to a stimulus from significant background physiological fluctuations. Data analysis approaches typically use averaging or linear regression to remove this physiological baseline, with varying degrees of success. Biophysical model-based analysis of the functional hemodynamic response has previously been advanced with the Balloon and Windkessel models. In the present work, a biophysical model of systemic and cerebral circulation and gas exchange is applied to resting-state NIRS neuroimaging data from 10 human subjects. The model further includes dynamic cerebral autoregulation, which modulates the cerebral arteriole compliance to control cerebral blood flow. This biophysical model allows prediction, from noninvasive blood pressure measurements, of the background hemodynamic fluctuations in the systemic and cerebral circulations. Significantly higher correlations with the NIRS data were found using the biophysical model predictions than with blood pressure regression or transfer function analysis (multifactor ANOVA, p < 0.0001). This finding supports the further development and use of biophysical models for removing baseline activity in functional neuroimaging analysis. Future extensions of this work could model changes in cerebrovascular physiology that occur during development, aging, and disease.

16.
Automatic recording of birdsong is becoming the preferred way to monitor and quantify bird populations worldwide. Programmable recorders allow recordings to be obtained at all times of day and year for extended periods of time. Consequently, there is a critical need for robust automated birdsong recognition. One prominent obstacle to achieving this is the low signal-to-noise ratio in unattended recordings. Field recordings are often very noisy: birdsong is only one component in a recording, which also includes noise from the environment (such as wind and rain), other animals (including insects), and human-related activities, as well as noise from the recorder itself. We describe a method of denoising using a combination of the wavelet packet decomposition and band-pass or low-pass filtering, and present experiments that demonstrate an order-of-magnitude improvement in noise reduction on natural noisy bird recordings.

17.
Unbiased interpretation of noisy recordings of single molecular motors remains a challenging task. To address this issue, we have developed robust algorithms based on hidden Markov models (HMMs) of motor proteins. The basic algorithm, called the variable-stepsize HMM (VS-HMM), was introduced in the previous article. It improves on currently available Markov-model-based techniques by allowing for arbitrary distributions of step sizes, and shows excellent convergence properties for the characterization of staircase motor time courses in the presence of large measurement noise. In this article, we extend the VS-HMM framework for better performance with experimental data. The extended algorithm, the variable-stepsize integrating-detector HMM (VSI-HMM), better models the data-acquisition process and accounts for random baseline drifts. Further, maximum a posteriori estimation is provided as an extension. When used as a blind step detector, the VSI-HMM outperforms conventional step detectors. The fidelity of the VSI-HMM is tested with simulations, and the method is applied to in vitro myosin V data, where a small population of 10 nm steps is identified. It is also applied to an in vivo recording of melanosome motion, where strong evidence is found for repeated, bidirectional steps smaller than 8 nm in size, implying that multiple motors simultaneously carry the cargo.

18.
We discuss the statistics of spike trains for different types of integrate-and-fire neurons and different types of synaptic noise models. In contrast with the usual approaches in neuroscience, mainly based on statistical physics methods such as the Fokker-Planck equation or mean-field theory, we take the point of view of stochastic calculus to characterize neurons in noisy environments. We present four stochastic calculus techniques that can be used to find the probability distributions attached to spike trains. We illustrate the power of these techniques for four types of widely used neuron models. Although these techniques are mathematically intricate, we believe that they can be useful for answering questions in neuroscience that naturally arise from the variability of neuronal activity. For each technique we indicate its range of applicability and its limitations.
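For context, the simplest object such techniques characterize analytically — the interspike-interval distribution of a noisy leaky integrate-and-fire neuron — can also be obtained by direct simulation, as in the sketch below; the membrane parameters and drive are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Leaky integrate-and-fire with white-noise input:
#   tau * dV = (-(V - V_rest) + mu) dt + sigma * sqrt(tau) * dW,  spike when V >= V_th
tau, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -65.0      # ms, mV
mu, sigma, dt, t_max = 16.0, 4.0, 0.1, 20_000.0             # drive just above threshold

v, t, last_spike, isis = v_rest, 0.0, 0.0, []
while t < t_max:
    dv = (-(v - v_rest) + mu) * dt / tau + sigma * np.sqrt(dt / tau) * rng.normal()
    v += dv
    t += dt
    if v >= v_th:                       # threshold crossing -> spike, then reset
        isis.append(t - last_spike)
        last_spike, v = t, v_reset

isis = np.array(isis)
cv = isis.std() / isis.mean()           # coefficient of variation of the ISIs
print(f"{isis.size} spikes, mean ISI = {isis.mean():.1f} ms, CV = {cv:.2f}")
```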

19.
Fruit infected by pests or diseases, and fruit harvested at different levels of ripeness, cause a lack of marketability, a decrease in economic value, and an increase in crop waste. In this study, we propose a robust and generalizable deep convolutional neural network (CNN) model, obtained by fine-tuning pre-trained models, for detecting black spot disease and ripeness levels in orange fruit. A dataset containing 1896 confirmed orange images collected on the farm, in four classes (unripe, half-ripe, ripe, and infected with black spot disease), was used. In order to prevent overfitting and increase the robustness and generalizability of the model, instead of using conventional data augmentation techniques, a novel learning-to-augment strategy that creates new data using noisy and restored images was employed. Controllers using the Bayesian optimization algorithm were utilized to select the optimal noise parameters of Gaussian, speckle, Poisson, and salt-and-pepper noise to generate new noisy images. A convolutional autoencoder model was developed to produce newly restored images affected by the optimized noise density. The dataset augmented by the best policies of the learning-to-augment strategy was used to fine-tune several pre-trained models (GoogleNet, ResNet18, ResNet50, ShuffleNet, MobileNetv2, and DenseNet201). The results showed that the learning-to-augment strategy with the fine-tuned ResNet50 achieved the best performance, with 99.5% accuracy and 100% F-measure when images infected with black spot disease were assigned as the positive class. The proposed automatic disease and fruit quality monitoring technique can also be used for the detection of other diseases in agriculture and forestry.
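The fine-tuning step — replacing the classifier head of a pretrained ResNet50 with a four-class output and training it on the augmented images — can be sketched with PyTorch and a recent torchvision; the learning-to-augment controllers and the real data pipeline are omitted, and the hyperparameters and dummy batch below are placeholders.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace the classifier head
# with a 4-way output (unripe, half-ripe, ripe, black spot disease).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():            # optionally freeze the backbone first
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)   # new head is trainable by default

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch (a real run would iterate over
# a DataLoader of augmented orange images).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", float(loss))
```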

20.
Smoothing and differentiation of noisy data using spline functions requires the selection of an unknown smoothing parameter. The method of generalized cross-validation provides an excellent estimate of the smoothing parameter from the data itself, even when the amount of noise associated with the data is unknown. In the present model only a single smoothing parameter must be obtained, but in a more general context the number may be larger. In an earlier work, smoothing of the data was accomplished by solving a minimization problem using the technique of dynamic programming. This paper shows how the computations required by generalized cross-validation can be performed as a simple extension of the dynamic programming formulas. The results of numerical experiments are also included.
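The sketch below illustrates generalized cross-validation numerically for a discrete (Whittaker-type) smoother, where the hat matrix is explicit; it is not the paper's dynamic-programming formulation, and the penalty, grid of smoothing parameters, and test function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# Noisy samples of a smooth function; the noise level is treated as unknown.
n = 200
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

# Whittaker-type smoother: minimize ||y - f||^2 + lam * ||D2 f||^2,
# whose solution is f = H(lam) y with hat matrix H = (I + lam * D2'D2)^-1.
D2 = np.diff(np.eye(n), n=2, axis=0)        # second-difference penalty matrix
I = np.eye(n)

def gcv(lam):
    """Generalized cross-validation score for a given smoothing parameter."""
    H = np.linalg.solve(I + lam * D2.T @ D2, I)          # hat matrix
    resid = (I - H) @ y
    return n * np.sum(resid ** 2) / np.trace(I - H) ** 2

lams = np.logspace(-2, 6, 40)
scores = [gcv(l) for l in lams]
best = lams[int(np.argmin(scores))]
print(f"GCV-selected smoothing parameter: {best:.3g}")

f_hat = np.linalg.solve(I + best * D2.T @ D2, y)         # smoothed curve
print("residual std:", np.std(y - f_hat))
```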
