Related Articles
20 related articles found (search time: 15 ms).
1.
Single-channel current seems to be one of the most obvious characteristics of ion transport. But in some cases, its determination is more complex than anticipated at first glance. Problems arise from fast gating in time series of patch-clamp current, which can lead to a reduced apparent (measured) single-channel current. The reduction is caused by undetected averaging over closed and open intervals in the anti-aliasing filter. Here it is shown that fitting the measured amplitude histograms by Beta distributions is an efficient tool for reconstructing the true current level from measured data. This approach becomes even more powerful when it is applied to per-level amplitude distributions. Simulated time series are employed to show that the error sum is a good guideline for finding the correct current level. Furthermore, they show that a Markov model smaller than the one used for gating analysis can be used for current determination (mostly O-C, i.e., open-closed). This increases the reliability of the Beta fit. Knowledge of the true current level is not only important for understanding the biophysical properties of the channel; it is also a prerequisite for the correct determination of the rate constants of gating. The approach is applied to measured data. The examples reveal the limits of the analysis imposed by the signal-to-noise ratio and the shape of the amplitude distribution. One application shows that the negative slope of the I-V curve of the human MaxiK channel expressed in HEK293 cells is caused by fast gating.
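The averaging mechanism described in this abstract is easy to reproduce. The sketch below (our own toy simulation, not the authors' code; the current value, flip probability and boxcar width are invented) shows how a filter that averages over undetected open/closed intervals pulls the apparent open level below the true one:

```python
import random

random.seed(0)

I_TRUE = 10.0      # true open-channel current (arbitrary units)
P_OPEN = 0.5       # stationary open probability of the fast gate
N = 200_000

# Two-state (O-C) chain sampled far above the gating rates: the state
# flips with probability 0.4 per sample, i.e. dwell times of ~2.5 samples.
state, trace = 1, []
for _ in range(N):
    if random.random() < 0.4:
        state = 1 - state
    trace.append(I_TRUE * state)

# Crude stand-in for the anti-aliasing filter: non-overlapping boxcar
# averages over 50 samples.  Many open/closed intervals fall inside one
# window, so the apparent open level collapses toward P_OPEN * I_TRUE.
W = 50
filtered = [sum(trace[i:i + W]) / W for i in range(0, N - W, W)]
apparent_level = max(filtered)   # highest level surviving the filter
```

With gating this fast, even the maximum of the filtered trace stays well below I_TRUE, which is exactly the situation the beta fit is designed to correct.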

2.
Two-dimensional (2D) dwell-time analysis of time series of single-channel patch-clamp current was improved by employing a Hinkley detector for jump detection, introducing a genetic fit algorithm, replacing maximum likelihood by a least-squares criterion, averaging over a field of 9 or 25 bins in the 2D plane and normalizing per measuring time, not per event. Using simulated time series for the generation of the "theoretical" 2D histograms from assumed Markov models enabled the incorporation of the measured filter response and noise. The effects of these improvements were tested with respect to temporal resolution, accuracy of the determination of the rate constants of the Markov model, sensitivity to noise, and the required open time and length of the time series. The 2D fit was better than the classical hidden Markov model (HMM) fit in all tested fields. The temporal resolution of the two most efficient algorithms, the 2D fit and the subsequent HMM/beta fit, enabled the determination of rate constants 10 times faster than the corner frequency of the low-pass filter. The 2D fit was much less sensitive to noise. Computing time remains a drawback of the 2D fit (about 100 times that of the HMM fit), but is now manageable on personal computers. The studies revealed a fringe benefit of 2D analysis: it can reveal the "true" single-channel current when the filter has reduced the apparent current level by averaging over undetected fast gating.
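As a minimal illustration of the 2D idea (not the authors' implementation; the event list and the log-binning scheme are invented), adjacent open/closed dwell-time pairs from an idealized record can be collected into a two-dimensional histogram:

```python
import math
from collections import Counter

# Idealized event list as a jump detector would deliver it:
# alternating (level, dwell time in ms) pairs.
events = [("O", 1.2), ("C", 0.4), ("O", 3.1), ("C", 2.2),
          ("O", 0.7), ("C", 5.0), ("O", 2.4), ("C", 0.9), ("O", 1.6)]

def hist2d(events, bins_per_decade=2):
    """2D dwell-time histogram: each adjacent (open, closed) pair becomes
    one count at (log-binned open dwell, log-binned closed dwell)."""
    def bin_of(t):
        return math.floor(math.log10(t) * bins_per_decade)
    counts = Counter()
    for (lv1, t1), (lv2, t2) in zip(events, events[1:]):
        if lv1 == "O" and lv2 == "C":
            counts[(bin_of(t1), bin_of(t2))] += 1
    return counts

h = hist2d(events)
```

The correlations between adjacent open and closed dwell times captured this way are what the 2D fit exploits beyond ordinary one-dimensional dwell-time histograms.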

3.

4.
The maximum-likelihood technique for the direct estimation of rate constants from the measured patch-clamp current is extended to the analysis of multi-channel recordings, including channels with subconductance levels. The algorithm utilizes a simplified approach for the calculation of the matrix exponentials of the probability matrix from the rate constants of the Markov model of the involved channel(s) by making use of the Kronecker sum and product. The extension to multi-channel analysis is tested by application to simulated data. For these tests, three different channel models were selected: a two-state model, a three-state model with two open states of different conductance, and a three-state model with two closed states. For the simulations, time series of these models were calculated from the related first-order, finite-state, continuous-time Markov processes. Blue background noise was added, and the signals were filtered by a digital filter similar to the anti-aliasing low-pass. The tests showed that the fit algorithm yielded good estimates of the original rate constants from time series of simulated records with up to four independent and identical channels, even at signal-to-noise ratios as low as 2. The number of channels in a record can be determined from the dependence of the likelihood on channel number: for large enough data sets, it takes on a maximum when the assumed channel number is equal to the "true" channel number.
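The Kronecker-sum construction mentioned in the abstract can be sketched for the simplest case, two independent and identical two-state channels (rates invented; pure-Python matrices for self-containment):

```python
# Joint generator of two independent O-C channels: Q ⊕ Q = Q ⊗ I + I ⊗ Q.
k_oc, k_co = 100.0, 50.0                 # closing / opening rates (s^-1)
Q1 = [[-k_oc, k_oc],
      [ k_co, -k_co]]

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    n, m, p, q = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(m * q)] for i in range(n * p)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def kron_sum(A, B):
    """Kronecker sum: generator of two independent Markov processes."""
    return mat_add(kron(A, eye(len(B))), kron(eye(len(A)), B))

Q2 = kron_sum(Q1, Q1)   # 4x4 generator of the joint two-channel process
```

Each additional identical channel adds another Kronecker-sum factor, which is why the joint generator stays cheap to build even for several channels.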

5.
Investigating links between nervous system function and behavior requires monitoring neuronal activity at a range of spatial and temporal scales. Here, we summarize recent progress in applying two distinct but complementary approaches to the study of network dynamics in the neocortex. Mesoscopic calcium imaging allows simultaneous monitoring of activity across most of the cortex at moderate spatiotemporal resolution. Electrophysiological recordings provide extremely high temporal resolution of neural signals at multiple targeted locations. A number of recent studies have used these tools to reveal novel patterns of activity across distributed cortical subnetworks. This growing body of work strongly supports the hypothesis that the dynamic coordination of spatially distinct regions is a fundamental aspect of cortical function that supports cognition and behavior.

6.
A method is presented for rapidly extracting single-channel transition rate constants from patch-clamp recordings containing signals from several channels. The procedure is based on a simultaneous fit of the observed dwell-time distributions for all conductance levels, using a maximum likelihood approach. This algorithm allows estimation of single-channel rate constants in cases where more advanced methods may be impractical because of their extremely long computational time. A correction is included for the limited time resolution of the recording system, according to theory developed by Roux and Sauvé (Biophys. J. 48:149-158, 1985), by accounting for the impact of undetected transitions on the dwell-time distributions, and by introducing an improved practical implementation of a fixed dead time for the case of more than one channel. This feature allows application of the method to noisy data, after filtering. A computer program implementing the method is tested successfully on a variety of simulated multichannel current traces.
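The dead-time correction idea can be illustrated in its simplest form: for a single exponential dwell-time distribution observed through a detector that drops every event shorter than a fixed dead time, the memoryless property gives a closed-form ML estimate (rates and dead time below are invented, and this toy ignores the multichannel machinery of the paper):

```python
import random

random.seed(1)

K_TRUE = 2.0      # true closing rate (ms^-1): mean open time 0.5 ms
T_DEAD = 0.2      # fixed dead time of the detector (ms)

# Exponential open dwell times; the detector keeps only events longer
# than the dead time (the simplest missed-events model).
dwells = [random.expovariate(K_TRUE) for _ in range(50_000)]
observed = [d for d in dwells if d >= T_DEAD]

# By the memoryless property, an observed dwell is T_DEAD plus a fresh
# exponential, so the ML estimate is 1 / (mean - T_DEAD); the naive
# 1 / mean is biased low.
mean_obs = sum(observed) / len(observed)
k_hat = 1.0 / (mean_obs - T_DEAD)
k_naive = 1.0 / mean_obs
```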

7.
Quantitative patch-clamp analysis based on dwell-time histograms has to deal with the problem of missed events. The correction of the evaluated time constants has to take into account the characteristics of the detector used for the reconstruction of the time series. Previous approaches used a simple model of the detector, based on the assumption that all events shorter than the temporal resolution t_res were missed, irrespective of the preceding events. Rather than the standard assumption of a fixed dead time, we introduce a more realistic detector model based on a continuous-time version of the Hinkley detector. The combined state of the channel and the detector obeys a Markov model, which is governed by a Fokker-Planck-Kolmogorov partial differential equation. The steady-state solution leads to the determination of the apparent time constants tau_o and tau_c as functions of the true rate constants k_OC and k_CO and the temporal resolution t_res of the detector. Simulations with different kinds of detectors, including the Bessel filter with half-amplitude threshold detection, are performed. They show that our new equation predicts the dependence of tau_c and tau_o on k_OC, k_CO, and t_res better than the standard equation used until now.
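For reference, a discrete-time Hinkley detector is essentially a one-sided cumulative-sum test; the minimal sketch below (our own implementation, with invented levels, noise and threshold, not the paper's continuous-time formulation) shows the jump-detection principle:

```python
import random

def hinkley_detect(signal, level_a, level_b, threshold):
    """Declare a jump from level_a to level_b when the cumulative
    evidence g exceeds the threshold; return the sample index, or None."""
    mid = 0.5 * (level_a + level_b)
    sign = 1.0 if level_b > level_a else -1.0
    g = 0.0
    for i, x in enumerate(signal):
        g = max(0.0, g + sign * (x - mid))   # evidence for a jump to level_b
        if g > threshold:
            return i
    return None

random.seed(2)
trace = ([random.gauss(0.0, 0.5) for _ in range(200)] +
         [random.gauss(2.0, 0.5) for _ in range(200)])
hit = hinkley_detect(trace, 0.0, 2.0, threshold=5.0)
```

The detector's delay between the true jump (sample 200) and the detection index is exactly the kind of resolution limit the missed-events correction has to model.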

8.
Microsecond gating of ion channels can be evaluated by fitting beta distributions to amplitude histograms of measured time series. The shape of these histograms is determined not only by the rate constants of the gating process (in relation to the filter frequency) but also by baseline noise and shot noise, resulting from the stochastic nature of ion flow. Under normal temporal resolution, the small shot noise can be ignored. This simplification may no longer be legitimate when rate constants reach the range above 1 μs^-1. Here, the influence of shot noise is studied by means of simulated time series for several values of single-channel current of the fully open state and baseline noise. Under realistic optimal conditions (16 pA current, 1 pA noise, 50 kHz bandwidth), ignoring the shot noise leads to an underestimation of the rate constants above 1 μs^-1 by a factor of about 2.5. However, in that range, the scatter of the evaluated rate constants is at least of the same magnitude, obscuring the systematic error. The incorporation of shot noise into the analysis will become more important when amplifiers with significantly reduced noise become available.
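A hedged sketch of the noise-free starting point of the beta fit: for a two-state channel behind a first-order filter, the amplitude histogram is classically approximated by a beta distribution whose parameters are the gating rates multiplied by the filter time constant (the rates and corner frequency below are invented; baseline and shot noise, the subject of this abstract, would broaden this shape further):

```python
import math

# Beta approximation for a filtered two-state channel: with filter time
# constant TAU_F, the shape parameters are a = k_CO*TAU_F, b = k_OC*TAU_F
# on the normalized amplitude axis [0, 1].
k_oc, k_co = 4.0e5, 6.0e5                  # gating rates (s^-1)
TAU_F = 1.0 / (2 * math.pi * 5.0e4)        # first-order filter, 50 kHz corner

a = k_co * TAU_F
b = k_oc * TAU_F
beta_mean = a / (a + b)                    # mean of Beta(a, b)
beta_var = a * b / ((a + b) ** 2 * (a + b + 1))
p_open = k_co / (k_co + k_oc)              # stationary open probability
```

The mean reproduces the stationary open probability by construction; the width of the beta component is the extra broadening that fast gating adds on top of baseline and shot noise.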

9.
Dendrometers are vital tools for studying the response of trees to intra-annual environmental changes at different temporal resolutions, ranging from hourly and daily to weekly. Dendrometers are increasingly used in forest management and tree physiological studies. Besides the data analysis, data processing is also challenging, time-consuming and potentially error-prone due to the immense number of measurements generated by self-registering electronic dendrometers. We present the package 'dendRoAnalyst', based on the R statistical software, to process and analyse dendrometer data using various approaches. This package offers algorithms for handling and pre-cleaning dendrometer data before the application of subsequent analytical steps. This includes identifying and erasing artefacts in dendrometer datasets not related to actual stem circumference change, identifying data gaps within records, and changing the temporal resolution of the record. Furthermore, the package can calculate different daily statistics of dendrometer data, including the daily amplitude of tree growth. The package dendRoAnalyst is therefore intended to provide researchers with a collection of functions for handling and analysing dendrometer data.
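dendRoAnalyst itself is an R package; the Python sketch below (invented data and function names, not the package's API) only illustrates two of the pre-cleaning and statistics steps described above, gap detection and daily amplitude:

```python
from datetime import datetime, timedelta

# Hourly stem-radius readings (µm) over two days, with one artificial gap.
t0 = datetime(2023, 6, 1)
readings = [(t0 + timedelta(hours=h), 100.0 + 5.0 * (h % 24) / 24)
            for h in range(48)]
del readings[10:13]          # simulate a 3-hour logger outage

def find_gaps(rows, expected=timedelta(hours=1)):
    """Flag every pair of consecutive timestamps further apart than expected."""
    return [(a, b) for (a, _), (b, _) in zip(rows, rows[1:]) if b - a > expected]

def daily_amplitude(rows):
    """Max minus min reading per calendar day."""
    days = {}
    for t, v in rows:
        days.setdefault(t.date(), []).append(v)
    return {d: max(vs) - min(vs) for d, vs in days.items()}

gaps = find_gaps(readings)
amp = daily_amplitude(readings)
```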

10.
The anomalous mole fraction effect (AMFE) of the K+ channel in excised patches of the tonoplast of Chara showed a minimum of apparent open-channel current at 20 mM Tl+ and 230 mM K+. Time series obtained at a sampling rate of 100 kHz (filter 25 kHz) were analyzed by three methods to find out whether the AMFE results from an effect on gating or on the conductivity of the open state. Fitting the amplitude histograms by a superposition of Gaussians showed a broadening in the presence of Tl+. Dwell-time analysis based on an O-O-C-C-C model failed to evaluate rate constants above the filter frequency. Thus, the absence of any reduction of apparent open-channel current in time series simulated with the evaluated rate constants could not be taken as evidence against the hypothesis of gating. Finally, a direct fit of the measured time series using five different 5-state hidden Markov models revealed that the presence of Tl+ changed the rate constants in such a way that the number of transitions into the short-lived open state (30 μs) increased strongly compared to those in the absence of Tl+. These models explain a 25% reduction of apparent single-channel current amplitude through a rapid gating mechanism.
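The first of the three methods, fitting the amplitude histogram by a superposition of Gaussians, can be sketched with a toy two-component EM fit (simulated amplitudes; all numbers are our own, not the Chara data):

```python
import math
import random

random.seed(4)

# Toy amplitude sample: baseline around 0 pA and open level around 1.5 pA,
# both with 0.3 pA gaussian noise.
data = ([random.gauss(0.0, 0.3) for _ in range(3000)] +
        [random.gauss(1.5, 0.3) for _ in range(3000)])

def em_two_gauss(x, iters=50):
    """EM fit of two gaussian components with a shared width; returns means."""
    m1, m2, s, w = min(x), max(x), 0.5, 0.5
    for _ in range(iters):
        r = []
        for v in x:                       # E-step: responsibilities
            p1 = w * math.exp(-((v - m1) / s) ** 2 / 2)
            p2 = (1 - w) * math.exp(-((v - m2) / s) ** 2 / 2)
            r.append(p1 / (p1 + p2))
        w = sum(r) / len(x)               # M-step: weight, means, width
        m1 = sum(ri * v for ri, v in zip(r, x)) / sum(r)
        m2 = sum((1 - ri) * v for ri, v in zip(r, x)) / (len(x) - sum(r))
        s = math.sqrt(sum(ri * (v - m1) ** 2 + (1 - ri) * (v - m2) ** 2
                          for ri, v in zip(r, x)) / len(x))
    return m1, m2

m1, m2 = em_two_gauss(data)
```

In the AMFE study it was the broadening of such components under Tl+, rather than a shift of their means, that pointed toward fast gating instead of a genuinely lower open-channel conductance.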

11.
12.
13.
We present an extensive investigation of the accuracy and precision of temporal image correlation spectroscopy (TICS). Using simulations of laser scanning microscopy image time series, we investigate the effect of spatiotemporal sampling, particle density, noise, sampling frequency, and photobleaching of fluorophores on the recovery of transport coefficients and number densities by TICS. We show that the recovery of transport coefficients is usually limited by spatial sampling, while the measurement of accurate number densities is restricted by background noise in an image series. We also demonstrate that photobleaching of the fluorophore causes a consistent overestimation of diffusion coefficients and flow rates, and a severe underestimation of number densities. We derive a bleaching correction equation that removes both of these biases when used to fit temporal autocorrelation functions, without increasing the number of fit parameters. Finally, we image the basal membrane of a CHO cell expressing EGFP/alpha-actinin using two-photon microscopy, analyze a subregion of this series using TICS, and apply the bleaching correction. We show that the photobleaching correction can be determined simply by using the average image intensities from the time series, and we use the simulations to provide good estimates of the accuracy and precision of the number density and transport coefficients measured with TICS.
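The quantity at the heart of TICS is the normalized temporal intensity autocorrelation G(tau) = <δi(t) δi(t+tau)> / <i>^2. The sketch below evaluates it directly from its definition on a toy intensity trace (an AR(1) process standing in for real image-series fluctuations; no bleaching term is included):

```python
import random

random.seed(3)

# Toy stand-in for the mean intensity of an image time series: an AR(1)
# fluctuation (correlation time ~20 frames) around a constant background.
n = 5000
x, series = 0.0, []
for _ in range(n):
    x = 0.95 * x + random.gauss(0.0, 1.0)
    series.append(10.0 + x)

mean = sum(series) / n

def G(tau):
    """Normalized temporal autocorrelation <dI(t) dI(t+tau)> / <I>^2."""
    num = sum((series[t] - mean) * (series[t + tau] - mean)
              for t in range(n - tau))
    return (num / (n - tau)) / mean ** 2
```

Photobleaching adds a slow decay to the mean intensity, which is exactly what distorts G(tau) and motivates the correction derived in the paper.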

14.

Background

The univariate approaches used to analyze heart rate variability have recently been extended by several bivariate approaches with respect to cardiorespiratory coordination. Some approaches are explicitly based on mathematical models which investigate the synchronization between weakly coupled complex systems. Others use a heuristic approach, i.e. characteristic features of both time series, to develop appropriate bivariate methods.

Objective

In this study six different methods used to analyze cardiorespiratory coordination have been quantitatively compared with respect to their performance (no. of sequences with cardiorespiratory coordination, no. of heart beats coordinated with respiration). Five of these approaches have been suggested in the recent literature whereas one method originates from older studies.

Results

The methods were applied to the simultaneous recordings of an electrocardiogram and a respiratory trace of 20 healthy subjects during night-time sleep from 0:00 to 6:00. The best temporal resolution and the highest number of coordinated heart beats were obtained with the analysis of 'Phase Recurrences'. Apart from the oldest method, all methods showed similar qualitative results although the quantities varied between the different approaches. In contrast, the oldest method detected considerably fewer coordinated heart beats since it only used part of the maximum amount of information available in each recording.

Conclusions

The method of 'Phase Recurrences' should be the method of choice for the detection of cardiorespiratory coordination since it offers the best temporal resolution and the highest number of coordinated sequences and heart beats. Excluding the oldest method, the results of the heuristic approaches may also be interpreted in terms of the mathematical models.
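A toy version of the phase-recurrence idea (our own sketch with an artificially perfect 4:1 heartbeat-to-respiration locking; the tolerance is invented): each heartbeat gets a phase within the respiratory cycle, and a beat counts as coordinated when its phase recurs one respiratory cycle later:

```python
RESP_T = 4.0                              # respiratory period (s)
beats = [float(i) for i in range(40)]     # RR interval 1 s: 4 beats per breath

# Phase of each heartbeat within the respiratory cycle, in [0, 1).
phases = [(t % RESP_T) / RESP_T for t in beats]

def coordinated(ph, beats_per_cycle=4, eps=0.025):
    """A beat is 'coordinated' if its phase recurs one respiratory cycle
    (beats_per_cycle beats) later, within tolerance eps."""
    return [abs(ph[i + beats_per_cycle] - ph[i]) < eps
            for i in range(len(ph) - beats_per_cycle)]

flags = coordinated(phases)
```

Real recordings show such phase recurrences only in intermittent episodes; counting those episodes and the beats inside them gives the performance measures compared in the study.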

15.
The possibility that the 24-h rhythm output is the composite expression of ultradian oscillators of varying periodicities was examined by assessing the effect of external continuous or pulsed (20-minute) Gonadotropin-releasing hormone (GnRH) infusions on in vitro luteinizing hormone (LH) release patterns from female mouse pituitaries during 38-h study spans. Applying stepwise analyses (spectral, cosine fit, best-fit curve, and peak detection analyses) revealed that the waveform shape of LH release output patterns over time is composed of several ultradian oscillations of different periods. The results further substantiated previous observations indicating the pituitary functions as an autonomous clock. The GnRH oscillator functions as a pulse generator and amplitude regulator, but it is not the oscillator that drives the ultradian LH release rhythms. At different stages of the estrus cycle, the effect of GnRH on the expression of ultradian periodicities varies, resulting in the modification of their amplitudes but not their periods. The functional output from the system of ultradian oscillators may superimpose a "circadian or infradian phenotype" on the observed secretion pattern. An "amplitude control" hypothesis is proposed: the temporal pattern of LH release is governed by several oscillators that function in conjunction with one another and are regulated by an amplitude-controlled mechanism. Simulated models show that such a mechanism results in better adaptive response to environmental requirements than does a single circadian oscillator. (Chronobiology International, 18(3), 399-412, 2001)
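The spectral step of such a stepwise analysis can be sketched as a discrete Fourier transform that picks the dominant ultradian period out of a sampled release trace (toy noiseless signal; the period and sampling scheme are invented):

```python
import cmath
import math

DT, N = 0.5, 80                # 40 h sampled every 30 min
PERIOD = 4.0                   # ultradian period to recover (h)
series = [5.0 + 2.0 * math.sin(2 * math.pi * (i * DT) / PERIOD)
          for i in range(N)]

def dominant_period(x, dt):
    """Return the period of the largest non-zero-frequency DFT component."""
    n = len(x)
    mean = sum(x) / n
    power = []
    for k in range(1, n // 2):
        X = sum((x[j] - mean) * cmath.exp(-2j * math.pi * k * j / n)
                for j in range(n))
        power.append((abs(X), k))
    _, k_max = max(power)
    return n * dt / k_max       # total span divided by cycle count

p = dominant_period(series, DT)
```

A real LH trace would show several such spectral peaks at once, which is the signature of the multi-oscillator composition proposed in the abstract.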

16.
The selective autophagic removal of mitochondria, called mitophagy, is an essential physiological signaling process for clearing damaged mitochondria and thus maintains the functional integrity of mitochondria and cells. Defective mitophagy is implicated in several diseases, placing mitophagy as a target for drug development. The identification of key regulators of mitophagy, as well as chemical modulators of mitophagy, requires sensitive and reliable quantitative approaches. Since mitophagy is a rapidly progressing event and sub-microscopic in nature, live-cell image-based detection tools with high spatial and temporal resolution are preferred over end-stage assays. We describe two approaches for measuring mitophagy in mammalian cells using stable cells expressing EGFP-LC3 – Mito-DsRed to mark the early phase of mitophagy and Mitochondria-EGFP – LAMP1-RFP stable cells for late events of mitophagy. Both assays showed good spatial and temporal resolution in wide-field, confocal and super-resolution microscopy, with high-throughput capability. A limited compound screening allowed us to identify a few new mitophagy inducers. Compared to the current mitophagy tools, mito-Keima or mito-QC, the assay described here determines the direct delivery of mitochondrial components to the lysosome in real-time mode with accurate quantification, provided monoclonal cells expressing a homogeneous level of both probes are established. Since the assay described here employs a real-time imaging approach in a high-throughput mode, the platform can be used for either siRNA screening or compound screening to identify key regulators of mitophagy at decisive stages.

17.
A maximum likelihood (ML)-based approach has been established for the direct extraction of NMR parameters (e.g., frequency, amplitude, phase, and decay rate) simultaneously from all dimensions of a D-dimensional NMR spectrum. The approach, referred to here as HTFD-ML (hybrid time frequency domain maximum likelihood), constructs a time-domain model composed of a sum of exponentially decaying sinusoidal signals. The apodized Fourier transform of this time-domain signal is a model spectrum that represents the best fit to the equivalent frequency-domain data spectrum. The desired amplitude and frequency parameters can be extracted directly from the signal model constructed by the HTFD-ML algorithm. The HTFD-ML approach presented here, as embodied in the software package CHIFIT, is designed to meet the challenges posed by model fitting of D-dimensional NMR data sets, where each consists of many data points (10^8 is not uncommon) encoding information about numerous signals (up to 10^5 for a protein of moderate size) that exhibit spectral overlap. The suitability of the approach is demonstrated by its application to the concerted analysis of a series of ten 2D 1H-15N HSQC experiments measuring 15N T1 relaxation. In addition to demonstrating the practicality of performing maximum likelihood analysis on large, multidimensional NMR spectra, the results demonstrate that this parametric model-fitting approach provides more accurate amplitude and frequency estimates than those obtained from conventional peak-based analysis of the FT spectrum. The improved performance of the model-fitting approach derives from its ability to take into account the simultaneous contributions of all signals in a crowded spectral region (deconvolution) as well as to incorporate prior knowledge in constructing models to fit the data.
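The time-domain model is a sum of exponentially decaying sinusoids. For the degenerate case of a single noiseless complex signal, the model parameters can even be read off a linear fit to log s(t); the sketch below (toy numbers chosen so the phase stays unwrapped, far from realistic NMR frequencies) shows that limiting case, which HTFD-ML generalizes to many overlapping noisy signals:

```python
import cmath
import math

# Single noiseless signal s(t) = A * exp((2*pi*i*F - R) * t).
# F is small enough that the phase stays inside (-pi, pi], so cmath.log
# needs no unwrapping; real spectra would need the full ML machinery.
A, F, R = 1.0, 5.0, 15.0            # amplitude, frequency (Hz), decay (s^-1)
DT, N = 1e-3, 64
t = [n * DT for n in range(N)]
s = [A * cmath.exp((2j * math.pi * F - R) * ti) for ti in t]

# Least-squares slope of log s(t) vs t: imaginary part -> frequency,
# real part -> decay rate.
logs = [cmath.log(v) for v in s]
tm = sum(t) / N
lm = sum(logs) / N
slope = (sum((ti - tm) * (li - lm) for ti, li in zip(t, logs))
         / sum((ti - tm) ** 2 for ti in t))
f_hat = slope.imag / (2 * math.pi)
r_hat = -slope.real
```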

18.
An Shaokun, Ma Liang, Wan Lin. BMC Genomics 2019, 20(2):77-92
Background

Time series single-cell RNA sequencing (scRNA-seq) data are emerging. However, the analysis of time series scRNA-seq data can be compromised by (1) distortion created by assorted sources of data collection and generation across time samples and (2) cell-to-cell variation inherited from stochastic dynamic patterns of gene expression. This calls for the development of an algorithm able to visualize time series scRNA-seq data in order to reveal latent structures and uncover dynamic transition processes.

Results

In this study, we propose an algorithm, termed time series elastic embedding (TSEE), that incorporates experimental temporal information into the elastic embedding (EE) method in order to visualize time series scRNA-seq data. TSEE extends the EE algorithm by penalizing the proximal placement of latent points that correspond to data points otherwise separated by experimental time intervals. TSEE is herein used to visualize time series scRNA-seq datasets of embryonic developmental processes in human and zebrafish. We demonstrate that TSEE outperforms existing methods (e.g. PCA, tSNE and EE) in preserving local and global structures as well as enhancing the temporal resolution of samples. Meanwhile, TSEE reveals the dynamic oscillation patterns of gene expression waves during zebrafish embryogenesis.
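The temporal penalty idea can be caricatured in one dimension: start from a stress-like embedding objective and add a term that penalizes placing points from different time stages close together (the objective form, the weight name `lam` and all numbers are our inventions, not the TSEE formulation):

```python
import math

def penalized_stress(D_high, Y, stages, lam=1.0):
    """Stress between given high-dimensional distances D_high and a 1-D
    embedding Y, plus a penalty for placing points from different
    experimental stages close together (caricature of the TSEE penalty)."""
    n = len(Y)
    stress = sum((D_high[i][j] - abs(Y[i] - Y[j])) ** 2
                 for i in range(n) for j in range(i + 1, n))
    penalty = sum(math.exp(-(Y[i] - Y[j]) ** 2)
                  for i in range(n) for j in range(i + 1, n)
                  if stages[i] != stages[j])
    return stress + lam * penalty

D = [[0, 1, 1, 1],
     [1, 0, 1, 1],
     [1, 1, 0, 1],
     [1, 1, 1, 0]]
Y = [0.0, 0.1, 5.0, 5.1]
# Same geometry, two stage labelings: cross-stage pairs far apart vs close.
obj_far = penalized_stress(D, Y, [0, 0, 1, 1])
obj_near = penalized_stress(D, Y, [0, 1, 0, 1])
```

Minimizing such an objective pushes points from different time stages apart in the latent space, which is the mechanism behind the temporal-resolution enhancement reported above.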

Conclusions

TSEE can efficiently visualize time series scRNA-seq data by diluting the distortions of assorted sources of data variation across time stages, and it enhances temporal resolution by preserving temporal order and structure. TSEE uncovers the subtle dynamic structures of gene expression patterns, facilitating further downstream dynamic modeling and analysis of gene expression processes. The computational framework of TSEE is generalizable, allowing the incorporation of other sources of information.


19.
Currently applied three-compartment models for analyzing kinetic data derived from in vivo positron emission tomographic (PET) studies of radioligand-neuroreceptor interactions require assumptions which may not be strictly valid. Such assumptions include very rapid kinetics for nonspecific binding and the absence of multiple specific receptors or subtypes. Computer simulations, based on an exact analytical solution of the relevant differential equations, indicate the numerical errors that can arise when the assumptions are invalid. We propose a four-compartment model which requires fewer assumptions. A simple relationship is derived for expressing the microscopic rate constants of either the three- or four-compartment model as explicit functions of the experimentally observed macroscopic rate constants. This could eliminate the need for time-consuming, iterative, non-linear curve-fitting approaches and numerical integration. The usefulness of the four-compartment model is limited, however, by the sensitivity and temporal resolution of current PET imaging devices.
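The compartment-model arithmetic behind such analyses is linear: dC/dt = K C, with exact solution C(t) = exp(Kt) C(0). The sketch below (invented rates, truncated-series matrix exponential; not the paper's model or notation) shows a closed three-compartment system conserving total tracer:

```python
# Closed three-compartment system; columns of K sum to zero, so total
# tracer is conserved.  Rates (min^-1) are invented for illustration.
K = [[-0.5,  0.3, 0.0],    # plasma
     [ 0.5, -0.5, 0.1],    # free ligand in tissue
     [ 0.0,  0.2, -0.1]]   # specifically bound

def expm_times_vec(M, t, v, terms=50):
    """C(t) = exp(M t) v via the truncated Taylor series; adequate for
    small, well-scaled matrices like this one."""
    n = len(M)
    out = v[:]
    term = v[:]
    for k in range(1, terms):
        term = [sum(M[i][j] * term[j] for j in range(n)) * t / k
                for i in range(n)]
        out = [o + s for o, s in zip(out, term)]
    return out

C0 = [1.0, 0.0, 0.0]       # all tracer starts in plasma
Ct = expm_times_vec(K, 5.0, C0)
```

The paper's point is the reverse direction: expressing the microscopic entries of K as explicit functions of the macroscopic rate constants observed in the PET time-activity curves, avoiding iterative curve fitting.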

20.
Cell-type-specific gene expression programs are tightly linked to epigenetic modifications on DNA and histone proteins. Here, we used a novel CRISPR-based epigenome editing approach to control gene expression spatially and temporally. We show that targeting the dCas9–p300 complex to distal non-regulatory genomic regions reprograms the chromatin state of these regions into enhancer-like elements. Notably, by controlling the spatial distance of these induced enhancers (i-Enhancers) to the promoter, the gene expression amplitude can be tightly regulated. To better control the temporal persistence of induced gene expression, we integrated auxin-inducible degron technology with CRISPR tools. This approach allows rapid depletion of the dCas9-fused epigenome modifier complex from the target site and enables temporal control over gene expression regulation. Using this tool, we investigated the temporal persistence of a locally edited epigenetic mark and its functional consequences. The tools and approaches presented here will allow novel insights into the mechanism of epigenetic memory and gene regulation from distal regulatory sites.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号