Similar documents
Found 20 similar documents (search time: 23 ms)
1.
In this paper, we compare nonparametric kernel estimates with smoothed histograms as methods for displaying logarithmically transformed dwell-time distributions. Kernel density plots provide a simpler means for producing estimates of the probability density function (pdf) and they have the advantage of being smoothed in a well-specified, carefully controlled manner. Smoothing is essential for multidimensional plots because, with realistic amounts of data, the number of counts per bin is small. Examples are presented for a 2-dimensional pdf and its associated dependency-difference plot that display the correlations between successive dwell times.
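A minimal numerical sketch of the kernel approach described above (the Gaussian kernel, the bandwidth, and the simulated exponential dwell times are our own illustrative choices, not the paper's):

```python
import numpy as np

def log_kde(dwell_times, grid, bandwidth=0.15):
    """Gaussian kernel density estimate of log10(dwell time).

    Smoothing is controlled explicitly by a single bandwidth parameter,
    unlike a histogram, where it depends on bin placement and width.
    """
    x = np.log10(dwell_times)
    # One Gaussian kernel centred on each observation, averaged.
    diffs = (grid[:, None] - x[None, :]) / bandwidth
    dens = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    dens /= len(x) * bandwidth * np.sqrt(2.0 * np.pi)
    return dens

rng = np.random.default_rng(0)
samples = rng.exponential(scale=5.0, size=2000)  # simulated dwell times
grid = np.linspace(-2.0, 2.0, 200)               # log10(time) axis
density = log_kde(samples, grid)

# The estimate should integrate to ~1 over the log10(time) axis.
area = float(density.sum() * (grid[1] - grid[0]))
```

The same smoothing generalizes directly to the 2-D case (successive dwell-time pairs) by using a product of Gaussian kernels.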

2.
Dwell-time histograms are often plotted as part of patch-clamp investigations of ion channel currents. The advantages of plotting these histograms with a logarithmic time axis have been demonstrated previously (J. Physiol. (Lond.) 378:141-174; Pflügers Arch. 410:530-553; Biophys. J. 52:1047-1054). Sigworth and Sine argued that the interpretation of such histograms is simplified if the counts are presented in a manner similar to that of a probability density function. However, when ion channel records are recorded as a discrete time series, the dwell times are quantized. As a result, the mapping of dwell times to logarithmically spaced bins is highly irregular; bins may be empty, and significant irregularities may extend beyond durations of 100 samples. Using simple approximations based on the nature of the binning process and the transformation rules for probability density functions, we develop adjustments to the displayed counts that compensate for this effect. Tests with simulated data suggest that this procedure provides a faithful representation of the data.
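The correction idea can be sketched as follows. Dividing each bin's count by its occupancy (the number of representable integer dwell times that fall in that bin) is one simple form of the adjustment, chosen here for illustration; the paper's actual approximations may differ:

```python
import numpy as np

def corrected_log_histogram(dwell_samples, n_bins=30):
    """Log-binned counts of integer (quantized) dwell times, corrected
    for the irregular number of representable values per bin.

    dwell_samples: dwell times in units of the sampling interval (ints >= 1).
    Returns geometric bin centres (in samples) and corrected densities.
    """
    dwell_samples = np.asarray(dwell_samples)
    lo, hi = 1.0, dwell_samples.max() + 1.0
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
    counts, _ = np.histogram(dwell_samples, bins=edges)
    # Number of integers in [a, b): ceil(b) - ceil(a).  Short bins may
    # hold zero or one integer, which makes the raw counts irregular.
    k = np.ceil(edges[1:]) - np.ceil(edges[:-1])
    with np.errstate(divide="ignore", invalid="ignore"):
        density = np.where(k > 0, counts / np.maximum(k, 1), 0.0)
    centres = np.sqrt(edges[:-1] * edges[1:])
    return centres, density

rng = np.random.default_rng(1)
samp = np.maximum(1, np.round(rng.exponential(20.0, 5000))).astype(int)
centres, dens = corrected_log_histogram(samp)
```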

3.
Patterns that resemble strongly skewed size distributions are frequently observed in ecology. A typical example represents tree size distributions of stem diameters. Empirical tests of ecological theories predicting their parameters have been conducted, but the results are difficult to interpret because the statistical methods that are applied to fit such decaying size distributions vary. In addition, binning of field data as well as measurement errors might potentially bias parameter estimates. Here, we compare three different methods for parameter estimation: the common maximum likelihood estimation (MLE) and two modified types of MLE correcting for binning of observations or random measurement errors. We test whether three typical frequency distributions, namely the power-law, negative exponential, and Weibull distribution, can be precisely identified, and how parameter estimates are biased when observations are additionally either binned or contain measurement error. We show that uncorrected MLE already loses the ability to discern functional form and parameters at relatively small levels of uncertainties. The modified MLE methods that consider such uncertainties (either binning or measurement error) are comparatively much more robust. We conclude that it is important to reduce binning of observations, if possible, and to quantify observation accuracy in empirical studies for fitting strongly skewed size distributions. In general, modified MLE methods that correct binning or measurement errors can be applied to ensure reliable results.

4.
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems.
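A stripped-down sketch of the "start with many log-spaced components, then prune" idea, using an EM update for the areas with time constants held fixed. The paper's actual algorithm also merges closely spaced components and refits after each step; this sketch only prunes:

```python
import numpy as np

def fit_fixed_tau_mixture(x, taus, n_iter=300, min_area=0.01):
    """Fit mixture weights (areas) of exponential components with FIXED,
    log-spaced time constants by EM, then drop negligible components."""
    taus = np.asarray(taus, float)
    w = np.full(len(taus), 1.0 / len(taus))
    # f_k(x) = (1/tau_k) exp(-x/tau_k), precomputed per component
    dens = np.exp(-x[:, None] / taus[None, :]) / taus[None, :]
    for _ in range(n_iter):
        resp = w[None, :] * dens                 # E step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                    # M step: new areas
    keep = w > min_area                          # prune negligible areas
    return taus[keep], w[keep]

rng = np.random.default_rng(3)
# two true components: tau = 1 and tau = 50, areas 0.7 / 0.3
x = np.where(rng.random(8000) < 0.7,
             rng.exponential(1.0, 8000), rng.exponential(50.0, 8000))
grid_taus = np.logspace(-1, 3, 25)   # many initial log-spaced taus
taus_hat, areas_hat = fit_fixed_tau_mixture(x, grid_taus)
```

The surviving components cluster around the two true time constants; a merging step would then collapse each cluster to a single exponential.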

5.
Cardiomyocytes have multiple Ca2+ fluxes of varying duration that work together to optimize function (1,2). Changes in Ca2+ activity in response to extracellular agents are predominantly regulated by the phospholipase Cβ-Gαq pathway localized on the plasma membrane, which is stimulated by agents such as acetylcholine (3,4). We have recently found that plasma membrane protein domains called caveolae (5,6) can entrap activated Gαq (7). This entrapment stabilizes the activated state of Gαq, resulting in prolonged Ca2+ signals in cardiomyocytes and other cell types (8). We uncovered this surprising result by measuring dynamic calcium responses on a fast time scale in living cardiomyocytes. Briefly, cells are loaded with a fluorescent Ca2+ indicator. In our studies, we used Calcium Green (Invitrogen, Inc.), which exhibits an increase in fluorescence emission intensity upon binding of calcium ions. The fluorescence intensity is then recorded using the line-scan mode of a laser scanning confocal microscope. This method allows rapid acquisition of the time course of fluorescence intensity in pixels along a selected line, producing several hundred time traces on the microsecond time scale. These very fast traces are transferred into Excel and then into SigmaPlot for analysis, and are compared to traces obtained for electronic noise, free dye, and other controls. To dissect Ca2+ responses of different flux rates, we performed a histogram analysis that binned pixel intensities with time. Binning allows us to group over 500 scan traces and visualize the compiled results spatially and temporally on a single plot. Thus, the slow Ca2+ waves that are difficult to discern when the scans are overlaid, owing to different peak placement and noise, can be readily seen in the binned histograms. Very fast fluxes on the time scale of the measurement show a narrow distribution of intensities in the very short time bins, whereas longer Ca2+ waves show binned data with a broad distribution over longer time bins. These different time distributions allow us to dissect the timing of Ca2+ fluxes in the cells and to determine their impact on various cellular events.
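The binning step can be sketched as a 2-D histogram of pixel intensity against time, compiled over many line-scan traces. The simulated traces, jittered peak positions, and bin counts below are illustrative stand-ins, not the authors' data:

```python
import numpy as np

def binned_intensity_histogram(traces, t, n_time_bins=20, n_int_bins=30):
    """Compile many line-scan traces into one 2-D histogram of pixel
    intensity vs. time, so shared slow waves emerge from the noise.

    traces: (n_traces, n_samples) array; t: (n_samples,) time stamps.
    """
    tt = np.broadcast_to(t, traces.shape).ravel()
    ii = traces.ravel()
    H, t_edges, i_edges = np.histogram2d(tt, ii,
                                         bins=(n_time_bins, n_int_bins))
    return H, t_edges, i_edges

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 500)
# 500 noisy traces sharing a slow wave whose peak position jitters,
# so a simple overlay of the raw traces looks washed out.
peaks = rng.normal(0.5, 0.08, size=500)[:, None]
traces = (np.exp(-((t[None, :] - peaks) / 0.1) ** 2)
          + rng.normal(0.0, 0.3, (500, 500)))
H, te, ie = binned_intensity_histogram(traces, t)
```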

6.
Individual growth is an important parameter and is linked to a number of other biological processes. It is commonly modeled using the von Bertalanffy growth function (VBGF), which is regularly fitted to age data where the ages of the animals are not known exactly but are binned into yearly age groups, such as fish survey data. Current methods of fitting the VBGF to these data treat all the binned ages as the actual ages. We present a new VBGF model that combines data from multiple surveys and allows the actual age of an animal to be inferred. By fitting to survey data for Atlantic herring (Clupea harengus) and Atlantic cod (Gadus morhua), we compare our model with two other ways of combining data from multiple surveys but where the ages are as reported in the survey data. We use the fitted parameters as inputs into a yield-per-recruit model to see what would happen to advice given to management. We found that each of the ways of combining the data leads to different parameter estimates for the VBGF and advice for policymakers. Our model fitted to the data better than either of the other models and also reduced the uncertainty in the parameter estimates and models used to inform management. Our model is a robust way of fitting the VBGF and can be used to combine data from multiple sources. The model is general enough to fit other growth curves for any taxon when the age of individuals is binned into groups.
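A sketch of the core idea, assuming the standard VBGF parameterisation L(t) = L∞(1 − e^(−k(t − t0))) and treating the true age as uniformly distributed within its yearly bin; this is a simplification of the authors' model, with all names and parameter values our own:

```python
import numpy as np

def vbgf(age, L_inf, k, t0):
    """von Bertalanffy growth function."""
    return L_inf * (1.0 - np.exp(-k * (age - t0)))

def binned_age_loglik(lengths, age_bins, L_inf, k, t0, sigma):
    """Average the Gaussian length likelihood over the unknown true age
    within its yearly bin (simple quadrature), instead of pretending the
    binned age is exact."""
    a_grid = np.linspace(0.0, 1.0, 11)           # within-bin age offsets
    ages = age_bins[:, None] + a_grid[None, :]
    mu = vbgf(ages, L_inf, k, t0)
    resid = lengths[:, None] - mu
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.log(dens.mean(axis=1)).sum())

rng = np.random.default_rng(5)
true_age = rng.uniform(1.0, 10.0, 1000)
length = vbgf(true_age, 100.0, 0.3, -0.5) + rng.normal(0.0, 3.0, 1000)
age_bin = np.floor(true_age)                     # survey-style yearly bins
ll_true = binned_age_loglik(length, age_bin, 100.0, 0.3, -0.5, 3.0)
ll_bad = binned_age_loglik(length, age_bin, 80.0, 0.5, -0.5, 3.0)
```

Maximising this likelihood over (L∞, k, t0) gives binning-aware parameter estimates; combining surveys amounts to summing such log-likelihoods.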

7.
Three methods of standardizing magnitude estimation data (external calibration, modulus normalization, and equalization) were examined using a sensory evaluation data set arising from an incomplete block experiment testing five gels of varying firmness. Both the original data and a logarithmic transformation of the data were analyzed. Instrumental data were also collected. When untransformed data were analyzed, the method of standardization profoundly affected tests of significance, coefficients of variation (%CV), and estimation of the power function relating the sensory data to the concentration of the underlying gel. The logarithmically transformed data led to results independent of the standardizing technique, with higher F-ratios, lower %CVs, and normally distributed errors.
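Why the log transform makes the standardization choice irrelevant can be seen with one common scheme: dividing each assessor's ratings by that assessor's geometric mean (our illustrative stand-in for modulus normalization, not necessarily the paper's exact procedure). On a log scale this is just per-assessor mean-centring, an additive shift that ANOVA absorbs:

```python
import numpy as np

def geometric_mean_normalize(ratings):
    """Rescale each assessor's magnitude estimates by that assessor's
    geometric mean, cancelling idiosyncratic modulus choices.
    ratings: (n_assessors, n_samples) array of positive values."""
    gm = np.exp(np.log(ratings).mean(axis=1, keepdims=True))
    return ratings / gm

rng = np.random.default_rng(6)
true_intensity = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
moduli = rng.uniform(0.5, 5.0, size=(8, 1))      # idiosyncratic scales
ratings = moduli * true_intensity[None, :] * rng.lognormal(0.0, 0.1, (8, 5))
norm = geometric_mean_normalize(ratings)
```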

8.
Palo K, Mets U, Loorits V, Kask P. Biophysical Journal, 2006, 90(6):2179-2191
Fitting of photon-count number histograms is a method for analyzing fluorescence intensity fluctuations and a successor to fluorescence correlation spectroscopy. The first versions of the theory for calculating photon-count number distributions assumed constant emission intensity by a molecule during a counting time interval. For a long time a question has remained unanswered: to what extent is this assumption violated in experiments? Here we present a theory of photon-count number distributions that takes account of intensity fluctuations during a counting time interval. Theoretical count-number distributions are calculated via a numerical solution of master equations (ME), a set of differential equations describing diffusion, singlet-triplet transitions, and photon emission. Detector afterpulsing and dead-time corrections are also included. The ME theory is tested by fitting a series of photon-count number histograms corresponding to different lengths of the counting time interval. Compared to the first version of the fluorescence intensity multiple distribution analysis theory introduced in 2000, the fit quality is significantly improved. We discuss how a theory of photon-count number distributions that assumes constant emission intensity during a counting time interval may nonetheless yield a good fit quality. We argue that the spatial brightness distribution used in calculating the fit curve is not the true spatial brightness distribution; instead, a number of dynamic processes that cause fluorescence intensity fluctuations are indirectly taken into account via the profile adjustment parameters.

9.
Neuronal synchronization is often associated with small time delays, and these delays can change as a function of stimulus properties. Investigation of time delays can be cumbersome if the activity of a large number of neurons is recorded simultaneously and neuronal synchronization is measured in a pairwise manner (such as the cross-correlation histograms) because the number of pairwise measurements increases quadratically. Here, a non-parametric statistical test is proposed with which one can investigate (i) the consistency of the delays across a large number of pairwise measurements and (ii) the consistency of the changes in the time delays as a function of experimental conditions. The test can be classified as non-parametric because it takes into account only the directions of the delays and thus, does not make assumptions about the distributions and the variances of the measurement errors.
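A minimal version of such a direction-only test is the binomial sign test: under the null of no consistent delay, positive and negative signs are equally likely, with no assumption about error distributions. The implementation below is our illustrative sketch, not the authors' exact procedure:

```python
import numpy as np
from math import comb

def sign_test_p(delays):
    """Two-sided binomial sign test on delay directions only.
    Zero delays are discarded; returns the exact two-sided p-value."""
    d = np.asarray(delays)
    d = d[d != 0]
    n, k = len(d), int((d > 0).sum())
    k = max(k, n - k)                 # count in the larger direction
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)

rng = np.random.default_rng(7)
consistent = rng.normal(0.5, 1.0, 200)    # delays biased positive
inconsistent = rng.normal(0.0, 1.0, 200)  # no consistent direction
p_consistent = sign_test_p(consistent)
p_inconsistent = sign_test_p(inconsistent)
```

Because only signs enter, the same test applies unchanged to the consistency of delay *changes* across experimental conditions.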

10.
Ribonucleic acid (RNA) molecules play important roles in a variety of biological processes. To properly function, RNA molecules usually have to fold to specific structures, and therefore understanding RNA structure is vital in comprehending how RNA functions. One approach to understanding and predicting biomolecular structure is to use knowledge-based potentials built from experimentally determined structures. These types of potentials have been shown to be effective for predicting both protein and RNA structures, but their utility is limited by their significantly rugged nature. This ruggedness (and hence the potential's usefulness) depends heavily on the choice of bin width used to sort structural information (e.g. distances), but the appropriate bin width is not known a priori. To circumvent the binning problem, we compared knowledge-based potentials built from inter-atomic distances in RNA structures using different mixture models (Kernel Density Estimation, Expectation Maximization, and Dirichlet Process). We show that the smooth knowledge-based potential built from the Dirichlet process is successful in selecting native-like RNA models from different sets of structural decoys, with efficacy comparable to a potential developed by spline-fitting to binned distance histograms, a commonly taken approach. The less rugged nature of our potential suggests its applicability in diverse types of structural modeling.
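The contrast between binned and smooth potentials can be sketched by turning a kernel-density estimate of a distance distribution into a potential U(r) = −log p(r). KDE is shown here only because it is the simplest of the three mixture models the abstract names; the data and parameters are made up for illustration:

```python
import numpy as np

def kde_potential(observed_distances, grid, bandwidth=0.3):
    """Knowledge-based potential U(r) = -log p(r), with p(r) estimated
    by a Gaussian kernel density instead of binned counts, avoiding the
    arbitrary bin-width choice and the resulting ruggedness."""
    diffs = (grid[:, None] - observed_distances[None, :]) / bandwidth
    p = np.exp(-0.5 * diffs ** 2).sum(axis=1)
    p /= len(observed_distances) * bandwidth * np.sqrt(2.0 * np.pi)
    return -np.log(np.clip(p, 1e-12, None))

rng = np.random.default_rng(8)
dists = rng.normal(5.0, 0.8, 3000)   # stand-in for database distances
grid = np.linspace(2.0, 8.0, 121)
U = kde_potential(dists, grid)       # smooth; minimum at the common distance
```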

11.
Summary Theoretical flow karyotypes from both plant and mammalian species have been simply modelled using computer spreadsheet software. The models are based upon published values of relative DNA content or relative lengths of each of the chromosomes. From such data, the histograms of chromosome distribution have been simulated for both linear and logarithmic modes of a flow cytometer, and as a function of the coefficient of variation. Simulated and experimental histograms are compared for Nicotiana plumbaginifolia. This readily accessible exercise facilitates the planning and execution of flow cytometric analysis and sorting of chromosomes.
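The spreadsheet model translates directly into code: one Gaussian peak per chromosome, centred at its relative DNA content, with width set by the instrument's coefficient of variation. The relative DNA values below are invented for illustration, not taken from any published karyotype:

```python
import numpy as np

def simulate_flow_karyotype(rel_dna, cv, channels=np.linspace(0.0, 2.0, 256)):
    """Model a flow karyotype as a sum of Gaussian peaks, one per
    chromosome, centred at its relative DNA content with standard
    deviation sd = cv * mean (the instrument's coefficient of variation).
    Returns a normalized channel histogram."""
    hist = np.zeros_like(channels)
    for c in rel_dna:
        sd = cv * c
        hist += np.exp(-0.5 * ((channels - c) / sd) ** 2) / sd
    return hist / hist.sum()

rel_dna = np.array([0.30, 0.45, 0.55, 0.80, 1.00])  # illustrative values
sharp = simulate_flow_karyotype(rel_dna, cv=0.02)   # peaks well resolved
blurred = simulate_flow_karyotype(rel_dna, cv=0.10) # neighbouring peaks merge
```

Varying `cv` shows directly how much instrument resolution is needed before the 0.45 and 0.55 chromosomes can be sorted apart.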

12.
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state; open channel noise and fast flickers to other states were present, as were a substantial number of subconductance states.
"Standard" half-amplitude threshold analysis of these data produced open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data, shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.

13.
1. Lévy flights are specialized random walks with fundamental properties such as superdiffusivity and scale invariance that have recently been applied in optimal foraging theory. Lévy flights have movement lengths chosen from a probability distribution with a power-law tail, which theoretically increases the chances of a forager encountering new prey patches and may represent an optimal solution for foraging across complex, natural habitats. 2. An increasing number of studies are detecting Lévy behaviour in diverse organisms such as microbes, insects, birds, and mammals including humans. A principal method for detecting Lévy flight is whether the exponent (μ) of the power-law distribution of movement lengths falls within the range 1 < μ ≤ 3. The exponent can be determined from the histogram of frequency vs. movement (step) lengths, but different plotting methods have been used to derive the Lévy exponent across different studies. 3. Here we investigate using simulations how different plotting methods influence the μ-value and show that the power-law plotting method based on 2^k (logarithmic) binning with normalization prior to log transformation of both axes yields low error (1.4%) in identifying Lévy flights. Furthermore, increasing sample size reduced variation about the recovered values of μ, for example by 83% as sample number increased from n = 50 up to 5000. 4. Simple log transformation of the axes of the histogram of frequency vs. step length underestimated μ by c. 40%, whereas two other methods, 2^k (logarithmic) binning without normalization and calculation of a cumulative distribution function for the data, both estimate the regression slope as 1 - μ. Correction of the slope therefore yields an accurate Lévy exponent with estimation errors of 1.4 and 4.5%, respectively. 5. Empirical reanalysis of data in published studies indicates that simple log transformation results in significant errors in estimating μ, which in turn affects the reliability of the biological interpretation. The potential for detecting Lévy flight motion when it is not present is minimized by the approach described. We also show that using a large number of steps in movement analysis such as this will also increase the accuracy with which optimal Lévy flight behaviour can be detected.
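The recommended plotting method, 2^k logarithmic binning with normalization by bin width before log-log regression, can be sketched and checked against synthetic power-law steps. The function names and the inverse-CDF sampler are ours; without the width normalization the fitted slope would estimate 1 − μ instead of −μ, as the abstract notes:

```python
import numpy as np

def levy_exponent_logbin(steps, n_bins=12):
    """Estimate the power-law exponent mu of step lengths using 2^k
    (logarithmic) binning with normalization: counts are divided by bin
    width before the log-log regression, so the slope estimates -mu."""
    edges = np.power(2.0, np.arange(n_bins + 1)) * steps.min()
    counts, _ = np.histogram(steps, bins=edges)
    widths = np.diff(edges)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric bin centres
    keep = counts > 0
    slope, _ = np.polyfit(np.log10(centres[keep]),
                          np.log10(counts[keep] / widths[keep]), 1)
    return -slope

rng = np.random.default_rng(10)
mu_true = 2.0
# Pareto sampling: p(l) ~ l^-mu for l >= 1, by inverse-CDF
steps = (1.0 - rng.random(100000)) ** (-1.0 / (mu_true - 1.0))
levy_hat = levy_exponent_logbin(steps)
```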

14.
MOTIVATION: A number of community profiling approaches have been widely used to study microbial community composition and its variations in environmental ecology. Automated Ribosomal Intergenic Spacer Analysis (ARISA) is one such technique. ARISA has been used to study microbial communities using 16S-23S rRNA intergenic spacer length heterogeneity at different times and places. Owing to errors in sampling, random mutations in PCR amplification, and, probably mostly, variations in readings from the equipment used to analyze fragment sizes, the data read directly from the fragment analyzer should not be used for downstream statistical analysis. No optimal data preprocessing methods are available. A commonly used approach is to bin the reading lengths of the 16S-23S intergenic spacer. We have developed a dynamic programming based binning method for ARISA data analysis which minimizes the overall differences between replicates from the same sampling location and time. RESULTS: In a test example from an ocean time series sampling program, data preprocessing identified several outliers which upon re-examination were found to be due to systematic errors. Clustering analysis of the ARISA data from different times, based on the dynamic programming binned data, revealed important features of the biodiversity of the microbial communities.

15.
16.
Probability density function and distribution parameters of interspike intervals of somatosensory cortical neurons
We establish a normalized interspike-interval (ISI) histogram for estimating the ISI probability density function, together with a distribution-parameter fitting method, and apply them in a statistical analysis of the spontaneous and evoked firing activity of 34 somatosensory cortical neurons in the cat.

17.
We describe a FORTRAN computer program for fitting the logistic distribution function (formula: see text), where x represents dose or time, to dose-response data. The program determines both weighted least squares and maximum likelihood estimates for the parameters alpha and beta. It also calculates the standard errors of alpha and beta under both estimation methods, as well as the median lethal dose (LD50) and its standard error. Dose-response curves found by both fitting methods can be plotted, as well as the 95% confidence bands for these lines.
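Since the abstract's formula is elided, we assume the standard parameterisation P(x) = 1/(1 + exp(−(α + βx))), for which LD50 = −α/β; the crude grid-search maximum-likelihood fit below is our illustrative stand-in for the program's routines:

```python
import numpy as np

def logistic(x, alpha, beta):
    """Assumed logistic dose-response form: P(x) = 1/(1+exp(-(alpha+beta*x)))."""
    return 1.0 / (1.0 + np.exp(-(alpha + beta * x)))

def fit_logistic_ml(dose, n_total, n_dead):
    """Crude maximum-likelihood fit by grid search over (alpha, beta);
    returns alpha, beta, and LD50 = -alpha/beta."""
    alphas = np.linspace(-10.0, 10.0, 201)
    betas = np.linspace(0.1, 5.0, 100)
    best, best_ll = (0.0, 1.0), -np.inf
    for a in alphas:
        p = logistic(dose, a, betas[:, None])        # vectorised over beta
        p = np.clip(p, 1e-9, 1 - 1e-9)
        ll = (n_dead * np.log(p)
              + (n_total - n_dead) * np.log(1.0 - p)).sum(axis=1)
        j = int(np.argmax(ll))
        if ll[j] > best_ll:
            best_ll, best = ll[j], (a, betas[j])
    alpha_hat, beta_hat = best
    return alpha_hat, beta_hat, -alpha_hat / beta_hat

dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n_total = np.full(5, 40)
n_dead = np.array([2, 8, 20, 32, 38])   # synthetic response counts
a_hat, b_hat, ld50 = fit_logistic_ml(dose, n_total, n_dead)
```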

18.
Visualization tools that allow both optimization of instrument parameters for data acquisition and specific quality control (QC) for a given sample prior to time-consuming database searches have been scarce until recently and are currently still not freely available. To address this need, we have developed the visualization tool LogViewer, which uses diagnostic data from the RAW files of the Thermo Orbitrap and linear trap quadrupole-Fourier transform (LTQ-FT) mass spectrometers to monitor relevant metrics. To summarize and visualize the performance on our test samples, log files from RawXtract are imported and displayed. LogViewer is a visualization tool that allows specific and fast QC of a given sample without time-consuming database searches. QC metrics displayed include: mass spectrometry (MS) ion-injection time histograms, MS ion-injection time versus retention time, MS2 ion-injection time histograms, MS2 ion-injection time versus retention time, dependent scan histograms, charge-state histograms, mass-to-charge ratio (M/Z) distributions, M/Z histograms, mass histograms, mass distribution, summary, repeat analyses, Raw MS, and Raw MS2. Systematically optimizing all metrics allowed us to increase our protein identification rates from 600 proteins to routinely determining up to 1400 proteins in any 160-min analysis of a complex mixture (e.g., yeast lysate) at a false discovery rate of <1%. Visualization tools such as LogViewer make QC of complex liquid chromatography (LC)-MS and LC-MS/MS data and optimization of the instrument's parameters accessible to users.

19.
Vegetatively growing amoebae, if shaken in a starvation (nonnutrient) buffer, acquire aggregation competence but do not embark on a morphogenetic program. The quantitative variation of ribosomal proteins in vegetative and aggregation-competent cells was compared by labeling the different cell types with [35S]methionine. Vegetative cells were examined at various phases of the growth cycle. No changes could be detected in the content of ribosomes or the apparent stoichiometry of ribosomal proteins in growing cells. In stationary phase cells, the net ribosome content declined to 15% of that observed in logarithmic phase, but the relative amounts of individual ribosomal proteins were not altered. Although aggregation-competent cells contained 30% fewer ribosomes than logarithmic phase cells, the total fraction of newly made ribosomal proteins was the same in both. In contrast to vegetative cells, distinct changes were induced in the ribosomal proteins of aggregation-competent cells. The composition of ribosomes in the aggregation-competent phase resembled in every respect that observed in spore cells. As reported earlier, changes were found in all 12 of the developmentally regulated ribosomal proteins. For the majority of ribosomal proteins newly made during aggregation competence, the stoichiometry was similar to that in logarithmically growing cells. However, the relative synthesis of some was particularly higher (13- to 46-fold for A and L; 3- to 8-fold for D, E, S24, L3, S6, and L4) compared with logarithmic phase cells. About 18 proteins, which included the cell-specific ribosomal proteins L18, S10, S14, S16, and L11, were synthesized in lesser amounts than in logarithmic phase cells. (ABSTRACT TRUNCATED AT 250 WORDS)

20.
A computer simulation routine has been made to calculate the DNA distributions of exponentially growing cultures of Escherichia coli. Calculations were based on a previously published model (S. Cooper and C.E. Helmstetter, J. Mol. Biol. 31:519-540, 1968). Simulated distributions were compared with experimental DNA distributions (histograms) recorded by flow cytometry. Cell cycle parameters were determined by varying the parameters to find the best fit of theoretical to experimental histograms. A culture of E. coli B/r A with a doubling time of 27 min was found to have a DNA replication period (C) of 43 min and an average postreplication period (D) of 22 to 23 min. Similar cell cycle parameters were found for a 60-min B/r A culture. Initiations of DNA replication at multiple origins in one and the same cell were shown to be essentially synchronous. A slowly growing B/r A culture (doubling time, 5.5 h) had an average prereplication period (B) of 2.3 h; C = 2.4 h and D = 0.8 h. It was concluded that the C period has a constant duration of 43 min (at 37 degrees C) at fast growth rates (doubling times less than 1 h) but increases at slow growth rates. Thus, our results obtained with unperturbed exponential cultures in steady state support the model of Cooper and Helmstetter, which was based on data obtained with synchronized cells.
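For the slow-growth case (B + C + D = doubling time), the Cooper-Helmstetter calculation can be sketched by sampling the steady-state age distribution and mapping age to genome content. The parameters echo the 5.5-h culture above (B = 2.3 h, C = 2.4 h); the function names and bin choices are ours, and the convolution with flow-cytometer measurement noise is omitted for brevity:

```python
import numpy as np

def dna_histogram_slow_growth(tau, B, C, n_cells=200000, seed=11):
    """Simulated DNA histogram for a steady-state exponential culture in
    the slow-growth case: cell age has density
        f(a) = (2 ln2 / tau) * 2^(-a / tau),   0 <= a <= tau,
    and DNA rises linearly from 1 to 2 genome equivalents during the C
    period that follows the B period."""
    rng = np.random.default_rng(seed)
    # inverse-CDF sample of the steady-state age distribution:
    # F(a) = 2 * (1 - 2^(-a/tau))  =>  a = -tau * log2(1 - u/2)
    u = rng.random(n_cells)
    age = -tau * np.log2(1.0 - u / 2.0)
    dna = np.clip(1.0 + (age - B) / C, 1.0, 2.0)
    counts, edges = np.histogram(dna, bins=50, range=(0.9, 2.1))
    return age, dna, counts

tau, B, C = 5.5, 2.3, 2.4      # hours, as in the slow-growing culture
age, dna, counts = dna_histogram_slow_growth(tau, B, C)
```

The resulting histogram has the characteristic two spikes (cells with exactly one or two genome equivalents, i.e. ages in B or D) joined by a ramp of replicating cells; fitting such simulated histograms to measured ones yields B, C, and D.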


Copyright©北京勤云科技发展有限公司  京ICP备09084417号