Similar Articles
20 similar articles found (search time: 15 ms)
1.
A numerical method for deriving the fractions of cells in different phases of the cell cycle from a single observed DNA histogram is presented. The observed histogram is regarded as a polluted version (containing allocation errors) of the true histogram. A mathematical model is used to describe the pollution process. A theoretical histogram, representing the true histogram, is constructed so that G1 cells are put into one channel and G2M cells into another; the distribution of S cells in between is approximated with a set of harmonic functions. This theoretical histogram is subsequently disturbed with Gaussian dispersion functions to simulate the pollution, yielding a predicted histogram. Using a maximum likelihood estimation technique, the model parameters are adjusted iteratively, matching the predicted histogram to the actually observed one. With the final parameter values substituted, the corresponding final theoretical histogram is regarded as a reliable reconstruction of the true histogram. From the latter, the required percentages can be read directly. The advantage of this approach over other mathematical analysis methods is that it allows a wide range of different, continuous distributions for relatively few model parameters (thus featuring flexibility, realism, and a diminished risk of encountering computational problems). In addition, estimation errors providing a measure of accuracy can be obtained. To test the method, it was used to analyze various observed histograms from the literature that had been obtained by either simulation or actual flow cytometric measurements. The method appeared to perform well compared to the reported results of several other methods of analysis applied to the same data.
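A minimal sketch of the forward model the abstract describes (the function name, channel positions, fractions, and the constant-CV Gaussian dispersion are illustrative assumptions; the iterative maximum likelihood fitting loop is omitted):

```python
import math

def predicted_histogram(n_ch, g1_ch, g2m_ch, f_g1, f_g2m, s_coeffs, cv=0.05):
    # True histogram: all G1 cells in one channel, all G2M cells in another.
    true_h = [0.0] * n_ch
    true_h[g1_ch] = f_g1
    true_h[g2m_ch] = f_g2m
    # S-phase cells between the peaks, shaped by a small harmonic series.
    f_s = 1.0 - f_g1 - f_g2m
    shape = []
    for ch in range(g1_ch + 1, g2m_ch):
        u = (ch - g1_ch) / (g2m_ch - g1_ch)
        s = 1.0 + sum(a * math.cos(math.pi * k * u)
                      for k, a in enumerate(s_coeffs, start=1))
        shape.append(max(s, 0.0))
    total = sum(shape)
    for i, ch in enumerate(range(g1_ch + 1, g2m_ch)):
        true_h[ch] = f_s * shape[i] / total
    # Gaussian dispersion with constant CV simulates the allocation errors.
    pred = [0.0] * n_ch
    for ch, w in enumerate(true_h):
        if w <= 0.0:
            continue
        sigma = max(cv * ch, 1e-6)
        g = [math.exp(-0.5 * ((x - ch) / sigma) ** 2) for x in range(n_ch)]
        gs = sum(g)
        for x in range(n_ch):
            pred[x] += w * g[x] / gs
    norm = sum(pred)
    return [p / norm for p in pred]

h = predicted_histogram(128, 30, 60, 0.55, 0.15, [0.3, -0.1])
```

In a full implementation, the fractions and harmonic coefficients would be adjusted iteratively to maximize the likelihood of the observed counts under this predicted histogram.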

2.
The ability of four different mathematical models of the DNA histogram to give accurate estimates for the fractions of cells in G1, S, and G2 + M has been investigated. The models studied differ in the form and number of parameters of the function used to represent cells in S-phase. Results obtained from simulated DNA histograms suggest that the standard deviations of the model parameters increase exponentially with the width of the G1 and G2 + M peaks of the histogram. Error analysis is presented as a method to select a model of optimal complexity in relation to the resolution provided by the data in a given set of DNA histograms. Introduction of additional parameters improves the agreement between model and data but may result in a less well-posed model. A model with an optimal number of parameters can therefore be found that will yield parameter estimates with the smallest possible standard deviations.

3.
The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significant differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study.
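The two agreement measures named above can be sketched in a few lines of plain Python (the rating and measurement values below are invented for illustration):

```python
import statistics

def cohens_kappa(a, b):
    """Categorical agreement between two raters on present/absent calls,
    corrected for agreement expected by chance."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = (sum(a) / n) * (sum(b) / n) + (1 - sum(a) / n) * (1 - sum(b) / n)
    return (p_obs - p_exp) / (1 - p_exp)

def bland_altman(x, y):
    """Quantitative agreement: mean difference (bias) and 95% limits
    of agreement between two methods measuring the same quantity."""
    diffs = [xi - yi for xi, yi in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical present/absent calls by two observers on eight trials.
kappa = cohens_kappa([1, 1, 0, 1, 0, 0, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0])
# Hypothetical amplitude estimates (same trials, two methods).
bias, lo, hi = bland_altman([10.2, 11.0, 9.8, 10.5], [10.0, 11.4, 9.5, 10.6])
```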

4.
Melting temperatures, T(m), were systematically studied for a set of 92 DNA duplex oligomers in a variety of sodium ion concentrations ranging from 69 mM to 1.02 M. The relationship between T(m) and ln [Na(+)] was nonlinear over this range of sodium ion concentrations, and the observed melting temperatures were poorly predicted by existing algorithms. A new empirical relationship was derived from UV melting data that employs a quadratic function, which better models the melting temperatures of DNA duplex oligomers as sodium ion concentration is varied. Statistical analysis shows that this improved salt correction is significantly more accurate than previously suggested algorithms and predicts salt-corrected melting temperatures with an average error of only 1.6 degrees C when tested against an independent validation set of T(m) measurements obtained from the literature. Differential scanning calorimetry studies demonstrate that this T(m) salt correction is insensitive to DNA concentration. The T(m) salt correction function was found to be sequence-dependent and varied with the fraction of G·C base pairs, in agreement with previous studies of genomic and polymeric DNAs. The salt correction function is independent of oligomer length, suggesting that end-fraying and other end effects have little influence on the amount of sodium counterions released during duplex melting. The results are discussed in the context of counterion condensation theory.
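A sketch of a quadratic ln[Na+] salt correction of the kind the abstract describes, applied to 1/T(m) in kelvin with a GC-dependent linear term. The coefficient values below are the ones commonly quoted for this type of correction; treat them and the input T(m) as illustrative, not authoritative:

```python
import math

def tm_salt_corrected(tm_1m_celsius, f_gc, na_molar):
    """Correct a duplex melting temperature measured at 1 M Na+ to another
    sodium concentration, using a quadratic function of ln[Na+] whose
    linear term depends on the G-C fraction of the duplex."""
    tm_1m_k = tm_1m_celsius + 273.15
    ln_na = math.log(na_molar)
    inv_tm = (1.0 / tm_1m_k
              + (4.29 * f_gc - 3.95) * 1e-5 * ln_na
              + 9.40e-6 * ln_na ** 2)
    return 1.0 / inv_tm - 273.15

# Hypothetical 50% GC duplex melting at 70 C in 1 M Na+,
# evaluated at the extremes of the concentration range studied.
tm_low = tm_salt_corrected(70.0, 0.5, 0.069)   # 69 mM Na+
tm_high = tm_salt_corrected(70.0, 0.5, 1.02)   # 1.02 M Na+
```

Lowering the salt concentration lowers the predicted melting temperature, and near 1 M the correction is essentially zero, as expected for a correction anchored at the 1 M reference state.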

5.
The quality of fit of sedimentation velocity data is critical to judge the veracity of the sedimentation model and accuracy of the derived macromolecular parameters. Absolute statistical measures are usually complicated by the presence of characteristic systematic errors and run-to-run variation in the stochastic noise of data acquisition. We present a new graphical approach to visualize systematic deviations between data and model in the form of a histogram of residuals. In comparison with the ideally expected Gaussian distribution, it can provide a robust measure of fit quality and be used to flag poor models.
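A minimal version of such a residual-histogram diagnostic (pure Python; the bin count and the simulated residuals are arbitrary choices): bin the residuals and compare each observed count with the count expected from a Gaussian of the same mean and standard deviation.

```python
import math
import random

def residual_histogram(residuals, n_bins=20):
    """Bin residuals and compute, for each bin, the count expected from a
    Gaussian with the sample mean and standard deviation; large
    observed-vs-expected discrepancies flag systematic misfit."""
    n = len(residuals)
    mu = sum(residuals) / n
    sd = math.sqrt(sum((r - mu) ** 2 for r in residuals) / (n - 1))
    lo, hi = min(residuals), max(residuals)
    width = (hi - lo) / n_bins
    observed = [0] * n_bins
    for r in residuals:
        i = min(int((r - lo) / width), n_bins - 1)
        observed[i] += 1
    def cdf(x):  # Gaussian CDF via the error function
        return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))
    expected = [n * (cdf(lo + (i + 1) * width) - cdf(lo + i * width))
                for i in range(n_bins)]
    return observed, expected

# Well-behaved residuals (pure noise) should track the Gaussian closely.
random.seed(1)
obs, exp = residual_histogram([random.gauss(0, 0.01) for _ in range(5000)])
```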

6.
A stable propidium iodide staining procedure for flow cytometry
A propidium iodide (PI) staining procedure is described in which 50 micrograms/ml PI in 10(-2) M Tris, pH 7.0, with 5 mM MgCl2 is used to stain murine erythroleukemia cells (MELC) grown in suspension culture as well as single cell suspensions derived from rat kidney adenocarcinoma and human prostatic carcinoma. Specificity of staining of nuclear DNA is achieved by enzymatic removal of RNA using RNAse in the staining solution. Virtually identical histograms, with the same G1 peak height and closely similar coefficients of variation (CVs), are obtained using a wide range of RNAse concentrations on replicate samples of MELC if the incubation times are sufficiently prolonged when employing the lower enzyme concentrations. For 1 mg/ml RNAse on logarithmically growing MELC, 30 min incubation at 37 degrees C is needed to obtain a maximum G1 peak height and optimal CV and there is no significant change in the histogram if the incubation is prolonged to 4 hr. For every 4-fold decrease in RNAse concentration, the incubation time at 37 degrees C must be doubled to obtain the same maximal G1 peak height and optimal CV. Unfixed cell preparations, whether derived from suspension or monolayer cultures or from solid tumors, are stable for 2 or more weeks if stored at 4 degrees C between flow cytometric analyses and histograms are usually only minimally altered if the stained cell samples are stored for 1-2 months at 4 degrees C. Sample decay is associated with bacterial contamination. If sterile preparative techniques are used initially, subsequent contamination of the stained preparations may be minimized by adding sodium azide to the stained samples at 0.1% without influencing fluorescence intensity. Glycerine may be added to 10% and the samples slowly frozen for storage without altering DNA histogram shapes. 
The simplicity of sample preparation and the stability of the resulting stained cell samples make this procedure suitable for repetitive comparative sampling of tissue and cell populations over prolonged time spans.
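The reported concentration-time trade-off (each 4-fold dilution of RNAse doubles the required incubation at 37 C) can be captured in a one-line rule; the function name and the idea of expressing it as a formula are ours, not the paper's:

```python
import math

def rnase_incubation_minutes(conc_mg_per_ml, ref_conc=1.0, ref_minutes=30.0):
    """Incubation time at 37 C needed for a given RNAse concentration,
    from the reported rule: 1 mg/ml needs 30 min, and each 4-fold
    dilution doubles the time (time scales as 1/sqrt(concentration))."""
    fold_dilution = ref_conc / conc_mg_per_ml
    doublings = math.log(fold_dilution, 4)
    return ref_minutes * 2 ** doublings

t1 = rnase_incubation_minutes(1.0)     # reference: 30 min
t2 = rnase_incubation_minutes(0.25)    # one 4-fold dilution: 60 min
t3 = rnase_incubation_minutes(0.0625)  # two 4-fold dilutions: 120 min
```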

7.
Fluorescence intensity calibration was evaluated in a model system for flow cytometers using commercially available fluorescein-labeled microbeads as internal standards and stabilized fluoresceinated thymus cell nuclei (Fluorotrol) as surrogates for stained mononuclear cells. Spectrophotometrically determined calibration values for the microbeads were used to generate a standard curve that converted green fluorescence histogram channels into molecular equivalents of soluble fluorescein (MESF). In 19 analyses repeated during a single run, the coefficients of variation (CVs) for the derived MESF values on both dimly and brightly stained Fluorotrol populations were less than 2%. In 26 separate determinations over 14 weeks, the CVs of the derived MESF values were less than 3%. The MESF values of the dim and bright Fluorotrol populations derived from the microbead standard curves were both about 50% lower than those determined by direct spectrophotometric analysis of Fluorotrol. The analytical imprecision of fluorescence intensity measurements in this idealized model system has a CV less than 3%, and the analytical inaccuracy shows that calibration in MESF units remains uncertain over about a two-fold range.
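A sketch of the calibration step: fit a standard curve from bead channel/MESF pairs, then convert sample channels to MESF values. The log-linear form (appropriate for a logarithmic amplifier) and the bead values are illustrative assumptions, not the paper's actual calibration data:

```python
import math

def mesf_standard_curve(channels, mesf_values):
    """Fit log10(MESF) as a linear function of histogram channel using
    calibrated beads, and return a converter from channel to MESF.
    Assumes log-amplified data, so log(intensity) is linear in channel."""
    n = len(channels)
    ys = [math.log10(m) for m in mesf_values]
    mx = sum(channels) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(channels, ys))
             / sum((x - mx) ** 2 for x in channels))
    intercept = my - slope * mx
    return lambda ch: 10 ** (intercept + slope * ch)

# Hypothetical 4-bead calibration set (channel, MESF pairs).
to_mesf = mesf_standard_curve([50, 100, 150, 200],
                              [1e3, 1e4, 1e5, 1e6])
dim = to_mesf(75)  # convert a dimly stained population's peak channel
```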

8.
In the fluorescent-flow cytophotometric measurement of cellular DNA content the DNA distributions usually have two peaks. The second peak, which corresponds to the 4C DNA content of G2 and M cells, is often positioned at lower values of DNA content than twice that of the 2C DNA peak which contains G1 cells. Computerized numerical analyses were performed on artificial DNA distributions in which the proportion of S-phase cells was varied. It was demonstrated that the contribution of late S-phase cells to the 4C DNA peak in the histogram shifts the second peak to a position below twice the 2C DNA value. Also, increasing the coefficient of variation of the DNA measurement shifts the second peak position to lower values. A group of 33 DNA distribution histograms was found to have an average G2/G1 peak position ratio of 1.90, in keeping with typical values obtained from the numerical analysis of the artificial populations.
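A small simulation in the spirit of the analysis described (all fractions, channel positions, and the CV are invented): build an artificial DNA histogram with G1 at one channel, G2+M at exactly twice that channel, and S-phase cells uniform in between, then read off the apparent peak-position ratio.

```python
import math

def peak_ratio(g1_ch=50, f_g1=0.45, f_s=0.4, f_g2m=0.15, cv=0.05, n_ch=160):
    """Artificial DNA histogram: Gaussian peaks (constant CV) for G1 at
    g1_ch and G2+M at 2*g1_ch, S-phase spread uniformly in between.
    Returns the ratio of the two detected peak positions."""
    hist = [0.0] * n_ch
    def add_gauss(center, weight):
        sigma = cv * center
        for x in range(n_ch):
            hist[x] += (weight * math.exp(-0.5 * ((x - center) / sigma) ** 2)
                        / (sigma * math.sqrt(2 * math.pi)))
    add_gauss(g1_ch, f_g1)
    add_gauss(2 * g1_ch, f_g2m)
    s_channels = range(g1_ch + 1, 2 * g1_ch)
    for c in s_channels:
        add_gauss(c, f_s / len(s_channels))
    mid = 3 * g1_ch // 2
    p1 = max(range(mid), key=lambda i: hist[i])
    p2 = max(range(mid, n_ch), key=lambda i: hist[i])
    return p2 / p1

ratio = peak_ratio()          # late-S spillover pulls the ratio below 2
ratio_no_s = peak_ratio(f_s=0.0)  # without S-phase the ratio is exactly 2
```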

9.
J C Wood, P Todd. Cell Biophysics, 1979, 1(3): 211-218

10.
The analysis carried out here generalizes earlier studies of chromosomal aberrations in the populations of Hiroshima and Nagasaki by allowing extrabinomial variation in aberrant cell counts, corresponding to within-subject correlations in cell aberrations. Strong within-subject correlations were detected, with corresponding standard errors for the average number of aberrant cells that were often substantially larger than previously assumed. The extrabinomial variation is accommodated in the analysis, as described in the section on dose-response models, by using a beta-binomial (beta-B) variance structure. Agreement between the observed and the beta-B fitted frequencies by city-dose category is generally satisfactory. The chromosomal aberration data considered here are not extensive enough to allow precise discrimination between competing dose-response models.
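The beta-binomial variance structure inflates the binomial variance by a factor that grows with the within-subject correlation; a minimal sketch of this standard overdispersion formula (the parameter values are invented):

```python
def beta_binomial_variance(n, p, rho):
    """Variance of the number of aberrant cells among n scored cells when a
    within-subject correlation rho inflates the binomial variance:
    Var = n*p*(1-p)*(1 + (n-1)*rho).  rho = 0 recovers the binomial."""
    return n * p * (1 - p) * (1 + (n - 1) * rho)

# 100 cells scored per subject, 10% aberrant on average.
binom = beta_binomial_variance(100, 0.1, 0.0)          # plain binomial: 9.0
overdispersed = beta_binomial_variance(100, 0.1, 0.05)  # correlated cells
```

Even a modest correlation (rho = 0.05) inflates the variance roughly six-fold here, which is why standard errors computed under a pure binomial assumption can be badly optimistic.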

11.
The measurement of single ion channel kinetics is difficult when those channels exhibit subconductance events. When the kinetics are fast, and when the current magnitudes are small, as is the case for Na+, Ca2+, and some K+ channels, these difficulties can lead to serious errors in the estimation of channel kinetics. I present here a method, based on the construction and analysis of mean-variance histograms, that can overcome these problems. A mean-variance histogram is constructed by calculating the mean current and the current variance within a brief "window" (a set of N consecutive data samples) superimposed on the digitized raw channel data. Systematic movement of this window over the data produces large numbers of mean-variance pairs which can be assembled into a two-dimensional histogram. Defined current levels (open, closed, or sublevel) appear in such plots as low variance regions. The total number of events in such low variance regions is estimated by curve fitting and plotted as a function of window width. This function decreases with the same time constants as the original dwell time probability distribution for each of the regions. The method can therefore be used: 1) to present a qualitative summary of the single channel data from which the signal-to-noise ratio, open channel noise, steadiness of the baseline, and number of conductance levels can be quickly determined; 2) to quantify the dwell time distribution in each of the levels exhibited. In this paper I present the analysis of a Na+ channel recording that had a number of complexities. The signal-to-noise ratio was only about 8 for the main open state, open channel noise, and fast flickers to other states were present, as were a substantial number of subconductance states. 
"Standard" half-amplitude threshold analysis of these data produce open and closed time histograms that were well fitted by the sum of two exponentials, but with apparently erroneous time constants, whereas the mean-variance histogram technique provided a more credible analysis of the open, closed, and subconductance times for the patch. I also show that the method produces accurate results on simulated data in a wide variety of conditions, whereas the half-amplitude method, when applied to complex simulated data shows the same errors as were apparent in the real data. The utility and the limitations of this new method are discussed.  相似文献   

12.
A variety of inverse kinematics (IK) algorithms exist for estimating postures and displacements from a set of noisy marker positions, typically aiming to minimize IK errors by distributing errors amongst all markers in a least-squares (LS) sense. This paper describes how Bayesian inference can contrastingly be used to maximize the probability that a given stochastic kinematic model would produce the observed marker positions. We developed Bayesian IK for two planar IK applications: (1) kinematic chain posture estimates using an explicit forward kinematics model, and (2) rigid body rotation estimates using implicit kinematic modeling through marker displacements. We then tested and compared Bayesian IK results to LS results in Monte Carlo simulations in which random marker error was introduced using Gaussian noise amplitudes ranging uniformly between 0.2 mm and 2.0 mm. Results showed that Bayesian IK was more accurate than LS-IK in over 92% of simulations, with the exception of one planar-rotation center-of-rotation coordinate, for which Bayesian IK was more accurate in only 68% of simulations. Moreover, while LS errors increased with marker noise, Bayesian errors were comparatively unaffected by noise amplitude. Nevertheless, whereas the LS solutions required average computational durations of less than 0.5 s, average Bayesian IK durations ranged from 11.6 s for planar rotation to over 2000 s for kinematic chain postures. These results suggest that Bayesian IK can yield order-of-magnitude IK improvements for simple planar IK, but also that its computational demands may make it impractical for some applications.
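For the rigid-body rotation case, the least-squares baseline that such studies compare against has a closed form (planar Procrustes rotation). Here is a sketch with an invented marker set and a noise level inside the range quoted above; the Bayesian estimator itself is not reproduced:

```python
import math
import random

def ls_planar_rotation(ref, obs):
    """Least-squares (Procrustes) estimate of a planar rotation angle from
    noisy marker positions, after removing centroids:
    theta = atan2(sum(x*v - y*u), sum(x*u + y*v))."""
    cx = sum(p[0] for p in ref) / len(ref)
    cy = sum(p[1] for p in ref) / len(ref)
    ox = sum(q[0] for q in obs) / len(obs)
    oy = sum(q[1] for q in obs) / len(obs)
    num = den = 0.0
    for (x, y), (u, v) in zip(ref, obs):
        x, y, u, v = x - cx, y - cy, u - ox, v - oy
        num += x * v - y * u
        den += x * u + y * v
    return math.atan2(num, den)

# Invented 4-marker rigid body (mm), rotated 30 degrees, 1.0 mm noise.
random.seed(3)
ref = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]
true_theta = math.radians(30)
obs = [(x * math.cos(true_theta) - y * math.sin(true_theta)
        + random.gauss(0, 1.0),
        x * math.sin(true_theta) + y * math.cos(true_theta)
        + random.gauss(0, 1.0))
       for x, y in ref]
est = ls_planar_rotation(ref, obs)
```

A Monte Carlo comparison like the one described would repeat this over many noise draws and amplitudes, and score the LS estimate against a Bayesian posterior estimate of the same angle.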

13.
Mechanical modelling of the musculoskeletal system is dependent upon information regarding the bony attachments of the relevant muscles; in order to study the biomechanics of the shoulder girdle the authors have identified the muscle attachments in three embalmed cadavers. A simple biplanar radiographic technique was then used to determine the attachment coordinates using frames of reference defined for each bone. This technique, using hand positioning without special fixtures, was believed to be sufficiently accurate, bearing in mind the likely degree of biological variation. In order to test this assumption, the accuracy of the technique has been studied by measuring the agreement between the two measurements of the common coordinate in the pairs of radiographs. It was found that for the trunk, the errors in the common coordinate were always less than the natural variation; for the scapula they were of a similar magnitude but, for the humerus, the measurement errors frequently exceeded the variation in the coordinates of muscle attachments. It was concluded that, in general, uncalibrated biplanar radiography was sufficiently accurate for the determination of the spatial coordinates of muscle attachments.

14.
We present a single virion method to determine absolute distributions of copy number in the protein composition of viruses and apply it to herpes simplex virus type 1. Using two-color coincidence fluorescence spectroscopy, we determine the virion-to-virion variability in copy numbers of fluorescently labeled tegument and envelope proteins relative to a capsid protein by analyzing fluorescence intensity ratios for ensembles of individual dual-labeled virions and fitting the resulting histogram of ratios. Using EYFP-tagged capsid protein VP26 as a reference for fluorescence intensity, we are able to calculate the mean and also, for the first time to our knowledge, the variation in copy numbers of the tegument proteins VP16 and VP22 and the envelope glycoprotein gD. The measurement of the number of glycoprotein D molecules was in good agreement with independent measurements of average numbers of these glycoproteins in bulk virus preparations, validating the method. The accuracy, straightforward data processing, and high throughput of this technique make it widely applicable to the analysis of the molecular composition of large complexes in general, and it is particularly suited to providing insights into virus structure, assembly, and infectivity.

15.
Statistical models of species' distributions rely on data on species' occupancy, or use, of sites across space and/or time. For rare or cryptic species, indirect signs, such as dung, may be the only realistic means of determining their occupancy status across broad spatial extents. However, the consequences of sign decay for errors in estimates of occupancy have not previously been considered. If signs decay very rapidly, then false-negative errors may occur because signs at an occupied site have decayed by the time it is surveyed. On the other hand, if signs decay very slowly, false-positive errors may occur because signs remain present at sites that are no longer occupied. We addressed this issue by quantifying, as functions of sign decay and accumulation rates: 1) the false-negative error rate due to sign decay and 2) the expected time interval prior to a survey within which signs indicate the species was present; as this time interval increases, false positives become more likely. We then applied this to the specific example of koala Phascolarctos cinereus occupancy derived from faecal pellet surveys, using data on faecal pellet decay rates. We show that there is a clear trade-off between false-negative error rates and the potential for false-positive errors. For the koala case study, false-negative errors were low on average, and the expected time interval within which detected pellets indicated the species was present was less than 2-3 yr. However, these quantities showed quite substantial spatial variation that could lead to biased parameter estimates for distribution models based on faecal pellet surveys. This highlights the importance of observation errors arising from sign decay, and we suggest some modifications to existing methods to deal with this issue.
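A toy steady-state version of this trade-off (not the authors' exact formulation): if signs are deposited at a constant rate at an occupied site and each sign persists for an exponentially distributed time, the standing number of signs is Poisson-distributed, and a false negative occurs when zero signs remain at survey time.

```python
import math

def false_negative_rate(accum_per_day, mean_persist_days):
    """Probability that a survey of a continuously occupied site finds no
    sign, under a steady-state model where the standing crop of signs is
    Poisson with mean (accumulation rate x mean persistence time)."""
    expected_signs = accum_per_day * mean_persist_days
    return math.exp(-expected_signs)

# Hypothetical rates. Fast decay: few standing signs, so false negatives
# are common.
fast = false_negative_rate(0.05, 20)    # mean 1 sign -> miss rate exp(-1)
# Slow decay: many standing signs, so false negatives are rare, but old
# signs lengthen the window in which false positives become possible.
slow = false_negative_rate(0.05, 400)   # mean 20 signs -> miss rate exp(-20)
```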

16.
17.
Recent controversies surrounding models of modern human origins have focused on among-group variation, particularly the reconstruction of phylogenetic trees from mitochondrial DNA (mtDNA) and the dating of population divergence. Problems in tree estimation have been seen as weakening the case for a replacement model and favoring a multiregional evolution model. There has been less discussion of patterns of within-group variation, although the mtDNA evidence has consistently shown the greatest diversity within African populations. Problems of interpretation abound given the numerous factors that can influence within-group variation, including the possibility of earlier divergence, differences in population size, patterns of population expansion, and variation in migration rates. We present a model of within-group phenotypic variation and apply it to a large set of craniometric data representing major Old World geographic regions (57 measurements for 1,159 cases in four regions: Europe, Sub-Saharan Africa, Australasia, and the Far East). The model predicts a linear relationship between variation within populations (the average within-group variance) and variation between populations (the genetic distance of populations to pooled phenotypic means). On a global level this relationship should hold if the long-term effective population sizes of each region are correctly specified. Other potential effects on within-group variation are accounted for by the model. Comparison of observed and expected variances under the assumption of equal effective sizes for four regions indicates significantly greater within-group variation in Africa and significantly less within-group variation in Europe. These results suggest that the long-term effective population size was greatest in Africa. Closer examination of the model suggests that the long-term African effective size was roughly three times that of any other geographic region. Using these estimates of relative population size, we present a method for analyzing ancient population structure, which provides estimates of ancient migration. This method allows us to reconstruct migration history between geographic regions after adjustment for the effect of genetic drift on interpopulational distances. Our results show a clear isolation of Africa from other regions. We then present a method that allows direct estimation of the ancient migration matrix, thus providing us with information on the actual extent of interregional migration. These methods also provide estimates of time frames necessary to reach genetic equilibrium. The ultimate goal is extracting as much information from present-day patterns of human variation relevant to issues of human origins. Our results are in agreement with mismatch distribution analysis of mtDNA, and they support a “weak Garden of Eden” model. In this model, modern-day variation can be explained by divergence from an initial source (perhaps Africa) into a number of small isolated populations, followed by later population expansion throughout our species. The major population expansions of Homo sapiens during and after the late Pleistocene have had the effect of “freezing” ancient patterns of population structure. While this is not the only possible scenario, we do note the close agreement with recent analyses of mtDNA mismatch distributions. © 1994 Wiley-Liss, Inc.

18.
How children acquire knowledge of verb inflection is a long-standing question in language acquisition research. In the present study, we test the predictions of some current constructivist and generativist accounts of the development of verb inflection by focusing on data from two Spanish-speaking children between the ages of 2;0 and 2;6. The constructivist claim that children’s early knowledge of verb inflection is only partially productive is tested by comparing the average number of different inflections per verb in matched samples of child and adult speech. The generativist claim that children’s early use of verb inflection is essentially error-free is tested by investigating the rate at which the children made subject-verb agreement errors in different parts of the present tense paradigm. Our results show: 1) that, although even adults’ use of verb inflection in Spanish tends to look somewhat lexically restricted, both children’s use of verb inflection was significantly less flexible than that of their caregivers, and 2) that, although the rate at which the two children produced subject-verb agreement errors in their speech was very low, this overall error rate hid a consistent pattern of error in which error rates were substantially higher in low frequency than in high frequency contexts, and substantially higher for low frequency than for high frequency verbs. These results undermine the claim that children’s use of verb inflection is fully productive from the earliest observable stages, and are consistent with the constructivist claim that knowledge of verb inflection develops only gradually.

19.
Algorithms to predict heelstrike and toeoff times during normal walking using only kinematic data are presented. The accuracy of these methods was compared with the results obtained using synchronized force platform recordings of two subjects walking at a variety of speeds for a total of 12 trials. Using a 60 Hz data collection system, the absolute value errors (AVE) in predicting heelstrike averaged 4.7 ms, while the AVE in predicting toeoff times averaged 5.6 ms. True average errors (negative for an early prediction) were +1.2 ms for both heelstrike and toeoff, indicating that no systematic errors occurred. It was concluded that the proposed algorithms provide an easy and reliable method of determining event times during walking when kinematic data are collected, with a considerable improvement in resolution over visual inspection of video records, and could be utilized in conjunction with any 2-D or 3-D kinematic data collection system.
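A minimal kinematic event detector in this spirit, finding local minima of a 60 Hz marker-height trace (the synthetic trace and the simple minimum rule are illustrative simplifications, not the published algorithms):

```python
import math

def detect_minima_times(signal, dt):
    """Return times of local minima of a marker trajectory. A simple
    kinematic event detector: heelstrike ~ minimum of heel-marker
    height, toeoff ~ minimum of toe-marker height before swing."""
    times = []
    for i in range(1, len(signal) - 1):
        if signal[i] < signal[i - 1] and signal[i] <= signal[i + 1]:
            times.append(i * dt)
    return times

# Synthetic 60 Hz heel-height trace (m) over 3 s: one minimum per
# 1.0 s stride, at t = 0.5, 1.5, 2.5 s.
dt = 1.0 / 60.0
trace = [0.05 + 0.04 * math.cos(2 * math.pi * (i * dt))
         for i in range(180)]
events = detect_minima_times(trace, dt)
```

At 60 Hz, the sample spacing alone is 16.7 ms, so errors of ~5 ms as reported imply sub-frame accuracy; a practical implementation would interpolate around the detected minimum rather than report the raw frame time.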

20.
While extracting dynamics parameters from backbone (15)N relaxation measurements in proteins has become routine over the past two decades, it is increasingly recognized that accurate quantitative analysis can remain limited by the potential presence of systematic errors associated with the measurement of (15)N R(1) and R(2) or R(1ρ) relaxation rates as well as heteronuclear (15)N-{(1)H} NOE values. We show that systematic errors in such measurements can be far larger than the statistical error derived from either the observed signal-to-noise ratio, or from the reproducibility of the measurement. Unless special precautions are taken, the problem of systematic errors is shown to be particularly acute in perdeuterated systems, and even more so when TROSY instead of HSQC elements are used to read out the (15)N magnetization through the NMR-sensitive (1)H nucleus. A discussion of the most common sources of systematic errors is presented, as well as TROSY-based pulse schemes that appear free of systematic errors to the level of <1 %. Application to the small perdeuterated protein GB3, which yields exceptionally high S/N and therefore is an ideal test molecule for detection of systematic errors, yields relaxation rates that show considerably less residue by residue variation than previous measurements. Measured R(2)'/R(1)' ratios fit an axially symmetric diffusion tensor with a Pearson's correlation coefficient of 0.97, comparable to fits obtained for backbone amide RDCs to the Saupe matrix.
