Similar literature
20 similar articles were retrieved (search time: 31 ms).
1.
Real, simulated, and theoretical data from quantal response experiments were used to compare the analysis of data from a quantal response experiment with that of data from a direct enumeration experiment. The method of analysis for each is differentiated, thereby enhancing the utility of the quantal response experiment in sterilization studies. From this comparison it appears that the Stumbo estimate of the D value is biased. Furthermore, the Stumbo estimate depends on the spore load per replicate in quantal response experiments, which makes experimental comparisons difficult. Another estimate of D that overcomes some of these shortcomings is demonstrated.
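For orientation, here is a minimal sketch of a Stumbo-style D-value calculation from quantal (growth/no-growth) data, assuming n replicate units carrying a known spore load are heated for a common time U and the survivors per unit are estimated by the most-probable-number relation ln(n/q) from the q sterile units; the function name and example numbers are illustrative, not taken from the study.

```python
import math

def stumbo_d_value(heating_time, n_units, n_sterile, spores_per_unit):
    """Stumbo-type D-value estimate from quantal (grow/no-grow) data.

    Uses the most-probable-number idea: if q of n replicates are sterile,
    the estimated survivors per unit after heating are ln(n/q).
    """
    if not 0 < n_sterile < n_units:
        raise ValueError("need a partial response (some, but not all, units sterile)")
    survivors = math.log(n_units / n_sterile)  # MPN estimate of survivors per unit
    return heating_time / (math.log10(spores_per_unit) - math.log10(survivors))

# Example: 100 units with 10^6 spores each, heated 10 min, 20 units sterile.
print(round(stumbo_d_value(10.0, 100, 20, 1e6), 3))
```

Note that the spore load enters the formula directly, which is exactly the dependence on spores per replicate criticized in the abstract.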

2.
Using the logit quantal response form as the response function in each step, the original definition of static quantal response equilibrium (QRE) is extended into an iterative evolution process. QREs remain the fixed points of the dynamic process. However, depending on whether such fixed points are the long-term solutions of the dynamic process, they can be classified into stable (SQREs) and unstable (USQREs) equilibria. This extension resembles the extension from static Nash equilibria (NEs) to evolutionarily stable solutions in the framework of evolutionary game theory. The relation between SQREs and other solution concepts of games, including NEs and QREs, is discussed. Using experimental data from other published papers, we perform a preliminary comparison between SQREs, NEs, QREs and the observed behavioral outcomes of those experiments. For certain games, we find that SQREs have better predictive power than QREs and NEs.
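The iterative logit process can be sketched as follows. This is an illustrative toy for a symmetric 2x2 game, not the authors' implementation; the payoff matrix and the precision parameter lam are invented for the example. A fixed point of the map is a logit QRE, and whether repeated iteration settles on it is the stability question discussed above.

```python
import numpy as np

def logit_response(payoff, p_opponent, lam):
    """Logit quantal response to an opponent mixed strategy."""
    expected = payoff @ p_opponent                     # expected payoff of each pure strategy
    weights = np.exp(lam * (expected - expected.max()))  # subtract max for numerical stability
    return weights / weights.sum()

def iterate_qre(payoff, p0, lam, steps=500, tol=1e-10):
    """Iterate the logit map from p0; return the point it settles on."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        p_next = logit_response(payoff, p, lam)
        if np.max(np.abs(p_next - p)) < tol:
            break
        p = p_next
    return p

# Symmetric 2x2 coordination game (row player's payoff matrix).
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
print(iterate_qre(A, [0.5, 0.5], lam=3.0))
```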

3.
A statistical technique is given for fitting the linear-quadratic model to experimental quantal response multifraction data using the time of the response as the end-point. The analysis used is based on the Cox Proportional Hazards model. The technique is useful for late effects where the time of occurrence of the response is dose dependent. The technique is compared to logistic regression analysis and the advantages and disadvantages are discussed. Both methods are applied to a lung pneumonitis experiment and a kidney experiment.
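A hedged sketch of how such a fit might be set up with the lifelines package, assuming the linear-quadratic dose dependence enters the proportional-hazards model through the covariates n·d and n·d² (so the ratio of their coefficients estimates alpha/beta); the toy data and this particular parameterization are illustrative, not the paper's.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy multifraction data: n fractions of size d per subject, time to response,
# and an event flag (1 = response observed, 0 = censored).
df = pd.DataFrame({
    "n":     [5, 5, 10, 10, 20, 20, 5, 10],
    "d":     [4.0, 6.0, 2.0, 3.0, 1.5, 2.0, 8.0, 4.0],
    "time":  [180, 120, 200, 150, 220, 160, 90, 110],
    "event": [1, 1, 0, 1, 0, 1, 1, 1],
})
df["nd"]  = df["n"] * df["d"]        # linear term of the LQ exponent
df["nd2"] = df["n"] * df["d"] ** 2   # quadratic term

cph = CoxPHFitter(penalizer=0.1)     # small penalty to stabilize the tiny toy fit
cph.fit(df[["time", "event", "nd", "nd2"]], duration_col="time", event_col="event")
cph.print_summary()
# Under this parameterization the coefficient ratio estimates alpha/beta.
print(cph.params_["nd"] / cph.params_["nd2"])
```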

4.
Rosenwald, Albert J. (Fort Detrick, Frederick, Md.), and Ralph E. Lincoln. Pasteurella pestis: growth temperature, virulence, and the graded response. J. Bacteriol. 91:1693-1695. 1966. A comparison of the virulence of Pasteurella pestis was made by the graded and quantal response methods. Both tests reflected the difference in virulence of cultures grown at three temperatures. Cultures grown at lower temperatures gave the most variable response in virulence tests, and cultures grown at higher temperatures were more virulent. Results from the graded response test were obtained more quickly and more economically than those from the quantal response test.

5.
Several statistical methods, including the conventional technique of Schmidt and Nank, were evaluated for estimating radiation resistance values of various strains of Clostridium botulinum by the use of partial spoilage data from an inoculated ham pack study. Procedures based on quantal response were preferred. The tedious but rigorous probit maximum likelihood determination was used as a standard of comparison. Weibull's graphical treatment was the method of choice because it is simple to use, it is mathematically sound, and its LD50 values agreed closely with the reference standard. In addition, it offers a means for analyzing the type of microbial death kinetics that occur in the pack (exponential, normal, log normal, or mixed distributions), and it predicts the probability of microbial death with any radiation dose used, as well as the dose needed to destroy any given number of organisms, without the need to assume the death pattern of the partial spoilage data. The Weibull analysis indicated normal-type death kinetics for C. botulinum spores in irradiated cured ham rather than an exponential order of death, as assumed by the Schmidt-Nank formula. The Weibull 12D equivalent of a radiation process, or the minimal radiation dose (MRD), for cured ham was consistently higher than both the experimental sterilizing dose (ESD) and the Schmidt-Nank average MRD. The latter calculation was lower than the ESD in three of the five instances examined, which seems unrealistic. The Spearman-Kärber estimate was favored as the arithmetic technique on the basis of ease of computation, close agreement with the reference method, and provision of confidence limits for the LD50 values.
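For reference, a minimal sketch of the Spearman-Kärber arithmetic, assuming the log doses are in ascending order and the observed response proportions run from 0 at the lowest dose to 1 at the highest; the doses and proportions below are invented, not the ham-pack data.

```python
import math

def spearman_karber_log_ld50(log_doses, proportions):
    """Spearman-Karber estimate of log LD50.

    Assumes log_doses are sorted ascending and the observed response
    proportions run from 0 at the lowest dose to 1 at the highest.
    """
    m = 0.0
    for i in range(len(log_doses) - 1):
        m += (proportions[i + 1] - proportions[i]) * (log_doses[i] + log_doses[i + 1]) / 2.0
    return m

# Illustrative quantal data: dose (kGy) and fraction of packs spoiled.
doses = [1.0, 1.5, 2.0, 2.5, 3.0]
props = [0.0, 0.2, 0.5, 0.9, 1.0]
log_ld50 = spearman_karber_log_ld50([math.log10(d) for d in doses], props)
print(10 ** log_ld50)   # back-transform to the dose scale
```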

6.
Biospecific cell adhesion is mediated by receptor-ligand bonds. Early theoretical work presented a deterministic analysis of receptor-mediated cell attachment and detachment for a homogeneous cell population. However, initial comparison of a deterministic framework to experimental detachment profiles of model "cells" (antibody-coated latex beads) did not show qualitative or quantitative agreement (Cozens-Roberts, C., D.A. Lauffenburger, and J.A. Quinn. 1990. Biophys. J. 58:857-872). Hence, we determine the contributions of population heterogeneity and probabilistic binding to the detachment behavior of this experimental system, which was designed to minimize experimental and theoretical complications. This work also corrects an error in the numerical solution of the probabilistic model of receptor-mediated cell attachment and detachment developed previously (Cozens-Roberts, C., D.A. Lauffenburger, and J.A. Quinn. 1990. Biophys. J. 58:841-856). Measurement of the population distribution of the number of receptors per bead has enabled us to explicitly consider the effect of receptor number heterogeneity within the cell-surface contact area. A deterministic framework that incorporates receptor number heterogeneity qualitatively and quantitatively accounts in large part for the model cell detachment data. Using measured and estimated parameter values for the model cell system, we estimate that about 90% of the observed kinetic detachment behavior originates from heterogeneity effects, while about 10% is due to probabilistic binding effects. In general, these relative contributions may differ for other systems.

7.
Fluctuation analysis of synaptic transmission using the variance-mean approach has been restricted in the past to steady-state responses. Here we extend this method to short repetitive trains of synaptic responses, during which the response amplitudes are not stationary. We consider intervals between trains, long enough so that the system is in the same average state at the beginning of each train. This allows analysis of ensemble means and variances for each response in a train separately. Thus, modifications in synaptic efficacy during short-term plasticity can be attributed to changes in synaptic parameters. In addition, we provide practical guidelines for the analysis of the covariance between successive responses in trains. Explicit algorithms to estimate synaptic parameters are derived and tested by Monte Carlo simulations on the basis of a binomial model of synaptic transmission, allowing for quantal variability, heterogeneity in the release probability, and postsynaptic receptor saturation and desensitization. We find that the combined analysis of variance and covariance is advantageous in yielding an estimate for the number of release sites, which is independent of heterogeneity in the release probability under certain conditions. Furthermore, it allows one to calculate the apparent quantal size for each response in a sequence of stimuli.
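The variance-mean step can be illustrated with a stripped-down sketch that fits the simple binomial parabola Var = q*I - I^2/N to per-stimulus ensemble statistics; the corrections for quantal variability, release-probability heterogeneity, and receptor saturation/desensitization developed in the paper are deliberately omitted, and the simulation parameters are invented.

```python
import numpy as np

def variance_mean_fit(responses):
    """responses: array of shape (n_trials, n_stimuli) of peak amplitudes.

    Returns (q, N) from fitting Var = q*I - I**2/N through the origin,
    where I is the ensemble mean at each position in the train.
    """
    means = responses.mean(axis=0)
    variances = responses.var(axis=0, ddof=1)
    # Least-squares fit of the variances on [means, means**2] with no intercept.
    X = np.column_stack([means, means ** 2])
    coef, *_ = np.linalg.lstsq(X, variances, rcond=None)
    q = coef[0]
    N = -1.0 / coef[1]
    return q, N

# Simulated binomial release: N = 10 sites, q = 20 pA, release probability
# dropping along a 5-stimulus train (short-term depression).
rng = np.random.default_rng(0)
p = np.array([0.6, 0.45, 0.35, 0.28, 0.22])
sim = 20.0 * rng.binomial(10, p, size=(2000, p.size))
print(variance_mean_fit(sim))   # should recover roughly (20, 10)
```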

8.
Many studies of synaptic transmission have assumed a parametric model to estimate the mean quantal content and size or the effect upon them of manipulations such as the induction of long-term potentiation. Classical tests of fit usually assume that model parameters have been selected independently of the data. Therefore, their use is problematic after parameters have been estimated. We hypothesized that Monte Carlo (MC) simulations of a quantal model could provide a table of parameter-independent critical values with which to test the fit after parameter estimation, emulating Lilliefors's tests. However, when we tested this hypothesis within a conventional quantal model, the empirical distributions of two conventional goodness-of-fit statistics were affected by the values of the quantal parameters, falsifying the hypothesis. Notably, the tests' critical values increased when the combined variances of the noise and quantal-size distributions were reduced, increasing the distinctness of quantal peaks. Our results support two conclusions. First, tests that use a predetermined critical value to assess the fit of a quantal model after parameter estimation may operate at a differing unknown level of significance for each experiment. Second, a MC test enables a valid assessment of the fit of a quantal model after parameter estimation.
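The general recipe behind such a Monte Carlo test is a parametric bootstrap; the generic outline below uses placeholder fit/simulate/statistic callables standing in for whatever quantal model and goodness-of-fit statistic are chosen.

```python
import numpy as np

def monte_carlo_gof_pvalue(data, fit, simulate, statistic, n_sim=1000, seed=0):
    """Generic Monte Carlo goodness-of-fit test after parameter estimation.

    fit(data)                -> estimated parameters
    simulate(params, n, rng) -> synthetic data set of size n from the fitted model
    statistic(data, params)  -> goodness-of-fit statistic (larger = worse fit)
    """
    rng = np.random.default_rng(seed)
    params = fit(data)
    observed = statistic(data, params)
    null_stats = []
    for _ in range(n_sim):
        fake = simulate(params, len(data), rng)
        fake_params = fit(fake)                 # re-estimate on each synthetic set
        null_stats.append(statistic(fake, fake_params))
    # p-value: how often a simulated fit looks at least as bad as the observed fit.
    return (1 + sum(s >= observed for s in null_stats)) / (n_sim + 1)
```

In practice, fit would be the quantal-model parameter estimator and statistic a chi-square or Kolmogorov-Smirnov type measure; the critical value is then read from the simulated null distribution rather than from a standard table.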

9.
A statistical technique is given which can be used to estimate the parameters of the two-component model for cell survival from quantal response multifraction data. The method is a nonlinear logistic regression and relies on a mild assumption relating the probability of death to the cell survival level. The method is demonstrated on mouse colon data, where more efficient estimates of the parameters are known, and the agreement is good. Also, for some mouse lung LD50 data we obtain estimates of the parameters, and the fit to the data is shown to be better than that of the linear-quadratic model.
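As a sketch of the fitting machinery (not the paper's two-component parameterization), the example below maximizes a binomial likelihood in which the logit of the response probability is linear in the linear-quadratic exponent n(a*d + b*d^2); the data are invented, and the two-component survival expression could be substituted for that exponent.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(theta, n_frac, dose, n_animals, n_responders):
    """Binomial negative log-likelihood for quantal multifraction data,
    assuming logit P(response) is linear in the LQ exponent n*(a*d + b*d**2)."""
    c0, a, b = theta
    eta = c0 + n_frac * (a * dose + b * dose ** 2)
    p = np.clip(expit(eta), 1e-12, 1 - 1e-12)
    return -np.sum(n_responders * np.log(p) + (n_animals - n_responders) * np.log(1 - p))

# Illustrative data: fractions, dose per fraction (Gy), animals at risk, responders.
n_frac = np.array([1, 1, 5, 5, 10, 10])
dose   = np.array([10.0, 14.0, 3.0, 4.0, 1.8, 2.4])
n_anim = np.array([20, 20, 20, 20, 20, 20])
n_resp = np.array([4, 15, 3, 12, 2, 10])

fit = minimize(neg_log_lik, x0=[-5.0, 0.3, 0.02],
               args=(n_frac, dose, n_anim, n_resp), method="Nelder-Mead")
print(fit.x)   # intercept and the two dose-response coefficients
```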

10.
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
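A greatly simplified illustration of the underlying idea, not the sparse-grid/scenario-tree algorithm itself: with only bounds on the parameters, sample the parameter box, simulate the measurable response, and pick the design point where the predictions disagree most relative to the stated experimental noise. The toy model, bounds, and noise level below are invented.

```python
import numpy as np

def toy_model(k, ymax, t):
    """Stand-in for the biological model: exponential approach to a plateau."""
    return ymax * (1.0 - np.exp(-k * t))

def pick_design_point(candidate_times, param_samples, noise_sd):
    """Choose the measurement time where predictions across the bounded
    parameter set disagree the most, in units of the measurement noise."""
    spreads = []
    for t in candidate_times:
        preds = np.array([toy_model(k, ymax, t) for k, ymax in param_samples])
        spreads.append(np.ptp(preds) / noise_sd)    # prediction range in noise units
    best = int(np.argmax(spreads))
    return candidate_times[best], spreads[best]

# Bounded (not point-estimated) parameter space: k in [0.05, 1.0], ymax in [5, 15].
rng = np.random.default_rng(1)
samples = np.column_stack([rng.uniform(0.05, 1.0, 200), rng.uniform(5.0, 15.0, 200)])
times = np.linspace(0.5, 24.0, 48)
print(pick_design_point(times, samples, noise_sd=0.5))
```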

11.
12.
The differential analysis of genes between microarrays from several experimental conditions or treatments routinely estimates which genes change significantly between groups. As genes are never regulated individually, observed behavior may be a consequence of changes in other genes. Existing approaches like co-expression analysis aim to resolve such patterns from a wide range of experiments. The knowledge of such a background set of experiments can be used to compute expected gene behavior based on known links. It is particularly interesting to detect previously unseen specific effects in other experiments. Here, a new method to spot genes deviating from expected behavior (PAttern DEviation SCOring--Padesco) is devised. It uses linear regression models learned from a background set to arrive at gene specific prediction accuracy distributions. For a given experiment, it is then decided whether each gene is predicted better or worse than expected. This provides a novel way to estimate the experiment specificity of each gene. We propose a validation procedure to estimate the detection of such specific candidates and show that these can be identified with an average accuracy of about 85%.
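The flavor of this deviation scoring can be sketched as follows; Padesco's actual regression setup, background set, and validation procedure differ, and the gene and experiment counts here are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def deviation_scores(background, new_expt, percentile=95.0):
    """Flag genes whose expression in new_expt is predicted worse than expected.

    background: array (n_experiments, n_genes); new_expt: array (n_genes,).
    For each gene, a linear model from the remaining genes is learned on the
    background set; the gene is flagged if its prediction error in the new
    experiment exceeds the chosen percentile of its leave-one-out errors.
    """
    n_expts, n_genes = background.shape
    flags = np.zeros(n_genes, dtype=bool)
    for g in range(n_genes):
        others = np.delete(np.arange(n_genes), g)
        errors = []
        for e in range(n_expts):                       # leave-one-experiment-out errors
            train = np.delete(np.arange(n_expts), e)
            model = LinearRegression().fit(background[train][:, others], background[train, g])
            errors.append(abs(model.predict(background[[e]][:, others])[0] - background[e, g]))
        threshold = np.percentile(errors, percentile)
        full_model = LinearRegression().fit(background[:, others], background[:, g])
        new_error = abs(full_model.predict(new_expt[others][None, :])[0] - new_expt[g])
        flags[g] = new_error > threshold
    return flags

# Toy demo: gene 0 normally tracks gene 1, but breaks that link in the new experiment.
rng = np.random.default_rng(3)
bg = rng.normal(size=(12, 6))
bg[:, 0] = 2.0 * bg[:, 1] + 0.1 * rng.normal(size=12)
new = rng.normal(size=6)
new[1], new[0] = 1.5, -3.0
print(deviation_scores(bg, new))
```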

13.
Data obtained from the early portion of sedimentation velocity experiments may be analyzed to simultaneously estimate both s and s/D. The C versus r data obtained are analyzed using a nonlinear least squares algorithm and an approximate solution to the Lamm equation. This procedure was tested both with simulated noisy data and with experimental data obtained using ribonuclease, ovalbumin, and somatostatin·dodecyl sulfate. The procedure assumes that both s and D are independent of concentration. The results suggest that optimal estimation of both s and s/D is obtained at values of the parameter 2D/(sω²r_a²) in the range of 0.002 to 0.01 and values of τ = 2sω²t less than 0.04. Appropriate selection of rotor speed allows the estimation of both s and s/D for nearly globular macromolecules in the range of 10^4 to 10^6 daltons with data obtained during the first 3000-5000 seconds of a sedimentation velocity experiment.
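A hedged sketch of the fitting idea: simulate early-time scans and recover s and D by nonlinear least squares. The boundary shape used here (an error-function step whose midpoint moves as r_a·exp(sω²t)) is only a schematic stand-in for the approximate Lamm-equation solution of the paper, and the rotor speed, cell geometry, and parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

OMEGA = 2 * np.pi * 40000 / 60.0      # rotor speed in rad/s (40,000 rpm, illustrative)
R_MENISCUS = 6.0                      # meniscus position in cm, illustrative

def boundary_model(rt, s_svedberg, d_fick, c0):
    """Schematic moving-boundary shape: a step at r_m(t) = r_a*exp(s*w^2*t),
    broadened by diffusion; s in Svedbergs, D in units of 1e-7 cm^2/s."""
    r, t = rt
    s = s_svedberg * 1e-13
    D = d_fick * 1e-7
    r_mid = R_MENISCUS * np.exp(s * OMEGA ** 2 * t)
    return 0.5 * c0 * (1.0 + erf((r - r_mid) / (2.0 * np.sqrt(D * t))))

# Simulated noisy scans from the first ~3000 s.
rng = np.random.default_rng(2)
r = np.tile(np.linspace(6.0, 7.2, 120), 3)
t = np.repeat([1000.0, 2000.0, 3000.0], 120)
c = boundary_model((r, t), 4.5, 6.0, 1.0) + rng.normal(0.0, 0.01, r.size)

popt, _ = curve_fit(boundary_model, (r, t), c, p0=[3.0, 10.0, 0.9])
print(popt)   # recovered s (Svedbergs), D (1e-7 cm^2/s), plateau concentration
```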

14.
We describe statistical methods based on the t test that can be conveniently used on high density array data to test for statistically significant differences between treatments. These t tests employ either the observed variance among replicates within treatments or a Bayesian estimate of the variance among replicates within treatments based on a prior estimate obtained from a local estimate of the standard deviation. The Bayesian prior allows statistical inference to be made from microarray data even when experiments are only replicated at nominal levels. We apply these new statistical tests to a data set that examined differential gene expression patterns in IHF(+) and IHF(-) Escherichia coli cells (Arfin, S. M., Long, A. D., Ito, E. T., Tolleri, L., Riehle, M. M., Paegle, E. S., and Hatfield, G. W. (2000) J. Biol. Chem. 275, 29672-29684). These analyses identify a more biologically reasonable set of candidate genes than those identified using statistical tests not incorporating a Bayesian prior. We also show that statistical tests based on analysis of variance and a Bayesian prior identify genes that are up- or down-regulated following an experimental manipulation more reliably than approaches based only on a t test or fold change. All the described tests are implemented in a simple-to-use web interface called Cyber-T that is located on the University of California at Irvine genomics web site.
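A minimal sketch of a regularized t statistic in the spirit of Cyber-T, blending a locally estimated prior standard deviation with the observed replicate variance; the exact weighting, prior window, and degrees of freedom used by Cyber-T may differ from this illustration, and the example numbers are invented.

```python
import numpy as np
from scipy import stats

def regularized_t(group1, group2, prior_sd1, prior_sd2, nu0=10):
    """Two-sample t-test with Bayesian-regularized variances.

    prior_sd1/2: local background estimates of the standard deviation,
    e.g. from genes of similar average intensity. nu0 sets how strongly
    the prior is weighted relative to the observed replicate variance.
    """
    def reg_var(x, prior_sd):
        n = len(x)
        s2 = np.var(x, ddof=1)
        return (nu0 * prior_sd ** 2 + (n - 1) * s2) / (nu0 + n - 2)

    n1, n2 = len(group1), len(group2)
    v1, v2 = reg_var(group1, prior_sd1), reg_var(group2, prior_sd2)
    t = (np.mean(group1) - np.mean(group2)) / np.sqrt(v1 / n1 + v2 / n2)
    df = (nu0 + n1 - 2) + (nu0 + n2 - 2)   # approximate effective degrees of freedom
    p = 2 * stats.t.sf(abs(t), df)
    return t, p

# Two replicates per condition (the "nominal replication" case) plus a local prior SD.
print(regularized_t([8.1, 8.3], [9.0, 9.4], prior_sd1=0.25, prior_sd2=0.25))
```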

15.
16.
An overview of published approaches for the metabolic flux control analysis of branch points revealed that often not all fundamental constraints on the flux control coefficients have been taken into account. This has led to contradictory statements in the literature on the minimum number of large perturbation experiments required to estimate the complete set of flux control coefficients C(J) for a metabolic branch point. An improved calculation procedure, based on approximate lin-log reaction kinetics, is proposed, providing explicit analytical solutions for steady state fluxes and metabolite concentrations as a function of large changes in enzyme levels. The obtained solutions allow direct calculation of elasticity ratios from experimental data and subsequently of all C(J)-values from the unique relation between elasticity ratios and flux control coefficients. This procedure ensures that the obtained C(J)-values satisfy all fundamental constraints. It follows that for a three-enzyme branch point only one characterised or two uncharacterised large flux perturbations are sufficient to obtain all C(J)-values. The improved calculation procedure is illustrated with four experimental cases.
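As a reminder of how the fundamental constraints pin down the coefficients, the sketch below computes all flux control coefficients for a three-enzyme branch point (v1 -> X -> v2, v3) directly from the summation and connectivity theorems, under the usual assumption that each rate is proportional to its enzyme level; it is not the lin-log procedure of the paper, and the elasticities and fluxes are illustrative.

```python
import numpy as np

def branch_point_flux_control(J2, J3, eps1, eps2, eps3):
    """Flux control coefficients for a 3-enzyme branch point (v1 -> X -> v2, v3).

    eps_i are elasticities of v_i with respect to the branch metabolite X
    (typically eps1 < 0 for the supply step, eps2 and eps3 > 0 for the
    consuming branches). Returns the control coefficients of enzymes 1-3
    over flux J2 and over flux J3; each set obeys the summation and
    connectivity theorems by construction.
    """
    J1 = J2 + J3
    denom = J1 * eps1 - J2 * eps2 - J3 * eps3
    C_J2 = np.array([-J1 * eps2, denom + J2 * eps2, J3 * eps2]) / denom
    C_J3 = np.array([-J1 * eps3, J2 * eps3, denom + J3 * eps3]) / denom
    return C_J2, C_J3

C_J2, C_J3 = branch_point_flux_control(J2=7.0, J3=3.0, eps1=-1.0, eps2=0.5, eps3=2.0)
print(C_J2, C_J2.sum())   # coefficients over J2 sum to 1 (summation theorem)
print(C_J3, C_J3.sum())
```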

17.
Mouse antibody response to group A streptococcal carbohydrate
In an attempt to more fully understand the generation of antibody diversity to carbohydrate (CHO) Ag, we produced and characterized a panel of hybridoma cell lines specific for group A streptococcal CHO from mice injected with the intact bacteria (minus the hyaluronic acid capsule and cell wall protein Ag). We have analyzed the use of H and L chain V region genes in the early (day 7) and late response (hyperimmune) and have sequenced the dominant VH gene used in several of our hybridomas. Our data allowed us to assess the extent to which the recombination of various V, D, and J gene segments and somatic mutation contribute to antibody diversification in this system. In this report we confirm that a minimum of two VH and four VK gene segments are used to encode this response. We extend this analysis to show that multiple D and J gene segments are used and that a significant amount of junctional variability is tolerated in CDR 3. Our results indicate that the level of somatic mutation in the hyperimmune response is generally low in comparison with the response to haptens and protein Ag. These data also suggest that there is a positive selection for mutation in CDR 1 during the hyperimmune response to group A streptococcal CHO.

18.
Large-scale two-dimensional gel experiments have the potential to identify proteins that play an important role in elucidating cell mechanisms and in various stages of drug discovery. Such experiments, typically including hundreds or even thousands of related gels, are notoriously difficult to perform, and analysis of the gel images has until recently been virtually impossible. In this paper we describe a scalable computational model that permits the organization and analysis of a large gel collection. The model is implemented in Compugen's Z4000 system. Gels are organized in a hierarchical, multidimensional data structure that allows the user to view a large-scale experiment as a tree of numerous simpler experiments and to carry out the analysis one step at a time. Analyzed sets of gels form processing units that can be combined into higher-level units in an iterative framework. The different conditions at the core of the experiment design, termed the dimensions of the experiment, are transformed from a multidimensional structure into a single hierarchy. The higher-level comparison is performed with the aid of a synthetic "adaptor" gel image, called a Raw Master Gel (RMG). The RMG allows data from an entire set of gels to be presented as a single gel image, thereby enabling the iterative process. Our model includes a flexible experimental design approach that allows the researcher to choose the condition to be analyzed a posteriori. It also enables data reuse, i.e., performing several different analysis designs on the same experimental data. The stability and reproducibility of a protein can be analyzed by tracking it up or down the hierarchical dimensions of the experiment.
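A toy sketch of such a hierarchical gel tree; the node structure and the averaged-union "master" spot table are only illustrative stand-ins for the Z4000 data model and its Raw Master Gel construction.

```python
from dataclasses import dataclass, field
from statistics import mean
from typing import Dict, List

@dataclass
class GelNode:
    """A node in a hierarchical gel-experiment tree.

    Leaves hold a single gel's spot table (spot id -> intensity); internal
    nodes represent one experimental dimension (e.g. dose or time point) and
    summarize their children with a synthetic master spot table."""
    name: str
    spots: Dict[str, float] = field(default_factory=dict)    # leaf data
    children: List["GelNode"] = field(default_factory=list)

    def master(self) -> Dict[str, float]:
        """Spot table for this node: the gel itself at a leaf, or an
        averaged union of the children's masters (a toy 'RMG')."""
        if not self.children:
            return self.spots
        tables = [child.master() for child in self.children]
        all_spots = set().union(*(t.keys() for t in tables))
        return {s: mean(t[s] for t in tables if s in t) for s in sorted(all_spots)}

# Two treatment conditions, two replicate gels each.
control = GelNode("control", children=[
    GelNode("ctrl_rep1", {"spot_A": 1.0, "spot_B": 0.4}),
    GelNode("ctrl_rep2", {"spot_A": 1.2, "spot_B": 0.5}),
])
treated = GelNode("treated", children=[
    GelNode("trt_rep1", {"spot_A": 0.3, "spot_B": 0.6, "spot_C": 2.0}),
    GelNode("trt_rep2", {"spot_A": 0.4, "spot_C": 1.8}),
])
experiment = GelNode("treatment_dimension", children=[control, treated])
print(experiment.master())
```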

19.
Hump-shaped distortions of the motor nerve response, resembling spontaneous or single quanta in amplitude and time course, were observed at 20°C when the nerve was stimulated during experiments on frog sartorius and cutaneous pectoral muscle preparations with focal extracellular recording. Statistical analysis excluded the possibility that this effect represents a superposition of spontaneous signals on evoked signals, and supported the hypothesis that it results from relatively unsynchronized release of the separate quanta that make up a multiquantal response. This hypothesis appears to be confirmed by a clear-cut correlation between the distribution of synaptic delays in unitary responses (when quantal content is low) and that observed in asynchronous responses (when quantal content is high); a polymodal distribution of synaptic delay is shown to be common to both cases. It is concluded that both the asynchronous response and the discrete nature of variations in synaptic delay are standard features of the mechanisms of transmitter release. I. M. Sechenov Institute of Evolutionary Physiology and Biochemistry, Academy of Sciences of the USSR, Leningrad. Translated from Neirofiziologiya, Vol. 18, No. 3, pp. 346–354, May–June, 1986.

20.
A constitutive relation proposed by Shoemaker (Ph.D. dissertation, 1984) to model the mechanical behavior of membranous or two-dimensional soft tissues is described. Experiments by Schneider (Ph.D. dissertation, 1982) on human skin and Lee et al. (Am. J. Physiol., 249, H222-H230, 1985) on canine pericardium, and the application of the constitutive model to biaxial stress-strain data from these experiments, are discussed. Some experimental data and predictions of the model obtained by curve fitting are presented for comparison. Values of material parameters are also presented. It is concluded that the constitutive model is well able to fit the results of individual tests, and that its generality (judged by the consistency of parameters from test to test of the same specimen), though not complete, compares favorably with some other results presented in the literature.

