Similar articles (20 results)
1.
Phage therapy is the use of bacteriophages as antimicrobial agents for the control of pathogenic and other problem bacteria. It has previously been argued that successful application of phage therapy requires a good understanding of the non-linear kinetics of phage–bacteria interactions. Here we combine experimental and modelling approaches to make a detailed examination of such kinetics for the important food-borne pathogen Campylobacter jejuni and a suitable virulent phage in an in vitro system. Phage-insensitive populations of C. jejuni arise readily, and as far as we are aware this is the first phage therapy study to test, against in vitro data, models for phage–bacteria interactions incorporating phage-insensitive or resistant bacteria. We find that even an apparently simplistic model fits the data surprisingly well, and we confirm that the so-called inundation and proliferation thresholds are likely to be of considerable practical importance to phage therapy. We fit the model to time series data in order to estimate thresholds and rate constants directly. A comparison of the fit for each culture reveals density-dependent features of phage infectivity that are worthy of further investigation. Our results illustrate how insight from empirical studies can be greatly enhanced by the use of kinetic models: such combined studies of in vitro systems are likely to be an essential precursor to building a meaningful picture of the kinetic properties of in vivo phage therapy.
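The inundation and proliferation thresholds mentioned above can be made concrete with a minimal mass-action model: susceptible hosts S, a phage-insensitive subpopulation R, and free phage P, integrated by simple Euler stepping. All parameter values below are illustrative assumptions, not the constants fitted in the study.

```python
# Illustrative parameters (NOT the fitted constants from the study):
a, b, k, m = 1.0, 1e-7, 100.0, 0.1   # host growth, adsorption, burst size, phage decay

def simulate(S, R, P, dt=0.001, steps=5000):
    """Euler-step susceptible S, phage-resistant R, and free phage P."""
    for _ in range(steps):
        infection = b * S * P              # mass-action adsorption
        S += (a * S - infection) * dt      # susceptibles grow and get infected
        R += a * R * dt                    # resistant cells simply grow
        P += (k * infection - m * P) * dt  # burst minus decay
    return S, R, P

inundation_threshold = a / b           # phage density above which S declines
proliferation_threshold = m / (k * b)  # host density above which phage can grow
```

Dosing above the inundation threshold drives the susceptible population down while the resistant subpopulation keeps growing, which is why resistance dominates the long-run outcome in models of this kind.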

2.
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For this kind of problem, with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which, as structured, provide a model with an error margin as large as the numbers being computed. 7. Finally, even if someone provided a function that perfectly separates strings with cleavage sites from strings without them, so-called simulated molecular evolution would be no better than random selection. Since a perfect fit would produce only exact ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations. We would just skip from one point to another in a typical random-walk manner.

3.
1. The normalization of biochemical data to weight them appropriately for parameter estimation is considered, with reference particularly to data from tracer kinetics and enzyme kinetics. If the data are in replicate, it is recommended that the sum of squared deviations for each experimental variable at each time or concentration point is divided by the local variance at that point. 2. If there is only one observation for each variable at each sampling point, normalization may still be required if the observations cover more than one order of magnitude, but there is no absolute criterion for judging the effect of the weighting that is produced. The goodness of fit that is produced by minimizing the weighted sum of squares of deviations must be judged subjectively. It is suggested that the goodness of fit may be regarded as satisfactory if the data points are distributed uniformly on either side of the fitted curve. A chi-square test may be used to decide whether the distribution is abnormal. The proportion of the residual variance associated with points on one or other side of the fitted curve may also be taken into account, because this gives an indication of the sensitivity of the residual variance to movement of the curve away from particular data points. These criteria for judging the effect of weighting are only valid if the model equation may reasonably be expected to apply to all the data points. 3. On this basis, normalizing by dividing the deviation for each data point by the experimental observation or by the equivalent value calculated by the model equation may both be shown to produce a consistent bias for numerically small observations, the former biasing the curve towards the smallest observations, the latter tending to produce a curve that is above the numerically smaller data points. 
It was found that dividing each deviation by the mean of the observed and calculated values appropriate to it produces a weighting that is fairly free from bias as judged by the criteria mentioned above. This normalization factor was tested on published data from both tracer kinetics and enzyme kinetics.
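The three normalizations compared above can be sketched as a single weighting function: divide each deviation by the observed value, the calculated value, or their mean (the option reported to be fairly free from bias). The data values below are illustrative.

```python
def weighted_ssq(obs, calc, mode="mean"):
    """Sum of squared deviations, each normalized by a chosen divisor."""
    total = 0.0
    for y, yhat in zip(obs, calc):
        if mode == "observed":      # biases the curve toward the smallest observations
            w = y
        elif mode == "calculated":  # tends to pull the curve above small data points
            w = yhat
        else:                       # "mean": fairly free from bias (per the abstract)
            w = 0.5 * (y + yhat)
        total += ((y - yhat) / w) ** 2
    return total

# Illustrative data spanning two orders of magnitude:
obs, calc = [100.0, 10.0, 1.0], [90.0, 11.0, 1.2]
```

Without some such weighting, the numerically largest observations would dominate the unweighted sum of squares whenever the data span more than one order of magnitude.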

4.
In this work we analyzed the quaternary structure of FAD-dependent 3-ketosteroid dehydrogenase (AcmB) from Sterolibacterium denitrificans, a protein that forms massive aggregates (>600 kDa) in solution. Using size-exclusion chromatography (SEC), dynamic light scattering (DLS), native PAGE and atomic force microscopy (AFM), we studied the nature of enzyme aggregation. Partial protein de-aggregation was facilitated by the presence of a non-ionic detergent such as Tween 20 or by a high degree of protein dilution, but not by addition of a reducing agent or an increase of ionic strength. The de-aggregating influence of Tween 20 had no impact on either the enzyme's specific activity or FAD reconstitution to recombinant AcmB. Joint experimental (DLS, isoelectric focusing) and theoretical investigations demonstrated a gradual shift of the enzyme's isoelectric point upon aggregation, from 8.6 for the monomeric form to as low as 5.0. AFM imaging on mica or highly oriented pyrolytic graphite (HOPG) surfaces enabled observation of individual protein monomers deposited from a highly diluted solution (0.2 μg/ml). This approach revealed that native AcmB can indeed be monomeric. AFM imaging supported by theoretical random sequential adsorption (RSA) kinetics allowed estimation of the distribution of enzyme forms in the bulk solution: 5% monomer, 11.4% dimer and 12% trimer. Finally, based on the AFM results as well as analysis of the surface of AcmB homology models, we conclude that aggregation is most probably initiated by hydrophobic forces and then assisted by electrostatic attraction between negatively charged aggregates and positively charged monomers.

5.
6.
An attractive approach to improving cold flow properties of biodiesel is to transesterify fatty acid methyl esters with higher alcohols such as n-butanol or with branched alcohols such as isopropanol. In this study, the reaction kinetics of Amberlyst-15 catalyzed transesterification of methyl stearate, a model biodiesel compound, with n-butanol have been examined. After identifying conditions to minimize both internal and external mass transfer resistances, the effects of catalyst loading, temperature, and the mole ratio of n-butanol to methyl stearate in the transesterification reaction were investigated. Experimental data were fit to a pseudo-homogeneous, activity-based kinetic model with inclusion of etherification reactions to appropriately characterize the transesterification system.

7.
Abstract

When using discontinuous assay of reactions, initial rates are often estimated from a limited number of time points. There has been no detailed study of how best to do this. In this work, time courses were simulated by different theoretical equations (including strong product inhibition, first order, Michaelis–Menten and truly linear), but with random error addition to each data point. Various methods were tested to fit an initial rate to the data, and the result compared with the known “true” value. Fitting a simple quadratic generally gives initial rates as accurate as any other curve, and is better than a linear fit if there are about 8 or more time points. For fewer points a linear fit gives less variable and often more accurate rates. The absolute contribution to data point error has a major impact on rate accuracy, and often dominates that due to curvature, so that sampling to at least 10% conversion is preferred. The accuracy of a linear fit can be improved by methods that reject some later points based on curvature tests. Awareness of these effects can help avoid rate inaccuracies of 10% or more due to poor methods of data analysis.
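The quadratic-versus-linear comparison can be sketched on synthetic first-order data with a known true initial rate; the slope of the fitted quadratic at t = 0 is its linear coefficient. The rate constant and time points below are assumptions for illustration.

```python
import numpy as np

# Estimate an initial rate from a curved progress curve by fitting a
# quadratic y = c2*t**2 + c1*t + c0; the slope at t = 0 is c1.

def initial_rate_quadratic(t, y):
    c2, c1, c0 = np.polyfit(t, y, 2)   # coefficients, highest degree first
    return c1                          # slope of the fitted curve at t = 0

def initial_rate_linear(t, y):
    slope, intercept = np.polyfit(t, y, 1)
    return slope                       # biased low when the curve bends

# Synthetic first-order progress curve, true initial rate = 5.0
t = np.linspace(0.0, 1.0, 10)          # 10 time points
y = 50.0 * (1.0 - np.exp(-0.1 * t))    # roughly 10% conversion by t = 1
```

On this noise-free curve the quadratic recovers the true rate almost exactly, while the straight line averages the falling slope and comes out low; with noisy data and few points the linear fit's lower variance can reverse that ranking, as the abstract notes.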

8.
An important element of protein folding theory has been the identification of equilibrium parameters that might uniquely distinguish rapidly folding polypeptide sequences from those that fold slowly. One such parameter, termed sigma, is a dimensionless, equilibrium measure of the coincidence of chain compaction and folding that is predicted to be an important determinant of relative folding kinetics. To test this prediction and improve our understanding of the putative relationship between nonspecific compaction of the unfolded state and protein folding kinetics, we have used small-angle X-ray scattering and circular dichroism spectroscopy to measure the sigma of five well-characterized proteins. Consistent with theoretical predictions, we find that near-perfect coincidence of the unfolded state contraction and folding (sigma approximately 0) is associated with the rapid kinetics of these naturally occurring proteins. We do not, however, observe any significant correlation between sigma and either the relative folding rates of these proteins or the presence or absence of well-populated kinetic intermediates. Thus, while sigma approximately 0 may be a necessary condition to ensure rapid folding, differences in sigma do not account for the wide range of rates and mechanisms with which naturally occurring proteins fold.

9.
Improvement in protein thermostability has often been found to be associated with an increase in proteolytic resistance, as revealed by comparative studies of homologous proteins from extremophiles and by mutational studies. The structural elements of a protein responsible for this association are not firmly established, although loops are implicated indirectly due to their structural role in protein stability. To get a better insight, a detailed study of protein-wide mutants and their influence on stability and proteolytic resistance would be helpful. To generate such a data set, a model protein, Bacillus subtilis lipase, was subjected to loop-scanning site-saturation mutagenesis at 86 positions spanning all loops including the termini. Upon screening of ∼16,000 clones, 17 single mutants with improved thermostability were identified, with increases in apparent melting temperature (Tm,app) of 1–6°C resulting in an increase in free energy of unfolding (ΔGunf) of 0.04–1.16 kcal/mol. The proteolytic resistance of all single mutants upon incubation with the nonspecific protease subtilisin A was determined. Upon comparison, post-proteolysis residual activities as well as the kinetics of proteolysis of the mutants showed excellent correlation with ΔGunf (r > 0.9), suggesting that proteolysis was strongly correlated with the global stability of this protein. This significant correlation in a set with the least possible sequence change (single amino acid substitutions), while covering >60% of the protein surface, strongly argues for the covariance of these two variables. Compared to studies of extremophiles, with their large sequence heterogeneity, the observed correlation in such a narrow sequence space (ΔΔGunf = 1.57 kcal/mol) attests to the robustness of this relation.

10.

Background

Social support is frequently linked to positive parenting behavior. Similarly, studies increasingly show a link between neighborhood residential environment and positive parenting behavior. However, less is known about how the residential environment influences parental social support. To address this gap, we examine the relationship between neighborhood concentrated disadvantage and collective efficacy and the level and change in parental caregiver perceptions of non-familial social support.

Methodology/Principal Findings

The data for this study came from three sources: the Project on Human Development in Chicago Neighborhoods (PHDCN) Study's Longitudinal Cohort Survey of caregivers and their offspring, a Community Survey of adult residents in these same neighborhoods, and the 1990 Census. Social support is measured at Wave 1 and Wave 3, and neighborhood characteristics are measured at Wave 1. Multilevel linear regression models are fit. The results show that neighborhood collective efficacy is a significant (β = .04; SE = .02; p = .03) predictor of positive change in perceived social support over a 7-year period, but not of the level of social support, adjusting for key compositional variables and neighborhood concentrated disadvantage. In contrast, concentrated neighborhood disadvantage is not a significant predictor of either the level of or change in social support.

Conclusion

Our findings suggest that neighborhood collective efficacy may be important for inducing the perception of support from friends in parental caregivers over time.

11.
The pattern of polymorphism in Arabidopsis thaliana
We resequenced 876 short fragments in a sample of 96 individuals of Arabidopsis thaliana that included stock center accessions as well as a hierarchical sample from natural populations. Although A. thaliana is a selfing weed, the pattern of polymorphism in general agrees with what is expected for a widely distributed, sexually reproducing species. Linkage disequilibrium decays rapidly, within 50 kb. Variation is shared worldwide, although population structure and isolation by distance are evident. The data fail to fit standard neutral models in several ways. There is a genome-wide excess of rare alleles, at least partially due to selection. There is too much variation between genomic regions in the level of polymorphism. The local level of polymorphism is negatively correlated with gene density and positively correlated with segmental duplications. Because the data do not fit theoretical null distributions, attempts to infer natural selection from polymorphism data will require genome-wide surveys of polymorphism in order to identify anomalous regions. Despite this, our data support the utility of A. thaliana as a model for evolutionary functional genomics.

12.
Model-based methods for genetic clustering of individuals, such as those implemented in structure or ADMIXTURE, allow the user to infer individual ancestries and study population structure. The underlying model makes several assumptions about the demographic history that shaped the analysed genetic data. One assumption is that all individuals are a result of K homogeneous ancestral populations that are all well represented in the data; another is that no drift happened after the admixture event. The histories of many real-world populations do not conform to that model, in which case taking the inferred admixture proportions at face value might be misleading. We propose a method to evaluate the fit of admixture models based on estimating the correlation of the residual difference between the true genotypes and the genotypes predicted by the model. When the model assumptions are not violated, the residuals from a pair of individuals are not correlated. In the case of a badly fitting admixture model, individuals with similar demographic histories have a positive correlation of their residuals. Using simulated and real data, we show how the method is able to detect a bad fit of inferred admixture proportions due to using an insufficient number of clusters K or to demographic histories that deviate significantly from the admixture model assumptions, such as admixture from ghost populations, drift after admixture events and non-discrete ancestral populations. We have implemented the method as open-source software that can be applied to both unphased genotypes and low-depth sequencing data.
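The residual-correlation idea can be sketched directly: under the admixture model the expected genotype matrix is 2QFᵀ, so when genotypes are simulated from the model itself, off-diagonal residual correlations should hover near zero. The Q and F below are random stand-ins, not output of structure or ADMIXTURE, and the dimensions are chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 4, 20000, 2                       # individuals, loci, ancestral clusters
Q = rng.dirichlet([1.0] * K, size=N)        # admixture proportions (N x K)
F = rng.uniform(0.05, 0.95, size=(M, K))    # ancestral allele frequencies (M x K)
p = Q @ F.T                                 # per-individual expected allele frequency
G = rng.binomial(2, p)                      # genotypes simulated FROM the model
residuals = G - 2 * p                       # observed minus model-predicted genotypes
corr = np.corrcoef(residuals)               # N x N correlation of residual rows
off_diag = corr[~np.eye(N, dtype=bool)]     # pairwise residual correlations
```

A positive off-diagonal entry for a pair of individuals would flag shared history the fitted model failed to absorb, which is the diagnostic the abstract describes.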

13.
A “parallel plate” model describing the electrostatic potential energy of protein-protein interactions is presented that provides an analytical representation of the effect of ionic strength on a bimolecular rate constant. The model takes into account the asymmetric distribution of charge on the surface of the protein and localized charges at the site of electron transfer that are modeled as elements of a parallel plate condenser. Both monopolar and dipolar interactions are included. Examples of simple (monophasic) and complex (biphasic) ionic strength dependencies obtained from experiments with several electron transfer protein systems are presented, all of which can be accommodated by the model. The simple cases do not require the use of both monopolar and dipolar terms (i.e., they can be fit well by either alone). The biphasic dependencies can be fit only by using dipolar and monopolar terms of opposite sign, which is physically unreasonable for the molecules considered. Alternatively, the high ionic strength portion of the complex dependencies can be fit using either the monopolar term alone or the complete equation; this assumes a model in which such behavior is a consequence of electron transfer mechanisms involving changes in orientation or site of reaction as the ionic strength is varied. Based on these analyses, we conclude that the principal applications of the model presented here are to provide information about the structural properties of intermediate electron transfer complexes and to quantify comparisons between related proteins or site-specific mutants. We also conclude that the relative contributions of monopolar and dipolar effects to protein electron transfer kinetics cannot be evaluated from experimental data by present approximations.

14.
15.
Fluorescence correlation spectroscopy (FCS) is a noninvasive technique that probes the diffusion dynamics of proteins down to single-molecule sensitivity in living cells. Critical mechanistic insight is often drawn from FCS experiments by fitting the resulting time-intensity correlation function, G(t), to known diffusion models. When simple models fail, the complex diffusion dynamics of proteins within heterogeneous cellular environments can be fit to anomalous diffusion models with adjustable anomalous exponents. Here, we take a different approach. We use the maximum entropy method to show—first using synthetic data—that a model for proteins diffusing while stochastically binding/unbinding to various affinity sites in living cells gives rise to a G(t) that could otherwise be equally well fit using anomalous diffusion models. We explain the mechanistic insight derived from our method. In particular, using real FCS data, we describe how the effects of cell crowding and binding to affinity sites manifest themselves in the behavior of G(t). Our focus is on the diffusive behavior of an engineered protein in 1) the heterochromatin region of the cell’s nucleus as well as 2) in the cell’s cytoplasm and 3) in solution. The protein consists of the basic region-leucine zipper (BZip) domain of the CCAAT/enhancer-binding protein (C/EBP) fused to fluorescent proteins.
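For reference, the kind of anomalous-diffusion model the abstract contrasts with can be sketched for a 2D Gaussian observation area as G(t) = (1/N)/(1 + (t/τ)^α), where α = 1 recovers normal diffusion; the particle number, timescale, and exponent below are illustrative assumptions.

```python
def g_anomalous(t, N=10.0, tau=1e-3, alpha=0.8):
    """Anomalous-diffusion FCS correlation for a 2D Gaussian observation area."""
    return (1.0 / N) / (1.0 + (t / tau) ** alpha)
```

At long lag times, α < 1 (subdiffusion) decays more slowly than normal diffusion; that slow tail is the feature a diffusion-plus-binding model can reproduce without any adjustable exponent.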

16.
Chemical inactivation of microorganisms is a common process widely employed in many fields, such as water treatment, preservation in the food industry and antimicrobial treatments in healthcare. For economical and efficient treatment, establishing the minimum breakpoint dosage of the chemical is essential. Even though experimental investigations have been extensive, theoretical understanding of such processes remains limited. Commonly employed theoretical analyses of the inactivation of microorganisms and the depletion of chemicals use kinetic expressions for the rates of depletion of the chemical and of the microorganisms. The terms chemical demand (x) and specific disinfectant demand (α) are often used in theoretical modelling of inactivation, and α has always been assumed to be constant in these models. However, intracellular concentration built up within the cells of the microorganisms during inactivation could weaken them, so that lower doses are required as disinfection proceeds; this makes the assumption of constant α inaccurate. Model equations are formulated based on these observations, correlating the parameters α and x with progressive inactivation (N/N0). The chemical concentration (C) is also expressed in terms of the inactivation time (t) and the survival ratio (N/N0) for given pH and temperature conditions. The model is examined using experimentally verified Ct data for the Giardia cyst/chlorine system. The respective values of x for different survival ratios were evaluated from the data using MATLAB software. The proposed model correlating the disinfectant demand (x) with the survival ratio (N/N0) fits satisfactorily with the values evaluated from the data. The rate constants for different pH and temperature conditions were evaluated and showed compatibility with the Arrhenius model, and the dependence of the frequency factors on pH was likewise compatible with accepted models. The Ct values regenerated with the kinetic data show a very accurate fit with published data.
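As a baseline for the variable-demand model described above, the classical constant-coefficient Chick–Watson form can be sketched: ln(N/N0) = −k·Cⁿ·t, so for n = 1 the Ct product needed for a target survival ratio is −ln(N/N0)/k. The rate constant below is illustrative, not a fitted Giardia/chlorine value.

```python
import math

def survival_ratio(k, C, t, n=1.0):
    """Chick-Watson survival: N/N0 = exp(-k * C**n * t)."""
    return math.exp(-k * C**n * t)

def ct_for_survival(k, ratio):
    """Ct product needed for a target N/N0 when n = 1."""
    return -math.log(ratio) / k

k = 0.05                         # illustrative rate constant, L/(mg*min)
ct99 = ct_for_survival(k, 0.01)  # Ct for 2-log (99%) inactivation
```

A concentration- or dose-dependent α, as proposed in the abstract, would make the effective demand fall as disinfection proceeds, bending the regenerated Ct curve away from this constant-coefficient baseline.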

17.
Pig kidney diamine oxidase (DAO) and other semicarbazide-sensitive amine oxidases (SSAO) show clear substrate-inhibition kinetics and a reaction-scheme mechanism based on two substrate binding sites. We evaluated several reaction-scheme mechanisms with a non-linear regression program (NCSS), estimating R2, the constants of the equations and their standard errors, and we determined the deviation of experimental data from the theoretical equations. The best fit was obtained with a “dead end” mechanism with two binding sites. Based on this scheme, other schemes for a two-substrate reaction and for mechanisms of inhibition were constructed. These reaction schemes, even at low substrate concentration, fitted the experimental data better than Michaelis–Menten kinetics, and provided information on the mechanisms of action of inhibitors. The presence of two substrate-binding sites on pig kidney DAO was confirmed by all experimental data.
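A common closed form for substrate inhibition of the "dead end" type (a second substrate molecule binds to give a non-productive ES2 complex) is v = Vmax·S/(Km + S + S²/Ki). Whether this exact equation matches the schemes evaluated in the paper is an assumption, and the constants below are purely illustrative.

```python
def rate(S, Vmax=100.0, Km=0.1, Ki=2.0):
    """Substrate-inhibition rate law with a non-productive ES2 complex."""
    return Vmax * S / (Km + S + S * S / Ki)

# Unlike Michaelis-Menten, v peaks at S = sqrt(Km * Ki) and then declines.
S_opt = (0.1 * 2.0) ** 0.5
```

The decline of v above S_opt is the qualitative signature that distinguishes a two-site scheme from simple Michaelis–Menten saturation in fitted data.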

18.
A theoretical analysis of several protein denaturation models (Lumry-Eyring models) that include a rate-limited step leading to an irreversibly denatured state of the protein (the final state) has been carried out. The differential scanning calorimetry transitions predicted for these models can be broadly classified into four groups: situations A, B, C, and C′. (A) The transition is calorimetrically irreversible but the rate-limited, irreversible step takes place with significant rate only at temperatures slightly above those corresponding to the transition. Equilibrium thermodynamics analysis is permissible. (B) The transition is distorted by the occurrence of the rate-limited step; nevertheless, it contains thermodynamic information about the reversible unfolding of the protein, which could be obtained upon the appropriate data treatment. (C) The heat absorption is entirely determined by the kinetics of formation of the final state and no thermodynamic information can be extracted from the calorimetric transition; the rate-determining step is the irreversible process itself. (C′) same as C, but, in this case, the rate-determining step is a previous step in the unfolding pathway. It is shown that ligand and protein concentration effects on transitions corresponding to situation C (strongly rate-limited transitions) are similar to those predicted by equilibrium thermodynamics for simple reversible unfolding models. It has been widely held in recent literature that experimentally observed ligand and protein concentration effects support the applicability of equilibrium thermodynamics to irreversible protein denaturation. The theoretical analysis reported here disfavors this claim.

19.
A great need exists for prediction of antibody response for the generation of antibodies toward protein targets. Earlier studies have suggested that prediction methods based on hydrophilicity propensity scales, in which the degree of exposure of the amino acid in an aqueous solvent is calculated, have limited value. Here, we show a comparative analysis based on 12,634 affinity-purified antibodies generated in a standardized manner against human recombinant protein fragments. The antibody response (yield) was measured and compared to theoretical predictions based on a large number (544) of published propensity scales. The results show that some of the scales have predictive power, although the overall Pearson correlation coefficient is relatively low (0.2) even for the best performing amino acid indices. Based on the current data set, a new propensity scale was calculated with a Pearson correlation coefficient of 0.25. The values correlated to some extent with earlier scales, including a large penalty for hydrophobic and cysteine residues and a high positive contribution from acidic residues, but with a relatively low positive contribution from basic residues. The fraction of immunogens generating low antibody responses was reduced from 30% to around 10% if immunogens with a high propensity score (>0.48) were selected as compared to immunogens with lower scores (<0.29). The study demonstrates that a propensity scale might be useful for prediction of antibody response generated by immunization of recombinant protein fragments. The data set presented here can be used for further studies to design new prediction tools for the generation of antibodies to specific protein targets.
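The scoring step can be sketched as averaging per-residue scale values over a sequence. The tiny scale below is a hypothetical stand-in with only the qualitative features described (acidic residues positive, cysteine and hydrophobic residues penalized, basic residues modest); it is NOT the scale derived in the study.

```python
# Hypothetical per-residue values (illustrative, not the published scale):
scale = {"D": 0.9, "E": 0.8, "K": 0.3, "R": 0.3, "C": -0.8, "F": -0.6,
         "L": -0.5, "A": 0.0, "G": 0.1, "S": 0.2}

def propensity_score(seq):
    """Mean scale value over the sequence; residues missing from the scale score 0."""
    vals = [scale.get(aa, 0.0) for aa in seq]
    return sum(vals) / len(vals)
```

Ranking candidate protein fragments by such a score, and preferring high scorers, is the selection step that reduced the fraction of low-responding immunogens in the study.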

20.
Modelled as finite homogeneous Markov chains, probabilistic cellular automata with local transition probabilities in (0, 1) always possess a stationary distribution. This result alone is not very helpful when it comes to predicting the final configuration; one also needs a formula connecting the probabilities in the stationary distribution to some intrinsic feature of the lattice configuration. Previous results on asynchronous cellular automata have shown that such a feature really exists: it is the number of zero-one borders within the automaton's binary configuration. An exponential formula in the number of zero-one borders has been proved for the 1-D, 2-D and 3-D asynchronous automata with neighborhoods of three, five and seven, respectively. We perform computer experiments on a synchronous cellular automaton to check whether the empirical distribution also obeys that theoretical formula. The numerical results indicate a perfect fit for neighborhoods three and five, which opens the way for a rigorous proof of the formula in this new, synchronous case.
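The intrinsic feature the formula depends on, the number of zero-one borders, and the corresponding exponential weights can be sketched directly; λ below is an illustrative constant, not the one established in the proofs.

```python
def borders(config):
    """Count zero-one borders in a cyclic binary configuration."""
    n = len(config)
    return sum(config[i] != config[(i + 1) % n] for i in range(n))

def stationary_weights(n, lam):
    """Normalized weights proportional to lam**borders(c) over all 2**n configurations."""
    configs = [tuple((i >> j) & 1 for j in range(n)) for i in range(2 ** n)]
    w = [lam ** borders(c) for c in configs]
    total = sum(w)
    return {c: wi / total for c, wi in zip(configs, w)}
```

For λ < 1 the two uniform configurations (all zeros, all ones) have no borders and therefore carry maximal stationary weight, which is the kind of prediction the computer experiments compare against empirical frequencies.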
