Similar articles
20 similar articles found.
1.
The 2015 epidemic of Middle East respiratory syndrome (MERS) in the Republic of Korea was the largest outbreak outside the Middle East. By September 2, 2015, this epidemic had caused 185 laboratory-confirmed cases and 36 deaths in the Republic of Korea, attracting wide public attention. Based on the detailed patient data released by the World Health Organization (WHO) and the actual propagation of the epidemic, we construct two dynamical models to simulate the propagation processes from May 20 to June 8 and from June 9 to July 10, 2015, respectively, and find that the basic reproduction number R0 reaches 4.422. The numerical analysis shows that the outbreak spread quickly because of a lack of self-protection awareness and of targeted control measures. Partial correlation analysis shows that the parameters β1 and γ are strongly correlated with R0, i.e., the infectivity and the proportion of asymptomatic infected cases strongly influence the spread of the disease. Sensitivity analysis indicates that strengthening the self-protection of susceptible individuals and quickly isolating or monitoring close contacts are effective measures to control the disease.
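The abstract does not reproduce the model equations, so the sketch below is only an illustration of the kind of compartmental (SEIR-type) simulation such a study relies on; the compartments, rates, and population size are placeholders, not the paper's fitted values.

```python
# Illustrative SEIR sketch (placeholder parameters, not the paper's fitted MERS model).
# beta: transmission rate, sigma: 1/incubation period, gamma: 1/infectious period.
def seir(beta=0.9, sigma=1/6.0, gamma=1/5.0, N=50_000_000, I0=1, days=60, dt=0.1):
    S, E, I, R = N - I0, 0.0, float(I0), 0.0
    t, infected = 0.0, []
    while t <= days:
        infected.append((round(t, 1), I))
        new_inf = beta * S * I / N * dt        # S -> E
        new_sympt = sigma * E * dt             # E -> I
        new_rec = gamma * I * dt               # I -> R
        S, E, I, R = S - new_inf, E + new_inf - new_sympt, I + new_sympt - new_rec, R + new_rec
        t += dt
    return infected

# For this simple SEIR, R0 = beta / gamma; the placeholder values give 4.5,
# the same order of magnitude as the 4.422 reported in the abstract.
print("R0 =", 0.9 / (1 / 5.0))
```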

2.
As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors—sampling the most connected, random, and friends of random individuals—in three complex social networks—a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals—early and accurate detection of epidemic emergence and peak, and general situational awareness—we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information.  相似文献   
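A minimal sketch of the three sensor-selection strategies compared above, drawn on a synthetic scale-free network; networkx and the Barabási-Albert graph are stand-ins for the paper's empirical networks, and the sensor count is arbitrary.

```python
# Three sensor-selection strategies on a synthetic scale-free network (illustrative only).
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)
n_sensors = 100

# 1) Most connected: take the highest-degree nodes (network hubs).
hubs = [node for node, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:n_sensors]]

# 2) Random: a uniform sample of nodes.
random_sensors = random.sample(list(G.nodes), n_sensors)

# 3) Friends of random: sample random nodes, then monitor one random neighbour of each.
#    Neighbours of random nodes have above-average degree (the friendship paradox),
#    so this finds well-connected sensors without needing the full network map.
friends = [random.choice(list(G.neighbors(v))) for v in random.sample(list(G.nodes), n_sensors)]

def mean_degree(nodes):
    return sum(G.degree(n) for n in nodes) / len(nodes)

print(mean_degree(hubs), mean_degree(random_sensors), mean_degree(friends))
```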

3.
When a pathogen is rare in a host population, there is a chance that it will die out because of stochastic effects instead of causing a major epidemic. Yet no criteria exist to determine when the pathogen increases from a risky level, at which it still has a large chance of dying out, to the point at which a major outbreak is almost certain. We introduce such an outbreak threshold (T0), and find that for large and homogeneous host populations, in which the pathogen has a reproductive ratio R0, on the order of 1/Log(R0) infected individuals are needed to prevent stochastic fade-out during the early stages of an epidemic. We also show how this threshold scales with higher heterogeneity and R0 in the host population. These results have implications for controlling emerging and re-emerging pathogens.

With the constant risk of pathogens emerging [1], such as Severe Acute Respiratory Syndrome (SARS) or avian influenza virus in humans, foot-and-mouth disease virus in cattle in the United Kingdom [2], or various plant pathogens [3], it is imperative to understand how novel strains gain their initial foothold at the onset of an epidemic. Despite this importance, it has seldom been addressed how many infected individuals are needed to declare that an outbreak is occurring: that is, the point at which the pathogen passes from being able to go extinct due to stochastic effects to infecting a high enough number of hosts that the outbreak size increases in a deterministic manner (Figure 1A). Generally, the presence of a single infected individual is not sufficient to be classified as an outbreak, so how many infected individuals need to be present to cause this deterministic increase? Understanding at what point this change arises is key to preventing and controlling nascent outbreaks as they are detected, as well as determining the best course of action for prevention or treatment.

Figure 1. The outbreak threshold in homogeneous and heterogeneous populations. (A) A schematic of pathogen emergence. This graph shows the early stages of several strains of an epidemic, where R0 = 1.25. The black line denotes the outbreak threshold (T0 = 1/Log(R0) = 4.48). Blue thin lines show cases in which the pathogen goes extinct and does not exceed the threshold; the red thick line shows an epidemic that exceeds the threshold and persists for a long period of time. Simulations were based on the Gillespie algorithm [22]. (B) Outbreak threshold in a homogeneous (black thick line) or in a heterogeneous population, for increasing R0. The threshold was calculated following the method described by Lloyd-Smith et al. [11] and is shown for different values of k, the dispersion parameter of the offspring distribution, as obtained from data on previous epidemics [11]. If the threshold lies below one, around only one infected individual is needed to give a high outbreak probability.

The classic prediction for pathogen outbreak is that the pathogen's reproductive ratio (R0), the number of secondary infections caused by an infected host in a susceptible population, has to exceed one [4], [5]. This criterion only strictly holds in deterministic (infinite population) models; in finite populations, there is still a chance that the infection will go extinct by chance rather than sustain itself [4]–[6].

Existing studies usually consider random drift affecting outbreaks in the context of estimating how large a host population needs to be to sustain an epidemic (the "Critical Community Size" [4], [7], [8]), calculating the outbreak probability in general [9]–[12], or ascertaining whether a sustained increase in cases over an area has occurred [13]. Here we discuss the fundamental question of how many infected individuals are needed to almost guarantee that a pathogen will cause an outbreak, as opposed to the population size needed to maintain an epidemic once it has appeared (Critical Community Size; see also Box 1). We find that only a small number of infected individuals are often needed to ensure that an epidemic will spread.
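The Figure 1A caption states that the simulations used the Gillespie algorithm; the following is a minimal stochastic SIR sketch in that spirit, with R0 = beta/gamma = 1.25 as in the caption and all other values illustrative.

```python
# Minimal Gillespie-style stochastic SIR; many runs fade out before reaching T0 ~ 1/ln(R0).
import math
import random

def gillespie_sir(N=10_000, beta=1.25, gamma=1.0, I0=1, t_max=50.0, seed=0):
    rng = random.Random(seed)
    S, I, t = N - I0, I0, 0.0
    peak = I
    while I > 0 and t < t_max:
        rate_inf = beta * S * I / N          # rate of new infections
        rate_rec = gamma * I                 # rate of recoveries
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # waiting time to the next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1
        else:
            I -= 1
        peak = max(peak, I)
    return peak

threshold = 1 / math.log(1.25)               # T0 ~ 4.48 infected, as in the Figure 1A caption
peaks = [gillespie_sir(seed=s) for s in range(200)]
print("T0 ~", round(threshold, 2),
      "| fraction of runs that never exceeded T0:",
      sum(p <= threshold for p in peaks) / len(peaks))
```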

Box 1. Glossary of Key Terms

  • The Basic Reproductive Ratio (R0) is the number of secondary infections caused by a single infected individual, in a susceptible population. It is classically used to measure the rate of pathogen spread. In infinite-population models, a pathogen can emerge if R0>1. In a finite population, the pathogen can emerge from a single infection with probability 1-1/R0 if R0>1, otherwise extinction is certain.
  • The Critical Community Size (CCS) is defined as the total population size (of susceptible and infected individuals, or others) needed to sustain an outbreak once it has appeared. This idea was classically applied to determining what towns were most likely to maintain measles epidemics [7], so that there would always be some infected individuals present, unless intervention measures were taken.
  • The Outbreak Threshold (T0) has a similar definition to the CCS, but is instead for use at the onset of an outbreak, rather than once it has appeared. It measures how many infected individuals (not the total population size) are needed to ensure that an outbreak is very unlikely to go extinct by drift. Note that the outbreak can still go extinct in the long term, even if T0 is exceeded, if there are not enough susceptible individuals present to carry the infection afterwards.
We introduce the concept of the outbreak threshold (denoted T0), which we define as the number of infected individuals needed for the disease to spread in an approximately deterministic manner. T0 can be given by simple equations. Indeed, if the host population is homogeneous (that is, where there is no individual variability in reproductive rates) and large enough so that depletion of the pool of susceptible hosts is negligible, then the probability of pathogen extinction if I infected hosts are present is (1/R0)^I ([6], details in Material S.1 in Text S1). By solving this equation in the limit of the extinction probability going to zero, we find that on the order of 1/Log(R0) infected hosts are needed for an outbreak to be likely (black thick curve in Figure 1B), a result that reflects similar theory from population genetics [14]–[16]. Note that this result only holds in a finite population, as an outbreak in a fully susceptible infinite population is certain if R0>1 ([4], see also Material S.1 in Text S1).

This basic result can be modified to consider more realistic or precise cases, and T0 can be scaled up if an exact outbreak risk is desired. For example, for the pathogen extinction probability to be less than 1%, there need to be at least 5/Log(R0) infected individuals. More generally, the pathogen extinction probability is lower than a given threshold c if there are at least −Log(c)/Log(R0) infected individuals. Furthermore, if only a proportion p<1 of all infected individuals are detected, then the outbreak threshold order is p/Log(R0). Also, if there exists a time lag τ between an infection occurring and its report, then the order of T0 is e^(−τ(β−δ))/Log(R0), where β is the infection transmission rate and 1/δ the mean duration of the infectious period (Material S.1 in Text S1). Finally, we can estimate how long it would take, on average, for the threshold to be reached and find that, if the depletion in susceptible hosts is negligible, this duration is on the order of 1/(β−δ) (Material S.1 in Text S1).

So far we have only considered homogeneous outbreaks, where on average each individual has the same pathogen transmission rate. In reality, there will be a large variance among individual transmission rates, especially if "super-spreaders" are present [17]. This population heterogeneity can either be deterministic, due to differences in immune history among hosts or differences in host behavior, or stochastic, due to sudden environmental or social changes. Spatial structure can also act as a form of heterogeneity, if each region or infected individual is subject to different transmission rates, or degrees of contact with other individuals [18]. In such heterogeneous host populations, the number of secondary cases an infected individual engenders is jointly captured by R0 and a dispersion parameter k (see Box 2). This dispersion parameter controls the degree of variation in individual transmission rates, while fixing the average R0. The consequence of this model is that the majority of infected hosts tend to cause few secondary infections, while the minority behave as super-spreaders, causing many secondary infections. Host population heterogeneity (obtained with lower values of k) increases the probability that an outbreak will go extinct, as the pathogen can only really spread via one of the dwindling super-spreading individuals. In this heterogeneous case, we can find accurate values of T0 numerically.

As shown in Figure 1B, if R0 is close to 1, host heterogeneity (k) does not really matter (T0 tends to be high). However, if the pathogen has a high R0 and thus spreads well, then host heterogeneity strongly affects T0. Additionally, we find that the heterogeneous threshold simply scales as a function of k and R0² (see Box 2). As an example, if k = 0.16, as estimated for SARS infections [11], the number of infected individuals needed to guarantee an outbreak increases 4-fold compared to a homogeneous population (Material S.3 in Text S1).
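A quick numeric check of the homogeneous-case formulas above (threshold of order 1/Log(R0), the −Log(c)/Log(R0) scaling for a target extinction probability c, and the p/Log(R0) correction for partial detection), taking Log as the natural logarithm and borrowing the R0 values quoted later in Box 4. These are homogeneous-case values only; the heterogeneous Equation 1 is not reproduced in this excerpt.

```python
# Homogeneous outbreak-threshold formulas as stated in the text (natural log assumed).
import math

def t0(r0):                            # order-of-magnitude threshold, ~1/Log(R0)
    return 1 / math.log(r0)

def t0_cutoff(r0, c):                  # infected hosts needed for extinction probability < c
    return -math.log(c) / math.log(r0)

def t0_partial(r0, p):                 # threshold in *detected* cases if a proportion p is observed
    return p / math.log(r0)

for name, r0 in [("smallpox, Birmingham", 1.6), ("SARS, Singapore", 1.63)]:
    print(f"{name}: T0 ~ {t0(r0):.1f}; "
          f"<1% extinction risk needs ~{t0_cutoff(r0, 0.01):.0f} infected; "
          f"with 50% detection, ~{t0_partial(r0, 0.5):.1f} detected cases")
```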

Box 2. Heterogeneous Outbreak Threshold

In a heterogeneous host population (see the main text for the bases of this heterogeneity), it has been shown that the number of secondary infections generated per infected individual can be well described by a negative binomial distribution with mean R0 and dispersion parameter k [11]. The dispersion parameter determines the level of variation in the number of secondary infections: if k = 1, we have a homogeneous outbreak, but heterogeneity increases as k drops below 1; that is, it enlarges the proportion of infected individuals that are either "super-spreaders" or "dead-ends" (those that do not transmit the pathogen). Lloyd-Smith et al. [11] showed how to estimate R0 and k from previous epidemics by applying a maximum-likelihood model to individual transmission data.

Although in this case it is not possible to find a strict analytical form for the outbreak threshold, progress can be made if we measure the ratio of the heterogeneous and homogeneous thresholds. This function yields values that are independent of a strict cutoff probability (Material S.3 in Text S1). By investigating this ratio, we first found that for a fixed R0, a function of order 1/k fitted the numerical solutions very well. By measuring these curves for different R0 values, we further found that a function of order 1/R0² provided a good fit to the coefficients. By fitting a function of order 1/(kR0²) to the numerical data using least-squares regression in Mathematica 8.0 [19], we obtained the following adjusted form for the outbreak threshold T0 in a heterogeneous population (Equation 1). As in the homogeneous case, T0 only provides us with an order of magnitude, and it can be multiplied by −Log(c) to find the number of infected hosts required for there to be a probability of outbreak equal to 1−c. A sensitivity analysis shows that Equation 1 tends to be more strongly affected by changes in R0 than in k (Material S.3 in Text S1).

The outbreak threshold T0 of an epidemic, which we define as the number of infected hosts above which there is very likely to be a major outbreak, can be estimated using simple formulae. Currently, to declare that an outbreak has occurred, studies choose an arbitrary low or high threshold depending, for instance, on whether they are monitoring disease outbreaks or modeling probabilities of emergence. We show that the outbreak threshold can be defined without resorting to an arbitrary cutoff. Of course, the generality of this definition has a cost, which is that the corresponding value of T0 is only an order of magnitude. Modifications are needed to set a specific cutoff value or to capture host heterogeneity in transmission or incomplete sampling.

These results are valid if there are enough susceptible individuals present to maintain an epidemic in the initial stages, as assumed in most studies on emergence [6], [11]–[13]; otherwise the pathogen may die out before the outbreak threshold is reached (Box 3 and Material S.2 in Text S1). Yet the key message generally holds: while the number of infections lies below the threshold, there is a strong chance that the pathogen will vanish without causing a major outbreak. From a biological viewpoint, unless R0 is close to one, these thresholds tend to be small (on the order of 5 to 20 individuals). This contrasts with estimates of the Critical Community Size, which tend to lie in the hundreds of thousands of susceptible individuals [3], [7], [8]. Therefore, while only a small infected population is needed to trigger a full-scale epidemic, a much larger pool of individuals is required to maintain an epidemic, once it appears, and prevent it from fading out. This makes sense, since there tend to be more susceptible hosts early in the outbreak than later on.

Box 3. Effect of Limiting Host Population Size

The basic result for the homogeneous population, T0 ∼ 1/Log(R0), assumes that during the time to pathogen outbreak there are always enough susceptible individuals available to transmit to, so R0 remains approximately constant during emergence. This assumption can be violated if R0 is close to 1, or if the population size is small. More precisely, if the maximum outbreak size in a Susceptible-Infected-Recovered (SIR) epidemic, which is given by N(1 − (1 + Log(R0))/R0), is less than 1/Log(R0), then the threshold cannot be reached. Since this maximum depends on the population size, outbreaks in smaller populations are less likely to reach the outbreak threshold. For example, if N = 10,000 then R0 needs to exceed 1.06 for 1/Log(R0) to be reached; this increases to 1.34 if N decreases to 100. Further details are in Material S.2 in Text S1.

Estimates of R0 and k from previous outbreaks can be used to infer the approximate size of this threshold, to determine whether a handful or hundreds of infected individuals are needed for an outbreak to establish itself. Box 4 outlines two case studies (smallpox in England and SARS in Singapore), estimates of T0 for these, and how knowledge of the threshold could have aided their control. These examples highlight how the simplicity and rigour of the definition of T0 opens a wide range of applications, as it can be readily applied to specific situations in order to determine the most adequate policies to prevent pathogen outbreaks.
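A quick check of the two numerical examples in Box 3, assuming the omitted expression is the standard SIR peak prevalence N(1 − (1 + Log(R0))/R0); this assumption is mine, made because it reproduces both quoted examples.

```python
# Check of the Box 3 examples, assuming the standard SIR peak prevalence
# I_max = N * (1 - (1 + ln(R0)) / R0) is the expression omitted by the text.
import math

def sir_peak(N, r0):
    return N * (1 - (1 + math.log(r0)) / r0)

def t0(r0):
    return 1 / math.log(r0)

for N, r0 in [(10_000, 1.06), (100, 1.34)]:
    print(f"N={N}, R0={r0}: peak ~ {sir_peak(N, r0):.1f} vs T0 ~ {t0(r0):.1f}")
# N = 10,000 needs R0 just above ~1.06 and N = 100 needs R0 above ~1.34 for the
# peak to reach the threshold, matching the two examples quoted in the box.
```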

Box 4. Two Case Studies: Smallpox in England and SARS in Singapore

A smallpox outbreak (Variola minor) was initiated in Birmingham, United Kingdom in 1966 due to a laboratory release. We calculate a threshold such that the chance of extinction is less than 0.1%, which means that T0 is equal to 7 times Equation 1. With an estimated R0 of 1.6 and dispersion parameter k = 0.65 [11], T0 is approximately equal to 9 infections. The transmission chain for this outbreak is now well known [20]. Due to the prior eradication of smallpox in the United Kingdom, the pathogen was not recognised until around the 45th case was detected, by which point a full-scale epidemic was underway. A second laboratory outbreak arose in 1978, but the initial case (as well as a single secondary case) was quickly isolated, preventing a larger spread of the pathogen. Given the fairly low T0 for the previous epidemic, early containment was probably essential in preventing a larger outbreak.

The SARS outbreak in Singapore in 2003 is an example of an outbreak with known super-spreaders [21], with an estimated initial R0 of 1.63 and a low k of 0.16 [11]. T0 is estimated to be around 27 infections. The first cases were observed in late February, with patients being admitted for pneumonia. Strict control measures were invoked from March 22nd onwards, including home quarantining of those exposed to SARS patients and the closing down of a market where a SARS outbreak was observed. By this date, 57 cases had been detected, although it is unclear how many of those cases were still ongoing at that date. This point is important, as it is the infected population size that determines T0.

Overall, very early measures were necessary to successfully prevent a smallpox outbreak due to its rapid spread. In theory, it should have been "easier" to contain the SARS outbreak, as its threshold is three times greater than that for smallpox due to higher host heterogeneity (lower k). However, the first reported infected individual was a super-spreader, who infected at least 21 others. This reflects the fact that in heterogeneous outbreaks, although the emergence probability is lower, the disease spreads faster (compared to homogeneous infections) once it does appear [11]. Quick containment of the outbreak was difficult to achieve since SARS was not immediately recognised, and because the incubation period is around 5 days, by which point the first case had already caused further secondary infections. However, in subsequent outbreaks super-spreaders might not be infected early on, allowing more time to contain the spread.

For newly arising outbreaks, T0 can be applied in several ways. If the epidemic initially spreads slowly, then R0 and T0 can be measured directly. Alternatively, estimates of T0 can be calculated from previous outbreaks, as outlined above. In both cases, knowing what infected population size is needed to guarantee emergence can help to assess how critical a situation is. More generally, due to the difficulty in detecting real-world outbreaks that go extinct very quickly, experimental methods might be useful in determining to what extent different levels of T0 capture the likelihood of full epidemic emergence.

4.
The basic reproduction number, R0, is probably the most important quantity in epidemiology. It is used to measure the transmission potential during the initial phase of an epidemic. In this paper, we are specifically concerned with quantifying the spread of a disease modeled by a Markov chain. Because repeated contacts take place between a typical infective individual and individuals who have already been infected, R0 overestimates the average number of secondary infections. We present two alternative measures, namely the exact reproduction number, Re0, and the population transmission number, Rp, that overcome this difficulty and provide valuable insight. The applicability of Re0 and Rp to the control of disease spread is also examined.

5.
Fast photosignals (FPS) with R1 and R2 components were measured in retinas of cattle, rat, and frog within a temperature range of 0° to 60°C. Except for temperatures near 0°C, the signal rise of the R1 component was determined by the duration of the exciting flash. The kinetics of the R2 component and the meta transition of rhodopsin in the cattle and rat retina were compared. For the analysis of the FPS it is presupposed that the signal is produced by light-induced charges on the outer segment envelope membrane that spread onto the whole plasma membrane of the photoreceptor cell. To a good approximation, this mechanism can be described by a model circuit with two distinct capacitors. In this model, the charging capacitance of the pigmented outer segment envelope membrane and the capacitance of the receptor's nonpigmented plasma membrane are connected via the extra- and intracellular electrolyte resistances. The active charging is explained by two independent processes, both with exponential rise (R1 and R2), that are due to charge displacements within the pigmented envelope membrane. The time constant τ2 of the R2 membrane charging process shows a strong temperature dependence; that of the charge redistribution, τr, shows a weak one. In frog and cattle retinas the active charging is much slower, within a large temperature range, than the passive charge redistribution. From the two-capacitor model it follows, for τr ≪ τ2, that the rise of the R2 component is determined by τr, whereas the decay is given by τ2. For the rat retina, however, τ2 approaches τr at physiological temperatures and becomes smaller than τr above 45°C. In this temperature range, where τ2 ≈ τr, both processes affect the rise and decay of the photosignal. The absolute values of τr are in good accordance with the known electric parameters of the photoreceptors. At least in the cattle retina, the time constant τ2 is identical with that of the slow component of meta II formation. The strong temperature dependence of the meta transition time gives rise to the marked decrease of the R2 amplitude with falling temperature. As the R1 rise could not be fully time-resolved, the signal analysis does not yield the time constant τ1 of the R1-generating process. It could be established, however, within the whole temperature range, that the decay of the R1 component is determined by τr. Using an extended model that allows for membrane leakage, we show that in normal Ringer solution the membrane time constant does not influence the signal time course and amplitude.

6.
7.
Despite the important role of the unfolded states in protein stability, folding, and aggregation, they remain poorly understood due to the lack of residue-specific experimental data. Here, we explore features of the unfolded state of the NTL9 protein by applying all-atom replica-exchange simulations to the two fragment peptides NTL9(1–22) and NTL9(6–17). We found that while NTL9(6–17) is unstructured, NTL9(1–22) transiently folds as various β-hairpins, a fraction of which contain a native β-sheet. Interestingly, despite a large number of charged residues, the formation of backbone hydrogen bonds is concomitant with hydrophobic but not electrostatic contacts. Although the fragment peptides lack a proposed specific contact between Asp8 and Lys12, the individually weak, nonspecific interactions with lysines together stabilize the charged Asp8, leading to a pKa shift of nearly 0.5 units, in agreement with the NMR data. Taken together, our data suggest that the unfolded state of NTL9 likely contains a β-hairpin in segment 1–22 with sequence-distant hydrophobic contacts, thus lending support to a long-standing hypothesis that the unfolded states of proteins exhibit native-like topology with hydrophobic clusters.  相似文献   

8.
An analytical solution is obtained for the three-dimensional spatial distribution of potential inside a flat cell, such as the layer of horizontal cells, as a function of its geometry and resistivity characteristics. It was found that, within a very large range of parameter values, the potential is given by [Formula: see text] where r = ρ/ρ0, = z0, ρ = (Ri/Rm)·ρ0, δ = h0; K is a constant; J is the assumed synaptic current; ρ, z are cylindrical coordinates; ρ0 is the radius of the synaptic area of excitation; h is the cell thickness; and Ri, Rm are the intracellular and membrane resistivities, respectively. Formula A closely fits data for the spatial decay of potential which were obtained from the catfish internal and external horizontal cells. It predicts a decay which is exponential down to about 40% of the maximum potential but is much slower than exponential below that level, a characteristic also exhibited by the data. Such a feature in the decay mode allows signal integration over the large retinal areas which have been observed experimentally both at the horizontal and ganglion cell stages. The behavior of the potential distribution as a function of the flat cell parameters is investigated, and it is found that for the range of the horizontal cell thicknesses (10-50 μ) the decay rate depends solely on the ratio Rm/Ri. Data obtained from both types of horizontal cells by varying the diameter of the stimulating spot and for three widely different intensity levels were closely fitted by equation A. In the case of the external horizontal cell, the fit for different intensities was obtained by varying the ratio Rm/Ri; in the case of the internal horizontal cell it was found necessary, in order to fit the data for different intensities, to vary the assumed synaptic current J.  相似文献   

9.
10.
Sankar Subramanian. Genetics, 2013, 193(3): 995-1002.
Previous studies observed a higher ratio of divergences at nonsynonymous and synonymous sites (ω = dN/dS) in species with a small population size compared to that estimated for those with a large population size. Here we examined the theoretical relationship between ω, effective population size (Ne), and selection coefficient (s). Our analysis revealed that when purifying selection is high, ω of species with small Ne is much higher than that of species with large Ne. However the difference between the two ω reduces with the decline in selection pressure (s → 0). We examined this relationship using primate and rodent genes and found that the ω estimated for highly constrained genes of primates was up to 2.9 times higher than that obtained for their orthologous rodent genes. Conversely, for genes under weak purifying selection the ω of primates was only 17% higher than that of rodents. When tissue specificity was used as a proxy for selection pressure we found that the ω of broadly expressed genes of primates was up to 2.1-fold higher than that of their rodent counterparts and this difference was only 27% for tissue specific genes. Since most of the nonsynonymous mutations in constrained or broadly expressed genes are deleterious, fixation of these mutations is influenced by Ne. This results in a higher ω of these genes in primates compared to those from rodents. Conversely, the majority of nonsynonymous mutations in less-constrained or tissue-specific genes are neutral or nearly neutral and therefore fixation of them is largely independent of Ne, which leads to the similarity of ω in primates and rodents.  相似文献   
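A hedged illustration of the relationship described above, using the textbook nearly-neutral approximation for the substitution rate relative to neutral, f(S) = S/(1 − e^(−S)) with S = 4·Ne·s; this is standard theory chosen for illustration, not necessarily the exact expressions used in the paper, and the Ne values are placeholders.

```python
# Illustration of omega (dN/dS) under the nearly-neutral approximation.
import math

def omega(ne, s):
    """Relative substitution rate f(S) = S / (1 - exp(-S)), with S = 4*Ne*s."""
    S = 4 * ne * s
    if abs(S) < 1e-12:                  # neutral limit: omega -> 1
        return 1.0
    return S / (1 - math.exp(-S))

ne_small, ne_large = 2e4, 2e5           # illustrative "small Ne" vs "large Ne" species
for s in (-5e-5, -5e-6, -5e-7):         # strong -> weak purifying selection
    w_small, w_large = omega(ne_small, s), omega(ne_large, s)
    print(f"s = {s:g}: omega(small Ne) = {w_small:.3f}, omega(large Ne) = {w_large:.3g}")
# The gap between the two omega values is largest under strong purifying selection
# and disappears as s -> 0, the pattern described in the abstract.
```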

11.
The Escherichia coli clamp loader, γ complex (γ3δδ′χψ), catalyzes ATP-driven assembly of β clamps onto primer-template DNA (p/tDNA), enabling processive replication. The mechanism by which γ complex targets p/tDNA for clamp assembly is not resolved. According to previous studies, charged/polar amino acids inside the clamp loader chamber interact with the double-stranded (ds) portion of p/tDNA. We find that dsDNA, not ssDNA, can trigger a burst of ATP hydrolysis by γ complex and clamp assembly, but only at far higher concentrations than p/tDNA. Thus, contact between γ complex and dsDNA is necessary and sufficient, but not optimal, for the reaction, and additional contacts with p/tDNA likely facilitate its selection as the optimal substrate for clamp assembly. We investigated whether a conserved sequence (HRVW279QNRR) in the δ subunit contributes to such interactions, since tryptophan-279 specifically cross-links to the primer-template junction. Mutation of δ-W279 weakens γ complex binding to p/tDNA, hampering its ability to load clamps and promote processive DNA replication, and additional mutations in the sequence (δ-R277, δ-R283) worsen the interaction. These data reveal a novel location in the C-terminal domain of the E. coli clamp loader that contributes to DNA binding and helps define p/tDNA as the preferred substrate for the reaction.

12.
The conformational heterogeneity of the N-terminal domain of the ribosomal protein L9 (NTL9(1–39)) in its folded state is investigated using isotope-edited two-dimensional infrared spectroscopy. Backbone carbonyls are isotope-labeled (13C=18O) at five selected positions (V3, V9, V9G13, G16, and G24) to provide a set of localized spectroscopic probes of the structure and solvent exposure at these positions. Structural interpretation of the amide I line shapes is enabled by spectral simulations carried out on structures extracted from a recent Markov state model. The V3 label spectrum indicates that the β-sheet contacts between strands I and II are well folded with minimal disorder. The V9 and V9G13 label spectra, which directly probe the hydrogen-bond contacts across the β-turn, show significant disorder, indicating that molecular dynamics simulations tend to overstabilize ideally folded β-turn structures in NTL9(1–39). In addition, G24-label spectra provide evidence for a partially disordered α-helix backbone that participates in hydrogen bonding with the surrounding water.

13.
We investigated the metabolic route by which a lignin tetramer-degrading mixed bacterial culture degraded two tetrameric lignin model compounds containing β—O—4 and 5—5 biphenyl structures. The α-hydroxyl groups in the propane chain of both phenolic and nonphenolic tetramers were first oxidized symmetrically in two successive steps to give monoketones and diketones. These ketone metabolites were decomposed through Cα(=O)—Cβ cleavage, forming trimeric carboxyl acids which were further metabolized through another Cα(=O)—Cβ cleavage. Dehydrodiveratric acid, which resulted from the cleavage of the carbon bonds of the nonphenol tetramer, was demethylated twice. Four metabolites of the phenolic tetramer were purified and identified. All of these were stable compounds in sterile mineral medium, but were readily degraded by lignin tetramer-degrading bacteria along the same pathway as the phenol tetramer. No monoaromatic metabolites accumulated. All metabolites were identified by mass and proton magnetic resonance spectrometry. The metabolic route by which the mixed bacterial culture degraded tetrameric lignin model compounds was different from the route of the main ligninase-catalyzed Cα—Cβ cleavage by Phanerochaete chrysosporium.  相似文献   

14.
Cross-Correlation Functions for a Neuronal Model
Cross-correlation functions, RXY(t,τ), are obtained for a neuron model which is characterized by constant threshold θ, by resetting to resting level after an output, and by membrane potential U(t) which results from linear summation of excitatory postsynaptic potentials h(t). The results show that: (1) Near time lag τ = 0, RXY(t,τ) = fU [θ-h(τ), t + τ] {h′(τ) + EU [u′(t + τ)]} for positive values of this quantity, where fU(u,t) is the probability density function of U(t) and EU [u′(t + τ)] is the mean value function of U′(t + τ). (2) Minima may appear in RXY(t,τ) for a neuron subjected only to excitation. (3) For large τ, RXY(t,τ) is given approximately by the convolution of the input autocorrelation function with the functional of point (1). (4) RXY(t,τ) is a biased estimator of the shape of h(t), generally over-estimating both its time to peak and its rise time.  相似文献   

15.
Many cardiac arrhythmias are caused by slowed conduction of action potentials, which in turn can be due to an abnormal increase of intracellular myocardial resistance. Intracellular resistivity is a linear sum of that offered by gap junctions between contiguous cells and the cytoplasm of the myocytes themselves. However, the relative contribution of the two components is unclear, especially in atrial myocardium, as there are no precise measurements of cytoplasmic resistivity, Rc. In this study, Rc was measured in atrial tissue using several methods: a dielectrophoresis technique with isolated cells and impedance measurements with both isolated cells and multicellular preparations. All methods yielded similar values for Rc, with a mean of 138 ± 5 Ω·cm at 23°C, and a Q10 value of 1.20. This value is about half that of total intracellular resistivity and thus will be a significant determinant of the actual value of action potential conduction velocity. The dielectrophoresis experiments demonstrated the importance of including divalent cations (Ca2+ and Mg2+) in the suspension medium, as their omission reduced cell integrity by lowering membrane resistivity and increasing cytoplasm resistivity. Accurate measurement of Rc is essential to develop quantitative computational models that determine the key factors contributing to the development of cardiac arrhythmias.  相似文献   
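A short worked example of the Q10 temperature scaling implied by the reported values (Rc ≈ 138 Ω·cm at 23°C, Q10 = 1.20); the extrapolation temperatures and the assumption that Rc falls as temperature rises (as is usual for electrolyte resistivity) are illustrative rather than taken from the paper.

```python
# Q10 scaling of cytoplasmic resistivity, using the reported Rc = 138 ohm*cm at 23 C
# and Q10 = 1.20; assumes Rc decreases with warming, as is usual for electrolytes.
rc_23, q10 = 138.0, 1.20
for temp_c in (23, 30, 37):
    rc = rc_23 * q10 ** ((23 - temp_c) / 10)
    print(f"{temp_c} C: Rc ~ {rc:.0f} ohm*cm")
```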

16.
The R21(TC) factor, obtained by transduction of the R10(TC.CM.SM.SA) factor with phage ε to group E Salmonella, is not transferable by the normal conjugal process. However, when R21(TC)+ transductants are infected with the F13 factor, the nontransferable R21(TC) factor acquires transmissibility by conjugation. R21(TC)+ conjugants of Escherichia coli K-12, to which only the R21(TC) factor was transmitted by cell-to-cell contact from an F′ R+ donor, were still unable to transfer their R21(TC) factor by conjugation. In crosses between Hfr and F− E. coli K-12 strains containing R21(TC), the gene responsible for tetracycline resistance was located on the E. coli K-12 chromosome between lac and pro, near lac.

17.
Pressure volume curves for Alternanthera philoxeroides (Mart.) Griseb. (alligator weed) grown in 0 to 400 millimolar NaCl were used to determine water potential (Ψ), osmotic potential (ψs), turgor potential (ψp) and the bulk elastic modulus (ε) of shoots at different tissue water contents. Values of ψs decreased with increasing salinity and tissue Ψ was always lower than rhizosphere Ψ. The relationship between ψp and tissue water content changed because ε increased with salinity. As a result, salt-stressed plants had larger ranges of positive turgor but smaller ranges of tissue water content over which ψp was positive. To our knowledge, this is the first report of such a salinity effect on ε in higher plants. These increases in ε with salinity provided a mechanism by which a large difference between plant Ψ and rhizosphere Ψ, the driving force for water uptake, could be produced with relatively little water loss by the plant. A time-course study of response after salinization to 400 millimolar NaCl showed Ψ was constant within 1 day, ψs and ψp continued to change for 2 to 4 days, and ε continued to change for 4 to 12 days. Changes in ε modified the capacity of alligator weed to maintain a positive water balance and consideration of such changes in other species of higher plants should improve our understanding of salt stress.  相似文献   

18.

Background

Estimates of dengue transmission intensity remain ambiguous. Since the majority of infections are asymptomatic, surveillance systems substantially underestimate true rates of infection. With advances in the development of novel control measures, obtaining robust estimates of average dengue transmission intensity is key for assessing both the burden of disease from dengue and the likely impact of interventions.

Methodology/Principal Findings

The force of infection (λ) and the corresponding basic reproduction numbers (R0) for dengue were estimated from non-serotype-specific (IgG) and serotype-specific (PRNT) age-stratified seroprevalence surveys identified from the literature. The majority of R0 estimates ranged from 1 to 4. Assuming that two heterologous infections result in complete immunity produced up to two-fold higher estimates of R0 than when tertiary and quaternary infections were included. λ estimated from IgG data was comparable to the sum of the serotype-specific forces of infection derived from PRNT data, particularly when inter-serotype interactions were allowed for.

Conclusions/Significance

Our analysis highlights the highly heterogeneous nature of dengue transmission. How underlying assumptions about serotype interactions and immunity affect the relationship between the force of infection and R0 will have implications for control planning. While PRNT data provides the maximum information, our study shows that even the much cheaper ELISA-based assays would provide comparable baseline estimates of overall transmission intensity which will be an important consideration in resource-constrained settings.  相似文献   
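As a sketch of how a force of infection can be recovered from age-stratified seroprevalence, the simple catalytic model p(a) = 1 − e^(−λa) can be fitted and converted to a crude R0 via R0 ≈ 1 + λL. The data points, the assumed life expectancy L, the least-squares fit, and the single-serotype conversion below are all illustrative assumptions, not the paper's method or data.

```python
import math

# Illustrative age-band midpoints and seroprevalence values (made up for the sketch).
ages     = [5, 10, 15, 20, 30, 40]
seroprev = [0.14, 0.26, 0.36, 0.45, 0.59, 0.70]

def sum_sq(lam):
    """Least-squares misfit of the catalytic model p(a) = 1 - exp(-lam * a)."""
    return sum((p - (1 - math.exp(-lam * a))) ** 2 for a, p in zip(ages, seroprev))

lam_hat = min((l / 1000 for l in range(1, 501)), key=sum_sq)   # crude grid search
L = 70                                                          # assumed life expectancy, years
print(f"lambda ~ {lam_hat:.3f} per year; crude R0 ~ 1 + lambda*L ~ {1 + lam_hat * L:.1f}")
```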

19.
Cell quotas of microcystin (QMCYST; femtomoles of MCYST per cell), protein, and chlorophyll a (Chl a), cell dry weight, and cell volume were measured over a range of growth rates in N-limited chemostat cultures of the toxic cyanobacterium Microcystis aeruginosa MASH 01-A19. There was a positive linear relationship between QMCYST and specific growth rate (μ), from which we propose a generalized model that enables QMCYST at any nutrient-limited growth rate to be predicted based on a single batch culture experiment. The model predicts QMCYST from μ, μmax (maximum specific growth rate), QMCYSTmax (maximum cell quota), and QMCYSTmin (minimum cell quota). Under the conditions examined in this study, we predict a QMCYSTmax of 0.129 fmol cell−1 at μmax and a QMCYSTmin of 0.050 fmol cell−1 at μ = 0. Net MCYST production rate (RMCYST) asymptotes to zero at μ = 0 and reaches a maximum of 0.155 fmol cell−1 day−1 at μmax. MCYST/dry weight ratio (milligrams per gram [dry weight]) increased linearly with μ, whereas the MCYST/protein ratio reached a maximum at intermediate μ. In contrast, the MCYST/Chl a ratio remained constant. Cell volume correlated negatively with μ, leading to an increase in intracellular MCYST concentration at high μ. Taken together, our results show that fast-growing cells of N-limited M. aeruginosa are smaller, are of lower mass, and have a higher intracellular MCYST quota and concentration than slow-growing cells. The data also highlight the importance of determining cell MCYST quotas, as potentially confusing interpretations can arise from determining MCYST content as a ratio to other cell components.  相似文献   
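A minimal sketch of the linear cell-quota model described above, with Q rising from QMCYSTmin at μ = 0 to QMCYSTmax at μmax. Treating net production as RMCYST = μ·Q is my assumption about the steady-state chemostat balance, and μmax is back-calculated from the quoted maxima under that assumption rather than taken from the paper.

```python
# Linear cell-quota model from the abstract: Q rises from QMCYSTmin at mu = 0
# to QMCYSTmax at mu = mumax.  R = mu * Q is an assumed steady-state chemostat
# balance; mumax is implied by the quoted maxima under that assumption.
Q_MIN, Q_MAX = 0.050, 0.129          # fmol MCYST per cell (values quoted in the abstract)
R_MAX = 0.155                        # fmol per cell per day (quoted maximum production rate)
MU_MAX = R_MAX / Q_MAX               # ~1.2 per day, implied if R = mu * Q at mu = mumax

def quota(mu):
    return Q_MIN + (Q_MAX - Q_MIN) * mu / MU_MAX

for frac in (0.0, 0.25, 0.5, 1.0):
    mu = frac * MU_MAX
    print(f"mu = {mu:.2f}/d  Q = {quota(mu):.3f} fmol/cell  R = {mu * quota(mu):.3f} fmol/cell/day")
```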

20.
The accurate partitioning of Firmicute plasmid pSM19035 at cell division depends on ATP binding and hydrolysis by homodimeric ATPase δ2 (ParA) and binding of ω2 (ParB) to its cognate parS DNA. The 1.83 Å resolution crystal structure of δ2 in a complex with non-hydrolyzable ATPγS reveals a unique ParA dimer assembly that permits nucleotide exchange without requiring dissociation into monomers. In vitro, δ2 had minimal ATPase activity in the absence of ω2 and parS DNA. However, stoichiometric amounts of ω2 and parS DNA stimulated the δ2 ATPase activity and mediated plasmid pairing, whereas at high (4:1) ω2 : δ2 ratios, stimulation of the ATPase activity was reduced and δ2 polymerized onto DNA. Stimulation of the δ2 ATPase activity and its polymerization on DNA required ability of ω2 to bind parS DNA and its N-terminus. In vivo experiments showed that δ2 alone associated with the nucleoid, and in the presence of ω2 and parS DNA, δ2 oscillated between the nucleoid and the cell poles and formed spiral-like structures. Our studies indicate that the molar ω2 : δ2 ratio regulates the polymerization properties of (δ•ATP•Mg2+)2 on and depolymerization from parS DNA, thereby controlling the temporal and spatial segregation of pSM19035 before cell division.  相似文献   
