Similar Articles
20 similar articles retrieved.
1.
A model for the induction of transformation, mutation, and cell killing by radiations of intermediate to high linear energy transfer (LET) is presented. The mathematical formulation presupposes a constant probability per unit path length for damaging multiple subcellular targets by radiation of a fixed LET. The coupling between effects is accounted for through an explicit calculation of the probability that any specific combination of effects occurs in a given cell. This feature avoids the false assumption that cell killing and mutation (or transformation) are independent events. The resulting model is then applied to data on the in vitro survival, mutation, and transformation of cells by radiations of varying LET. A summary of estimated parameter values is provided, and calculations of the effect of cellular flattening on transformation are presented.
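A minimal sketch of the kind of coupled calculation the abstract describes, assuming both endpoints arise from the same Poisson-distributed number of particle traversals; the function names and the parameter values nu (mean traversals), q (per-traversal mutation probability), and s (per-traversal killing probability) are all hypothetical.

    import math

    def joint_mutation_survival(nu, q, s):
        # P(>=1 mutation AND survival) when each of a Poisson(nu) number of
        # traversals independently mutates with probability q and kills with
        # probability s; summing the Poisson series gives a closed form.
        return math.exp(-nu * s) - math.exp(-nu * (s + q - q * s))

    def independence_approximation(nu, q, s):
        # The same quantity under the (false) assumption that killing and
        # mutation are independent events.
        return math.exp(-nu * s) * (1.0 - math.exp(-nu * q))

    # Example with hypothetical values: few, densely ionizing traversals.
    print(joint_mutation_survival(2.0, 0.1, 0.3))     # ~0.072
    print(independence_approximation(2.0, 0.1, 0.3))  # ~0.099, an overestimate

Because killing and mutation share the same traversals, the independence product always overstates the probability of a surviving mutant, which is the coupling the model is built to capture.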

2.
Germinal mosaicism and risk calculation in X-linked diseases. (Cited by 4: 0 self-citations, 4 external)
Germinal mosaicism is a major problem in risk estimation for an X-linked disease. A mutation can happen at any time in germ cell development, and the proportion of germ cells bearing the mutated gene is twice the probability of recurrence of the mutation. This proportion can be very low for late mutations or very high for combined germinal and somatic mosaicism. When this heterogeneity is taken into consideration, the distribution of the recurrence risk is conveniently represented as a set of discrete classes that may be derived either from models of gametogenesis or from empirical data. A computer program taking germinal mosaicism into account has been devised to calculate the probability of a possible carrier belonging to any of these classes, in order to establish the origin of the mutation in a given family. Germinal mosaicism increases the probability of inheriting the mutation, but this effect is always lowered by the possibility of heterogeneity. When the mother of a possible carrier is not herself a carrier, the risk of her daughter being a carrier is approximately halved, even under the assumption of a high recurrence risk from mosaicism.
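A toy version of the described calculation, with invented class values and prior weights; the only modeling fact taken from the abstract is that the transmission (recurrence) probability equals half the mutant germ-cell proportion.

    # Hypothetical discrete classes: proportion of germ cells carrying the mutation.
    classes = [0.001, 0.01, 0.1, 0.5]    # illustrative values
    priors  = [0.70, 0.15, 0.10, 0.05]   # illustrative prior weights

    def posterior_after_affected_son(classes, priors):
        # The transmission probability is half the mutant proportion, so
        # observing an affected son re-weights each class by m / 2.
        joint = [p * (m / 2.0) for p, m in zip(priors, classes)]
        total = sum(joint)
        return [j / total for j in joint]

    post = posterior_after_affected_son(classes, priors)
    recurrence_risk = sum(p * m / 2.0 for p, m in zip(post, classes))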

3.
Schinazi RB, Genetics (2006) 174(1):545-547
We propose a simple stochastic model based on the two-successive-mutations hypothesis to compute cancer risks. Assume that only stem cells are susceptible to the first mutation and that there are a total of D stem cell divisions over the lifetime of the tissue, with a first-mutation probability μ1 per division. Our model predicts that cancer risk will be low if m = μ1D is low, even in the case of very advantageous mutations. Moreover, if μ1D is low, the mutation probability of the second mutation is practically irrelevant to the cancer risk. These results are in contrast with existing models but in agreement with a conjecture of Cairns. In the case where m is large, our model predicts that the cancer risk depends crucially on whether the first mutation is advantageous or not. A disadvantageous or neutral mutation makes the risk of cancer drop dramatically.
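A crude numerical reading of the headline claim, not the paper's actual computation: if first mutations arrive roughly as a Poisson process with mean m = μ1·D, the lifetime risk is bounded above by m however likely progression is. The progression probability p2 below is a hypothetical catch-all for the second mutation and clone dynamics.

    import math

    def two_hit_risk(mu1, D, p2):
        # First mutations ~ Poisson(m); each founding clone progresses to a
        # second mutation with probability p2, so risk <= m * p2 <= m.
        m = mu1 * D
        return 1.0 - math.exp(-m * p2)

    # Even with certain progression (p2 = 1), small m keeps the risk small:
    print(two_hit_risk(1e-7, 1e4, 1.0))  # ~1e-3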

4.
A method of estimating the number of nucleotide substitutions from amino acid sequence data is developed using Dayhoff's mutation probability matrix. This method takes into account the effect of nonrandom amino acid substitutions and gives an estimate similar to the value obtained by Fitch's counting method, but larger than the estimate obtained under the assumption of random substitutions (Jukes and Cantor's formula). Computer simulations based on Dayhoff's mutation probability matrix suggest that Jukes and Holmquist's method of estimating the number of nucleotide substitutions gives an overestimate when amino acid substitution is not random, and that the variance of the estimate is generally very large. It is also shown that when the number of nucleotide substitutions is small, this method tends to give an overestimate even when amino acid substitution is purely random.
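The "random substitution" alternatives the abstract contrasts with the Dayhoff-matrix estimate can be stated compactly; p is the observed proportion of differing sites, and the formulas are the standard Poisson and Jukes-Cantor-type corrections, not the authors' matrix-based method.

    import math

    def poisson_distance(p):
        # Simple Poisson correction: d = -ln(1 - p).
        return -math.log(1.0 - p)

    def jc_distance_amino(p):
        # Jukes-Cantor-type correction generalized to 20 equally
        # exchangeable amino acid states: d = -(19/20) ln(1 - (20/19) p).
        return -(19.0 / 20.0) * math.log(1.0 - (20.0 / 19.0) * p)

    def jc_distance_nucleotide(p):
        # The familiar 4-state version for nucleotides.
        return -(3.0 / 4.0) * math.log(1.0 - (4.0 / 3.0) * p)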

5.
Evolutionary studies commonly model single nucleotide substitutions and assume that they occur as independent draws from a unique probability distribution across the sequence studied. This assumption is violated for protein-coding sequences, and we consider modeling approaches where codon positions (CPs) are treated as separate categories of sites, because within each category the assumption is more reasonable. Such "codon-position" models have been shown to explain the evolution of codon data better than homogeneous models in previous studies. This paper examines the ways in which codon-position models outperform homogeneous models and characterizes the differences in estimates of model parameters across CPs. Using the PANDIT database of multiple-species DNA sequence alignments, we quantify the differences in the evolutionary processes at the 3 CPs in a systematic and comprehensive manner, characterizing previously undescribed features of protein evolution. We relate our findings to the functional constraints imposed by the genetic code, protein function, and the types of mutation that cause synonymous and nonsynonymous codon changes. The results increase our understanding of selective constraints and could be incorporated into phylogenetic analyses or gene-finding techniques in the future. The methods used are extended to an overlapping-reading-frame data set, and we discover that overlapping reading frames do not necessarily cause more stringent evolutionary constraints.
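The bookkeeping underlying any codon-position model is simply a partition of alignment columns by position; a minimal sketch:

    def split_by_codon_position(aligned_seqs):
        # Return three sub-alignments, one per codon position, each of
        # which can then be fitted with its own substitution model.
        return {cp: [seq[cp - 1::3] for seq in aligned_seqs] for cp in (1, 2, 3)}

    parts = split_by_codon_position(["ATGGCT", "ATGGGT"])
    # parts[3] collects the (often most variable) third codon positions.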

6.
Based on the assumption that the numbers of mutations observed in an untreated and treated sample of individuals are binomial random variables, a method is presented to compute the probability of observing a specific number of mutations as a function of the sample sizes and the number of mutations in the untreated control sample. Knowledge of the true mutation frequencies is not required. The formalism is then used to compute critical sample sizes for testing hypotheses concerning mutation frequencies in the two populations.
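One standard way to get such probabilities without the true frequencies is to condition on the total mutation count: under the null of equal frequencies, the treated count is then binomial with a probability set by the relative sample sizes. This conditional sketch may differ in detail from the paper's formalism.

    from math import comb

    def prob_treated_count(k, x_control, n_treated, n_control):
        # P(k mutations in the treated sample | k + x_control total), under
        # H0 that both groups share a single mutation frequency.
        total = k + x_control
        p = n_treated / (n_treated + n_control)
        return comb(total, k) * p**k * (1.0 - p)**(total - k)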

7.
In previous papers (Theraulaz et al., 1995; Bonabeau et al., 1996) we suggested, following Hogeweg and Hesper (1983, 1985), that the formation of dominance orders in animal societies could result from a self-organizing process involving a double reinforcement mechanism: winners reinforce their probability of winning and losers reinforce their probability of losing. This assumption, and subsequent models relying on it, were based on empirical data on primitively eusocial wasps (Polistes dominulus). By reanalysing some of the experimental data that was previously thought to be irrelevant, we show that it is impossible to distinguish this assumption from a competing assumption based on preexisting differences among individuals. We propose experiments to help discriminate between the two assumptions and their corresponding models: the self-organization model and the correlational model. We urge other researchers to be cautious when interpreting their dominance data with the 'self-organization mindset'; in particular, 'winner and loser effects', which are often considered to give support to the self-organization assumption, are equally consistent with the correlational assumption.
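A minimal double-reinforcement simulation of the kind the self-organization model posits; the logistic win function and all parameter values are hypothetical. Under the competing correlational model, the hierarchy would instead be seeded by preexisting differences in the initial force values.

    import math
    import random

    def simulate_double_reinforcement(n=10, rounds=5000, delta=0.05, seed=1):
        # Each agent carries a 'force' score; winners gain delta, losers
        # lose delta, and the win probability is logistic in the force
        # difference, so early random wins can snowball into a hierarchy.
        rng = random.Random(seed)
        force = [0.0] * n
        for _ in range(rounds):
            i, j = rng.sample(range(n), 2)
            p_i = 1.0 / (1.0 + math.exp(-(force[i] - force[j])))
            winner, loser = (i, j) if rng.random() < p_i else (j, i)
            force[winner] += delta
            force[loser] -= delta
        return sorted(force, reverse=True)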

8.
Erich Bächler & Felix Liechti, Ibis (2007) 149(4):693-700
Raw count data are often used to estimate bird population densities. However, such data do not account for detection probability. As an alternative, methods that model detection probability, such as distance sampling, have been proposed. However, standard distance sampling provides reliable estimates of absolute density only when the underlying assumptions are met. One of the most critical of these assumptions is that animals on a transect line or at an observation point are detected with certainty (the g(0) = 1 assumption). We radiotagged nine Orphean Warblers Sylvia hortensis and estimated their short-distance detection probability. Birds were radio-located in 264 cases in single bushes or trees. Their visual detection probability after a 5-min search was only 0.58 (sd = ±0.14, range = 0.38–0.80), although the observer knew the bird's location. Furthermore, we carried out a literature review to assess how the g(0) = 1 assumption is handled in practice. None of the 28 standard distance-sampling papers reviewed contained an estimation of g(0). In 57% of the papers, the g(0) = 1 assumption was not even mentioned. Nevertheless, none of the authors declared their estimates as being relative. Our empirical data show that the g(0) = 1 assumption would be severely violated for a foliage-gleaning bird species at a desert stopover site outside the breeding season. The literature review revealed that testing of the g(0) = 1 assumption is largely ignored in practice. We strongly suggest that more attention be paid to testing this key assumption, because results may not be reliable when it is violated. If it is not possible to test the g(0) = 1 assumption, or if g(0) is less than 1, alternative methods should be used. Another possibility is to estimate detection probability by means of radiotagged individuals.
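The practical implication reduces to a one-line correction: when detection at distance zero is not certain, a distance-sampling density estimate must be divided by g(0).

    def corrected_density(raw_density, g0):
        # Distance sampling returns absolute density only if g(0) = 1;
        # otherwise the raw estimate must be scaled by 1 / g(0).
        return raw_density / g0

    # With the warblers' measured detection probability of 0.58, a raw
    # estimate would understate true density by more than 40%:
    print(corrected_density(10.0, 0.58))  # ~17.2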

9.
Many environmental health and risk assessment techniques and models aim at estimating the fluctuations of selected biological endpoints through the time domain as a means of assessing changes in the environment or the probability of a particular measurement level occurring. In either case, estimates of the sample variance and the variance of the sample mean are crucial to making appropriate statistical inferences. The commonly employed statistical techniques for estimating both measures presume the data were generated by a covariance-stationary process. In such cases, the observations are treated as independently and identically distributed and classical statistical testing methods are applied. However, if the assumption of covariance stationarity is violated, the resulting sample variance and variance-of-the-sample-mean estimates are biased. The bias compromises statistical testing procedures by increasing the probability of detecting significance in tests of mean and variance differences. This can lead to inappropriate decisions being made about the severity of environmental damage. Accordingly, it is argued that data sets be examined for correlation in the time domain and appropriate adjustments be made to the required estimators before they are used in statistical hypothesis testing. Only then can credible and scientifically defensible decisions be made by environmental decision makers and regulators.
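One standard way to see the bias the article warns about: for an AR(1) series with lag-1 autocorrelation rho, the usual i.i.d. formula for the variance of the sample mean must be inflated by roughly (1 + rho)/(1 - rho). This large-n approximation is illustrative, not the article's estimator.

    def var_of_sample_mean_ar1(sample_var, n, rho):
        # The i.i.d. value (sample_var / n) times the AR(1) inflation
        # factor; with rho = 0.5 the naive variance is understated roughly
        # threefold, which inflates apparent significance in tests.
        return (sample_var / n) * (1.0 + rho) / (1.0 - rho)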

10.

In vitro experiments show that the cells possibly responsible for radiation-induced acute myeloid leukemia (rAML) exhibit low-dose hyper-radiosensitivity (HRS). In these cells, HRS is responsible for excess cell killing at low doses. Besides the endpoint of cell killing, HRS has also been shown to stimulate the low-dose formation of chromosomal aberrations such as deletions. Although HRS has been investigated extensively, little is known about its possible effect on low-dose cancer risk. In CBA mice, rAML can largely be explained in terms of a radiation-induced Sfpi1 deletion and a point mutation in the remaining Sfpi1 gene copy. The aim of this paper is to present and quantify possible mechanisms through which HRS may influence low-dose rAML incidence in CBA mice. To accomplish this, a mechanistic rAML CBA mouse model was developed to study HRS-dependent AML onset after low-dose photon irradiation. The rAML incidence was computed under the assumptions that target cells: (1) do not exhibit HRS; (2) HRS only stimulates cell killing; or (3) HRS stimulates cell killing and the formation of the Sfpi1 deletion. In the absence of HRS (control), the rAML dose-response curve can be approximated with a linear-quadratic function of the absorbed dose. Compared to the control, the assumption that HRS stimulates cell killing lowered the rAML incidence, whereas increased incidence was observed at low doses if HRS additionally stimulates the induction of the Sfpi1 deletion. In conclusion, cellular HRS affects the number of surviving pre-leukemic cells with an Sfpi1 deletion which, depending on the HRS assumption, translates directly to a lower or higher probability of developing rAML. Low-dose HRS may affect cancer risk in general by altering the probability that certain mutations occur or persist.

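The paper's mechanistic model is not reproduced here, but a common way to encode HRS in a survival curve is the induced-repair modification of the linear-quadratic model, in which the low-dose slope starts at a sensitive alpha_s and relaxes to the conventional alpha_r with characteristic dose d_c. A sketch with hypothetical parameters:

    import math

    def survival_lq_induced_repair(d, alpha_r, alpha_s, d_c, beta):
        # The effective alpha is inflated at low doses (HRS) and decays
        # toward alpha_r as dose, and hence induced repair, increases.
        alpha = alpha_r * (1.0 + (alpha_s / alpha_r - 1.0) * math.exp(-d / d_c))
        return math.exp(-alpha * d - beta * d * d)

    # Hypothetical values: strong low-dose sensitivity relaxing by ~0.3 Gy.
    print(survival_lq_induced_repair(0.2, 0.2, 1.5, 0.3, 0.05))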

11.
Pang Z & Kuk AY, Biometrics (2007) 63(1):218-227
Exchangeable binary data are often collected in developmental toxicity and other studies, and a whole host of parametric distributions for fitting this kind of data have been proposed in the literature. While these distributions can be matched to have the same marginal probability and intra-cluster correlation, they can be quite different in terms of shape and higher-order quantities of interest, such as the litter-level risk of having at least one malformed fetus. A sensible alternative is to fit a saturated model (Bowman and George, 1995, Journal of the American Statistical Association 90, 871-879) using the expectation-maximization (EM) algorithm proposed by Stefanescu and Turnbull (2003, Biometrics 59, 18-24). The assumption of compatibility of marginal distributions is often made to link up the distributions for different cluster sizes so that estimation can be based on the combined data. Stefanescu and Turnbull proposed a modified trend test to test this assumption. Their test, however, fails to take into account the variability of an estimated null expectation and as a result leads to inaccurate p-values. This drawback is rectified in this article. When the data are sparse, the probability function estimated using a saturated model can be very jagged and some kind of smoothing is needed. We extend the penalized likelihood method (Simonoff, 1983, Annals of Statistics 11, 208-218) to the present case of unequal cluster sizes and implement the method using an EM-type algorithm. In the presence of covariates, we propose a penalized kernel method that performs smoothing in both the covariate and response space. The proposed methods are illustrated using several data sets, and the sampling and robustness properties of the resulting estimators are evaluated by simulations.
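The sensitivity to distributional shape is easy to reproduce: two models matched on marginal probability p and intra-cluster correlation rho can still disagree on the litter-level risk of at least one malformation. A beta-binomial sketch (illustrative, not one of the paper's fitted models):

    def p_zero_beta_binomial(n, p, rho):
        # P(no responders in a cluster of size n) for a beta-binomial whose
        # shape parameters a, b are chosen to match marginal probability p
        # and intra-cluster correlation rho = 1 / (a + b + 1).
        a = p * (1.0 - rho) / rho
        b = (1.0 - p) * (1.0 - rho) / rho
        prob = 1.0
        for i in range(n):
            prob *= (b + i) / (a + b + i)
        return prob

    p, rho, n = 0.1, 0.2, 10
    print(1.0 - p_zero_beta_binomial(n, p, rho))  # ~0.43: risk of >=1 responder
    print(1.0 - (1.0 - p) ** n)                   # ~0.65: independence benchmark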

12.
A hereditary disease with excess mortality, such as haemophilia, is maintained in the population by the occurrence of new cases, i.e. mutations. In haemophilia, mutations may arise in female or male ancestors of a new patient. The ratio of the mutation frequencies in males over females determines the prior risk of carriership of the mother of an isolated patient. An estimate of this prior risk is required for the application of Bayes' theorem to probability calculations in carriership testing. We have developed a method to estimate the sex ratio of the mutation frequencies; it does not depend on the assumption of genetic equilibrium, nor does it require an estimate of the reproductive fitness of haemophilia patients and carriers. Information from 462 patients with severe or moderately severe haemophilia A was gathered by postal questionnaires in a survey that included practically all Dutch haemophiliacs. Pedigree analysis was performed for the 189 of these 462 patients who were the first haemophiliacs in their family. By the maximum likelihood method, the ratio of the mutation frequencies in males and females was estimated at 2.1, with a 95% confidence interval of 0.7–6.7. In addition, we performed a meta-analysis of all published studies on the sex ratio of the mutation frequencies. When the results of six studies were pooled, it was estimated that mutations originated 3.1 times as often in males as in females, with a 95% confidence interval of 1.9–4.9. This implies that 80% of mothers of an isolated patient are expected to be haemophilia carriers.
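The paper's method avoids the equilibrium assumption, but the classical equilibrium calculation is a useful cross-check: for an X-linked lethal with male:female mutation-rate ratio k, the mother of an isolated patient is a carrier with probability (k + 1)/(k + 2). A sketch under that equilibrium assumption:

    def carrier_probability(k):
        # Equilibrium heterozygote frequency is H = 2(u + v), with female
        # rate u and male rate v = k*u; an affected son comes either from a
        # carrier mother (H/2) or a fresh maternal mutation (u), giving
        # P(carrier | isolated case) = (k + 1) / (k + 2).
        return (k + 1.0) / (k + 2.0)

    print(carrier_probability(3.1))  # ~0.80, matching the pooled estimate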

13.
The sample frequency spectrum of a segregating site is the probability distribution of a sample of alleles from a genetic locus, conditional on observing the sample to be polymorphic. This distribution is widely used in population genetic inferences, including statistical tests of neutrality in which a skew in the observed frequency spectrum across independent sites is taken as a signature of departure from neutral evolution. Theoretical aspects of the frequency spectrum have been well studied and several interesting results are available, but they are usually under the assumption that a site has undergone at most one mutation event in the history of the sample. Here, we extend previous theoretical results by allowing for at most two mutation events per site, under a general finite allele model in which the mutation rate is independent of current allelic state but the transition matrix is otherwise completely arbitrary. Our results apply to both nested and nonnested mutations. Only the former has been addressed previously, whereas here we show it is the latter that is more likely to be observed except for very small sample sizes. Further, for any mutation transition matrix, we obtain the joint sample frequency spectrum of the two mutant alleles at a triallelic site, and derive a closed-form formula for the expected age of the younger of the two mutations given their frequencies in the population. Several large-scale resequencing projects for various species are presently under way and the resulting data will include some triallelic polymorphisms. The theoretical results described in this paper should prove useful in population genomic analyses of such data.
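The single-mutation baseline this paper extends is the standard neutral sample frequency spectrum, in which the probability that the mutant allele appears in i of n sampled alleles is proportional to 1/i; a minimal sketch:

    def neutral_sfs(n):
        # P(mutant count = i | site segregating), for i = 1, ..., n-1.
        weights = [1.0 / i for i in range(1, n)]
        total = sum(weights)
        return [w / total for w in weights]

    print(neutral_sfs(5))  # singletons are the most probable class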

14.
Modes of speciation and the neutral theory of biodiversity (Cited by 5: 0 self-citations, 5 external)
Hubbell's neutral theory of biodiversity has generated much debate over the need for niches to explain biodiversity patterns. Discussion of the theory has focused on its neutrality assumption, i.e. the functional equivalence of species in competition and dispersal. Almost no attention has been paid to another critical aspect of the theory: the assumptions on the nature of the speciation process. In the standard version of the neutral theory, each individual has a fixed probability to speciate. Hence, the speciation rate of a species is directly proportional to its abundance in the metacommunity. We argue that this assumption is not realistic for most speciation modes, because speciation is an emergent property of complex processes at larger spatial and temporal scales and, consequently, speciation rate can either increase or decrease with abundance. Accordingly, the assumption that speciation rate is independent of abundance (each species has a fixed probability to speciate) is a more natural starting point in a neutral theory of biodiversity. Here we present a neutral model based on this assumption and confront this new model with 20 large data sets of tree communities, expecting the new model to fit the data better than Hubbell's original model. We find, however, that the data sets are much better fitted by Hubbell's original model. This implies that species abundance data can discriminate between different modes of speciation, or, stated otherwise, that the mode of speciation has a large impact on the species abundance distribution. Our model analysis points out new ways to study how biodiversity patterns are shaped by the interplay between evolutionary processes (speciation, extinction) and ecological processes (competition, dispersal).
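A minimal sketch of Hubbell's "point mutation" speciation mode, where each replacement founds a new species with probability nu, so a species' chance of speciating scales with its abundance (the assumption the paper replaces with a fixed per-species probability). All names and parameter values here are illustrative.

    import itertools
    import random

    def hubbell_point_mutation(J=200, nu=0.01, steps=20000, seed=0):
        # Zero-sum dynamics: one individual dies per step and is replaced
        # either by a brand-new species (probability nu) or by the offspring
        # of a randomly drawn resident (parent drawn from the whole
        # community, for simplicity).
        rng = random.Random(seed)
        new_species = itertools.count(1)
        community = [0] * J
        for _ in range(steps):
            site = rng.randrange(J)
            if rng.random() < nu:
                community[site] = next(new_species)
            else:
                community[site] = community[rng.randrange(J)]
        return community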

15.
Sune Holm, Bioethics (2019) 33(2):254-260
It has been argued that the precautionary principle is incoherent and thus useless as a guide for regulatory policy. In a recent paper in Bioethics, Wareham and Nardini propose a response to the 'precautionary paradox' according to which the precautionary principle's usefulness for decision making in policy and regulation contexts can be justified by appeal to a probability threshold discriminating between negligible and non-negligible risks. It would be of great significance to debates about risk and precaution if there were a sound method for determining a minimum probability threshold of negligible risk. This is what Wareham and Nardini aim to do. The novelty of their approach is that they suggest that such a threshold should be determined by a method of public deliberation. In this article I discuss the merits of Wareham and Nardini's public deliberation method for determining thresholds. I raise an epistemic worry about the public deliberation method they suggest, and argue that their proposal is inadequate due to a hidden assumption that the acceptability of a risk can be completely analysed in terms of its probability.

16.
How important is DNA replication for mutagenesis? (Cited by 4: 0 self-citations, 4 external)
Rates of mutation and substitution in mammals are generally greater in the germ lines of males. This is usually explained as resulting from the larger number of germ cell divisions during spermatogenesis compared with oogenesis, with the assumption made that mutations occur primarily during DNA replication. However, the rate of cell division is not the only difference between male and female germ lines, and mechanisms are known that can give rise to mutations independently of DNA replication. We investigate the possibility that there are other causes of male-biased mutation. First, we show that patterns of variation at approximately 5,200 short tandem repeat (STR) loci indicate a higher mutation rate in males. We estimate a ratio of male-to-female mutation rates of approximately 1.9. This is significantly greater than 1 and supports a greater rate of mutation in males, affecting the evolution of these loci. Second, we show that there are chromosome-specific patterns of nucleotide and dinucleotide composition in mammals that have been shaped by mutation at CpG dinucleotides. Comparable patterns occur in birds. In mammals, male germ lines are more methylated than female germ lines, and these patterns indicate that differential methylation has played a role in male-biased vertebrate evolution. However, estimates of male mutation bias obtained from both classes of mutation are substantially lower than estimates of cell division bias from anatomical data. This discrepancy, along with published data indicating slipped-strand mispairing arising at STR loci in nonreplicating DNA, suggests that a substantial percentage of mutation may occur in nonreplicating DNA.
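A standard route from divergence ratios to the male:female mutation-rate ratio alpha is Miyata's relation for X-linked versus autosomal rates; it is shown here only for orientation, since the paper's STR-based estimate need not use this exact formula.

    def alpha_from_x_over_a(r):
        # Miyata's expectation: X/A = 2(alpha + 2) / (3(alpha + 1)), since
        # X chromosomes spend 1/3 of generations in males and autosomes 1/2.
        # Solving for alpha: alpha = (4 - 3r) / (3r - 2).
        return (4.0 - 3.0 * r) / (3.0 * r - 2.0)

    print(alpha_from_x_over_a(0.9))  # ~1.86, close to the paper's ~1.9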

17.
Environmental decision-making is complex and often based on multiple lines of evidence. Integrating the information from these multiple lines of evidence is rarely a simple process. We present a quantitative approach to the combination of multiple lines of evidence through calculation of weight-of-evidence, with reference conditions used to define a not-impaired state. The approach is risk-based, with risk measured as the probability of impairment. When data on reference conditions are available, there are a variety of methods for calculating this probability. Statistical theory and the use of odds ratios provide a method for combining the measures of risk from the different lines of evidence. The approach is illustrated using data from the Great Lakes to predict the risk at potentially contaminated sites.
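If each line of evidence yields its own probability of impairment, the simplest odds-based combination multiplies the odds across lines, treating them as independent; the paper's odds-ratio weighting may differ in detail.

    def combine_lines(probabilities):
        # Convert each probability of impairment to odds, multiply, and
        # map the combined odds back to a probability.
        odds = 1.0
        for p in probabilities:
            odds *= p / (1.0 - p)
        return odds / (1.0 + odds)

    print(combine_lines([0.7, 0.6, 0.8]))  # three concordant lines -> ~0.93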

18.
In this article, I provide a method to rebuild the active and disabled life expectancy (ALE and DLE) on the basis of 'current' death and disability risks, and to measure disability risk. This method uses national-level data and is based on two main assumptions. The first is the Gompertz assumption that the death rate rises exponentially with age, and the second is the Cox assumption that death rates in the active status are proportional to those in the disabled status across age. Applying this method to US data, I find that disability risk increased between 1970 and 1990 for both men and women aged 40 and older. Situations in which the above assumptions could be relaxed are also discussed.
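Both assumptions translate directly into code: the Gompertz hazard a·exp(b·x) has a closed-form survivor function, and Cox-style proportionality makes the disabled-state survivor function a power of the active-state one. Parameter names and values are hypothetical.

    import math

    def gompertz_survival(x, a, b):
        # S(x) = exp(-(a/b) * (exp(b*x) - 1)) under hazard mu(x) = a*exp(b*x).
        return math.exp(-(a / b) * (math.exp(b * x) - 1.0))

    def disabled_survival(x, a, b, c):
        # Cox assumption: disabled-state hazard = c * active-state hazard,
        # so the disabled survivor function is the active one to the power c.
        return gompertz_survival(x, a, b) ** c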

19.
Lang GI & Murray AW, Genetics (2008) 178(1):67-82
Although mutation rates are a key determinant of the rate of evolution, they are difficult to measure precisely, and global mutation rates (mutations per genome per generation) are often extrapolated from the per-base-pair mutation rate, assuming that the mutation rate is uniform across the genome. Using budding yeast, we describe an improved method for the accurate calculation of mutation rates based on the fluctuation assay. Our analysis suggests that the per-base-pair mutation rates at two genes differ significantly (3.80 × 10^-10 at URA3 and 6.44 × 10^-10 at CAN1), and we propose a definition for the effective target size of genes (the probability that a mutation inactivates the gene) that acknowledges that the mutation rate is nonuniform across the genome.
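For orientation, the textbook p0 estimator from the Luria-Delbrück fluctuation assay: mutation events per culture are approximately Poisson, so the fraction of cultures with no mutants gives m = -ln(p0). The paper's improved method is likelihood-based; this simpler estimator, with hypothetical counts below, just fixes ideas.

    import math

    def mutation_rate_p0(cultures_without_mutants, total_cultures, n_final):
        # m is the expected number of mutation events per culture; dividing
        # by the final population size approximates the per-division rate.
        p0 = cultures_without_mutants / total_cultures
        m = -math.log(p0)
        return m / n_final

    print(mutation_rate_p0(22, 72, 1.0e8))  # hypothetical counts, ~1.2e-8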

20.
Attributable risk estimation from matched case-control data (Cited by 2: 0 self-citations, 2 external)
S J Kuritz & J R Landis, Biometrics (1988) 44(2):355-367
A methodology is proposed for obtaining summary estimators, variances, and confidence intervals for attributable risk measures from data obtained through a case-control study design in which one or more controls have been matched to each case. The sampling design for obtaining these data treats a simple random sample of cases as equivalent to a simple random sample of matched sets. By combining information across the strata determined by the matched sets, this approach provides all of the benefits associated with the Mantel-Haenszel procedure for the estimators of attributable risk among the exposed and population attributable risk. Asymptotic variances are derived under the assumption that the frequencies of the unique response patterns follow the multinomial distribution. Simulation results indicate that these methods fare very well with respect to bias and coverage probability.
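For the 1:1 matched design, the Mantel-Haenszel odds ratio reduces to the ratio of discordant-pair counts, and the usual attributable-risk point estimators follow; the variances derived in the paper are omitted from this sketch.

    def matched_pair_attributable_risk(f, g, exposed_cases, total_cases):
        # f: discordant pairs with the case exposed; g: pairs with the
        # control exposed. Standard point estimators only.
        or_mh = f / g                                      # Mantel-Haenszel OR
        ar_exposed = (or_mh - 1.0) / or_mh                 # AR among the exposed
        par = (exposed_cases / total_cases) * ar_exposed   # population AR
        return or_mh, ar_exposed, par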
