Similar Articles (20 found)
1.
Superparasitism as an ESS: to reject or not to reject, that is the question
A stochastic model is formulated to determine the optimal strategy for a solitary parasitoid that has discovered an already parasitized host. The model assumes that the parasitoid can count both the number of eggs already present in a host and the number of conspecifics searching in the same patch. The survival probability of an egg is assumed to depend on the total number of eggs in a host. The decision to (super)parasitize depends both on the degree to which the discovered host is already parasitized and on the number of conspecific females searching in the same patch. We consider both the case in which egg laying involves no cost to the parasitoid and the case in which it involves some marginal cost. Uniform behaviour of all the conspecific parasitoids in a patch, i.e. laying one additional egg in all encountered larvae containing a particular number of eggs, appears to be a pure evolutionarily stable strategy (ESS). If either the probability that a parasitoid emerges from a host decreases with an increasing degree of parasitism, at least from a particular number of eggs onwards, or parasitism involves marginal costs, then the maximum number of eggs for which it is still profitable to superparasitize a host once more is limited. This number increases with the number of conspecifics searching in the patch. Large marginal costs (i.e. the expected gain of not parasitizing now) decrease the profit of superparasitism. For newly emerged parasitoids, rejecting an already parasitized host is not advantageous as long as the marginal costs of parasitism are small, because the host can never contain an egg of its own.

2.
Agreement between raters for binary outcome data is typically assessed using the kappa coefficient. There has been considerable recent work extending logistic regression to provide summary estimates of interrater agreement adjusted for covariates predictive of the marginal probability of classification by each rater. We propose an estimating equations approach which can also be used to identify covariates predictive of kappa. Models may include an arbitrary and variable number of raters per subject and yet do not require any stringent parametric assumptions. Examples used to illustrate this procedure include an investigation of factors affecting agreement between primary and proxy respondents from a case-control study and a study of the effects of gender and zygosity on twin concordance for smoking history.
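For reference, the unadjusted kappa coefficient that these regression extensions generalise can be computed directly; a minimal sketch for two raters (the ratings below are made up):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the two raters' marginal category frequencies.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (observed - expected) / (1 - expected)

rater_a = [1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.467
```

The estimating-equations approach of the abstract additionally lets the chance-agreement term, and kappa itself, depend on covariates; the plain statistic above is the covariate-free special case.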

3.
Agreement coefficients quantify how well a set of instruments agree in measuring some response on a population of interest. Many standard agreement coefficients (e.g. kappa for nominal, weighted kappa for ordinal, and the concordance correlation coefficient (CCC) for continuous responses) may indicate increasing agreement as the marginal distributions of the two instruments become more different even as the true cost of disagreement stays the same or increases. This problem has been described for the kappa coefficients; here we describe it for the CCC. We propose a solution for all types of responses in the form of random marginal agreement coefficients (RMACs), which use a different adjustment for chance than the standard agreement coefficients. Standard agreement coefficients model chance agreement using expected agreement between two independent random variables each distributed according to the marginal distribution of one of the instruments. RMACs adjust for chance by modeling two independent readings both from the mixture distribution that averages the two marginal distributions. In other words, both independent readings represent first a random choice of instrument, then a random draw from the marginal distribution of the chosen instrument. The advantage of the resulting RMAC is that differences between the two marginal distributions will not induce greater apparent agreement. As with the standard agreement coefficients, the RMACs do not require any assumptions about the bivariate distribution of the random variables associated with the two instruments. We describe the RMAC for nominal, ordinal and continuous data, and show through the delta method how to approximate the variances of some important special cases.
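For the continuous case discussed here, the standard CCC has a simple closed form; a minimal sketch (an RMAC would instead compute the chance term from the mixture of the two marginal distributions):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient (plug-in / population form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    var_x = sum((a - mx) ** 2 for a in x) / n
    var_y = sum((b - my) ** 2 for b in y) / n
    cov_xy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # Agreement = precision (correlation) scaled down by location/scale shifts.
    return 2 * cov_xy / (var_x + var_y + (mx - my) ** 2)

print(ccc([1, 2, 3, 4, 5], [1, 2, 3, 4, 5]))  # 1.0 — perfect agreement
print(ccc([1, 2, 3, 4, 5], [2, 3, 4, 5, 6]))  # 0.8 — same shape, shifted mean
```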

4.
The process of nonindigenous species (NIS) arrival has received limited theoretical consideration despite its importance in predicting and preventing the establishment of NIS. We formulate a mechanistically based hierarchical model of NIS arrival and demonstrate simplifications leading to a marginal distribution of the number of surviving introduced individuals from parameters of survival probability and propagule pressure. The marginal distribution is extended as a stochastic process from which establishment emerges with a waiting time distribution. This provides a probability of NIS establishment within a specified period and may be useful for identifying patterns of successful invaders. However, estimates of both the propagule pressure and the individual survival probability are rarely available for NIS, making estimates of the probability of establishment difficult. Alternatively, researchers are able to measure proportional estimates of propagule pressure through models of NIS transport, such as gravity models, or of survival probability through habitat-matching indexes measuring the similarity between potentially occupied and native NIS ranges. Therefore, we formulate the relative waiting time between two locations and the probability of one location being invaded before the other.
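One simplification consistent with the hierarchical setup sketched above: if propagule pressure is Poisson with mean lam and each introduced individual survives independently with probability p, the marginal number of survivors is again Poisson, with mean lam * p (Poisson thinning). A quick simulation check (parameter values are illustrative, not from the paper):

```python
import math
import random

def poisson_draw(lam, rng):
    """Poisson(lam) variate via Knuth's multiplication method (fine for small lam)."""
    threshold, k, prod = math.exp(-lam), 0, rng.random()
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k

def surviving_introductions(lam, p, rng):
    """Thin a Poisson(lam) propagule count by per-individual survival probability p."""
    return sum(rng.random() < p for _ in range(poisson_draw(lam, rng)))

rng = random.Random(42)
lam, p, trials = 5.0, 0.3, 20000
mean_survivors = sum(surviving_introductions(lam, p, rng) for _ in range(trials)) / trials
print(round(mean_survivors, 1))  # ≈ lam * p = 1.5
```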

5.
Uebersax JS, Grove WM. Biometrics 1993;49(3):823-835
This article presents a latent distribution model for the analysis of agreement on dichotomous or ordered category ratings. The model includes parameters that characterize bias, category definitions, and measurement error for each rater or test. Parameter estimates can be used to evaluate rater performance and to improve classification or measurement with the use of multiple ratings. A simple maximum likelihood estimation procedure is described. Two examples illustrate the approach. Although considered in the context of analyzing rater agreement, the model provides a general approach for mixture analysis using two or more ordered-category measures.

6.
Fears TR, Brown CC. Biometrics 1986;42(4):955-960
There are a number of possible designs for case-control studies. The simplest uses two separate simple random samples, but an actual study may use more complex sampling procedures. Typically, stratification is used to control for the effects of one or more risk factors in which we are interested. It has been shown (Anderson, 1972, Biometrika 59, 19-35; Prentice and Pyke, 1979, Biometrika 66, 403-411) that the unconditional logistic regression estimators apply under stratified sampling, so long as the logistic model includes a term for each stratum. We consider the case-control problem with stratified samples and assume a logistic model that does not include terms for strata, i.e., for fixed covariates the (prospective) probability of disease does not depend on stratum. We assume knowledge of the proportion sampled in each stratum as well as the total number in the stratum. We use this knowledge to obtain the maximum likelihood estimators for all parameters in the logistic model including those for variables completely associated with strata. The approach may also be applied to obtain estimators under probability sampling.

7.
We consider the evolution of a trait, which is under both genetic and phenotypic transmission. An individual is always born in one state but can be converted to the other before reaching adulthood. If the conversion takes place by a learning process, the native state is called "unskilled," and that acquired by learning is called "skilled." If phenotypic conversion takes place by way of infection, the native state is uninfected, and can be converted to infected. Native and converted phenotypes may be subject to selection; acquiring a skill may lead to selective advantage of skilled versus unskilled, while contracting a disease may involve a selective disadvantage. Conversion probability is a function of the parental phenotypes. In some of our models we assume that only one parent has teaching ability (or transmits the disease) and in others we consider more general situations. The probability of learning (or of contracting the disease) may be determined by the individual's genotype. A diallelic locus is considered. The evolution of the genotypes and the phenotypes is studied in a variety of situations. Equilibria, and in a few simple cases the full dynamics of the phenotypes and genotypes in the population, are given. The usual equilibrium for heterozygote advantage is found to depend, in the present case, on the parameters of the learning process. Oscillatory equilibria and more than one stable equilibrium can exist in certain circumstances. Even in the absence of genotypic differences for the conversion probability, gene frequencies may change.

8.
Roy J. Biometrics 2003;59(4):829-836
In longitudinal studies with dropout, pattern-mixture models form an attractive modeling framework to account for nonignorable missing data. However, pattern-mixture models assume that the components of the mixture distribution are entirely determined by the dropout times. That is, two subjects with the same dropout time have the same distribution for their response with probability one. As that is unlikely to be the case, this assumption may lead to classification error. In addition, if there are certain dropout patterns with very few subjects, which often occurs when the number of observation times is relatively large, pattern-specific parameters may be weakly identified or require identifying restrictions. We propose an alternative approach, which is a latent-class model. The dropout time is assumed to be related to the unobserved (latent) class membership, where the number of classes is less than the number of observed patterns; a regression model for the response is specified conditional on the latent variable. This is a type of shared-parameter model, where the shared "parameter" is discrete. Parameter estimates are obtained using the method of maximum likelihood. Averaging the estimates of the conditional parameters over the distribution of the latent variable yields estimates of the marginal regression parameters. The methodology is illustrated using longitudinal data on depression from a study of HIV in women.

9.
This work is motivated by clinical trials in chronic heart failure disease, where treatment has effects both on morbidity (assessed as recurrent non-fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, clinical trial results are more often presented as treatment effect estimates derived from marginal proportional hazards models, that is, a Cox model for mortality and an Andersen–Gill model for recurrent hospitalisations. We show how these marginal hazard ratios and their estimates depend on the association between the risk processes, when these are actually linked by shared or dependent frailty terms. First we derive the marginal hazard ratios as a function of time. Then, applying least false parameter theory, we show that the marginal hazard ratio estimate for the hospitalisation rate depends on study duration and on parameters of the underlying joint frailty model. In particular, we identify parameters, for example the treatment effect on mortality, that determine whether the marginal hazard ratio estimate for hospitalisations is smaller than, equal to, or larger than the conditional one. How this affects rejection probabilities is further investigated in simulation studies. Our findings can be used to interpret marginal hazard ratio estimates in heart failure trials and are illustrated by the results of the CHARM-Preserved trial (where CHARM is the 'Candesartan in Heart failure Assessment of Reduction in Mortality and morbidity' programme).

10.
Gregorius HR, Ross MD, Gillet EM. Genetics 1983;103(3):529-544
A one-locus two-allele model of trioecy (presence of hermaphrodites, males and females in one population) is considered, in order to study the conditions for the persistence of this system. All possible assignments of the three sex types to the three genotypes are considered. This leads to three different modes of inheritance of trioecy, namely (a) females heterozygous, (b) males heterozygous and (c) hermaphrodites heterozygous, where in each mode each of the remaining two sex types is homozygous for one of the alleles. For mode (c) trioecy is always persistent, and the dependence of the sex ratio (for the three sex types) on the ovule and pollen fertilities and on the hermaphrodite selfing rate is specified. For the other two modes, (a) and (b), trioecy is not protected, i.e., it may not persist for any fertilities, viabilities or selfing rates. Thus, in this situation it is important to study the conditions under which the "marginal" systems of sexuality of trioecy, i.e., hermaphroditism, dioecy and gynodioecy in mode (a), and hermaphroditism, dioecy and androdioecy in mode (b), may become established. The results show that each marginal system may evolve from any other via trioecy. The evolution of dioecy is easier in mode (a) than in (b), so that female heterogamety would be expected to occur more often than male heterogamety in the present model. Under some conditions the breeding system obtained in equilibrium populations may depend on the initial genotype frequencies. The necessity of considering modes of inheritance for sexual polymorphisms is demonstrated by comparing our results with those obtained from an evolutionarily stable strategy (ESS) analysis of a purely phenotypic model.

11.
Explaining the evolution of cooperation remains one of the greatest problems for both biology and social science. Classical theories suggest that a cooperative equilibrium or evolutionarily stable strategy between partners can be maintained through genetic relatedness or reciprocity. These theories rest on the assumption that partners interact symmetrically, with equal payoffs, in a game of cooperative interaction. However, the payoffs between partners are usually not equal, and partners therefore often interact asymmetrically in real cooperative systems. With the Hawk-Dove model, we find that the probability of cooperation between partners depends closely on the payoff ratio: the higher the payoff ratio between recipients and cooperative actors, the greater the probability of cooperative interaction between the partners involved. The greatest probability of conflict between cooperative partners occurs when the payoffs are equal. The results show that this asymmetric relationship is one of the key dynamics of the evolution of cooperation, and that a pure cooperation strategy (i.e., a Nash equilibrium) does not exist in asymmetrical cooperation systems, which helps explain the direct conflict observed in almost all well-documented cooperation systems. The model developed here also shows that the cost-to-benefit ratio of cooperation is negatively correlated with the probability of cooperative interaction. A smaller cost-to-benefit ratio might be created by the limited dispersal ability or exit cost of the partners involved; it makes punishment of non-cooperative individuals by the recipient more credible, and therefore makes stable cooperative interaction easier to maintain.
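The symmetric baseline that this asymmetric analysis departs from can be written down directly: in the classic Hawk-Dove game with resource value V and fight cost C > V, the mixed ESS plays Hawk with probability V / C, at which point Hawk and Dove earn equal expected payoffs. A minimal sketch (payoff values are illustrative):

```python
def expected_payoff(plays_hawk, p_hawk, V, C):
    """Expected payoff of a pure strategy against a population playing Hawk w.p. p_hawk.

    Payoff matrix: Hawk vs Hawk = (V - C) / 2, Hawk vs Dove = V,
                   Dove vs Hawk = 0,           Dove vs Dove = V / 2.
    """
    if plays_hawk:
        return p_hawk * (V - C) / 2 + (1 - p_hawk) * V
    return (1 - p_hawk) * V / 2

def mixed_ess(V, C):
    """Hawk probability at the mixed ESS (valid when C > V)."""
    return V / C

V, C = 2.0, 4.0
p = mixed_ess(V, C)
# At the ESS, Hawk and Dove are payoff-indifferent, so no pure strategy can invade.
print(p, expected_payoff(True, p, V, C), expected_payoff(False, p, V, C))  # 0.5 0.5 0.5
```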

12.
In many observational studies, individuals are measured repeatedly over time, although not necessarily at a set of pre-specified occasions. Instead, individuals may be measured at irregular intervals, with those having a history of poorer health outcomes being measured with somewhat greater frequency and regularity. In this paper, we consider likelihood-based estimation of the regression parameters in marginal models for longitudinal binary data when the follow-up times are not fixed by design, but can depend on previous outcomes. In particular, we consider assumptions regarding the follow-up time process that result in the likelihood function separating into two components: one for the follow-up time process, the other for the outcome measurement process. The practical implication of this separation is that the follow-up time process can be ignored when making likelihood-based inferences about the marginal regression model parameters. That is, maximum likelihood (ML) estimation of the regression parameters relating the probability of success at a given time to covariates does not require that a model for the distribution of follow-up times be specified. However, to obtain consistent parameter estimates, the multinomial distribution for the vector of repeated binary outcomes must be correctly specified. In general, ML estimation requires specification of all higher-order moments and the likelihood for a marginal model can be intractable except in cases where the number of repeated measurements is relatively small. To circumvent these difficulties, we propose a pseudolikelihood for estimation of the marginal model parameters. The pseudolikelihood uses a linear approximation for the conditional distribution of the response at any occasion, given the history of previous responses. The appeal of this approximation is that the conditional distributions are functions of the first two moments of the binary responses only. 
When the follow-up times depend only on the previous outcome, the pseudolikelihood requires correct specification of the conditional distribution of the current outcome given the outcome at the previous occasion only. Results from a simulation study and a study of asymptotic bias are presented. Finally, we illustrate the main results using data from a longitudinal observational study that explored the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in children.

13.
A phase-contrast microscopy procedure for the evaluation, measurement and classification of erythrocyte marginal zones (ERZ), which can be performed simply on specially prepared and stained smears, is described. Classification is made with the help of a classification table and a picture-series table after measuring the erythrocyte marginal zones with an ocular measuring plate or with a screw-micrometer eyepiece. The same preparation and staining technique also enables erythrocyte marginal zones to be evaluated by automatic image analysis and by scanning electron microscopy. With this technique, which can be applied to human erythrocytes as well as to those of dogs, rats and fish, changes of erythrocyte marginal zones caused by haematological, immunological and toxicological processes can be quantified. The mode of action of this preparation technique is dealt with in the discussion.

14.
Size control models of Saccharomyces cerevisiae cell proliferation.
By using time-lapse photomicroscopy, the individual cycle times and sizes at bud emergence were measured for a population of Saccharomyces cerevisiae cells growing exponentially under balanced growth conditions in a specially constructed filming slide. There was extensive variability in both parameters for daughter and parent cells. The data on 162 pairs of siblings were analyzed for agreement with the predictions of the transition probability hypothesis and the critical-size hypothesis of yeast cell proliferation and also with a model incorporating both of these hypotheses in tandem. None of these models accounted for all of the experimental data, but two modified models did give good agreement with all of the data. The wobbly tandem model proposes that cells need to attain a critical size, which is very variable, enabling them to enter a start state from which they exit with first-order kinetics. The sloppy size control model suggests that cells have an increasing probability per unit time of traversing start as they increase in size, reaching a high plateau value which is less than one. Both models predict that the kinetics of entry into the cell division sequence will strongly depend on variability in birth size and thus will be quite different for daughters and parents of the asymmetrically dividing yeast cells. Mechanisms underlying these models are discussed.

15.
Regression with frailty in survival analysis
In studies of survival, the hazard function for each individual may depend on observed risk variables but usually not all such variables are known or measurable. This unknown factor of the hazard function is usually termed the individual heterogeneity or frailty. When survival is time to the occurrence of a particular type of event and more than one such time may be obtained for each individual, frailty is a common factor among such recurrence times. A model including frailty is fitted to such repeated measures of recurrence times.
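Under the most common specification, a gamma-distributed frailty with unit mean and variance theta multiplying the hazard, the population-averaged survival has a closed form; a small sketch (this parameterisation is an assumption for illustration, not taken from the abstract):

```python
import math

def marginal_survival(cum_hazard, theta):
    """Population-averaged survival under gamma frailty Z with mean 1, variance theta:
    S(t) = E[exp(-Z * H(t))] = (1 + theta * H(t)) ** (-1 / theta)."""
    return (1 + theta * cum_hazard) ** (-1.0 / theta)

# With strong frailty, marginal survival exceeds the no-frailty curve exp(-H(t)),
# because the frailest individuals fail first (Jensen's inequality):
H = 1.0
print(round(marginal_survival(H, 1.0), 3), round(math.exp(-H), 3))  # 0.5 0.368
# As theta -> 0 the frailty degenerates and the two curves coincide:
print(round(marginal_survival(H, 1e-9), 3))  # 0.368
```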

16.
Cluster randomized trials (CRTs) frequently recruit a small number of clusters, therefore necessitating the application of small-sample corrections for valid inference. A recent systematic review indicated that CRTs reporting right-censored, time-to-event outcomes are not uncommon and that the marginal Cox proportional hazards model is one of the common approaches used for primary analysis. While small-sample corrections have been studied under marginal models with continuous, binary, and count outcomes, no prior research has been devoted to the development and evaluation of bias-corrected sandwich variance estimators when clustered time-to-event outcomes are analyzed by the marginal Cox model. To improve current practice, we propose nine bias-corrected sandwich variance estimators for the analysis of CRTs using the marginal Cox model and report on a simulation study to evaluate their small-sample properties. Our results indicate that the optimal choice of bias-corrected sandwich variance estimator for CRTs with survival outcomes can depend on the variability of cluster sizes and can also differ slightly depending on whether it is evaluated according to relative bias or type I error rate. Finally, we illustrate the new variance estimators in a real-world CRT where the conclusion about intervention effectiveness differs depending on the use of small-sample bias corrections. The proposed sandwich variance estimators are implemented in the R package CoxBcv.

17.
The classical theory of island biogeography has as its basic variable the presence or absence of species on entire islands, and as its basic processes colonization and extinction rates on entire islands as functions of island area, distance, and so forth. Yet for many organisms with limited dispersal abilities, it may be more reasonable to regard larger islands as ensembles of local populations coupled by within-island dispersal. Conceptual arguments and a simple patch occupancy model are used to examine the potential relevance of such internal spatial dynamics in explaining area effects, expressed via the probability that a species is present per unit area as a function of total island area. The model suggests that strong area effects depend on a rather fine balance between local colonization and extinction rates. A fruitful direction of future research should be the application of patch dynamic theory to classic island biogeographic questions and systems.
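A patch-occupancy model in the spirit described is Levins's classic metapopulation model, dp/dt = c*p*(1-p) - e*p, whose nontrivial equilibrium occupancy is p* = 1 - e/c; a minimal sketch (the rates are illustrative, and this is the textbook model rather than the paper's own):

```python
def levins_equilibrium(c, e):
    """Equilibrium occupancy of the Levins patch model dp/dt = c*p*(1-p) - e*p."""
    return max(0.0, 1.0 - e / c)

def simulate_levins(c, e, p0=0.5, dt=0.001, steps=50000):
    """Euler-integrate the Levins model; converges to the equilibrium for p0 > 0."""
    p = p0
    for _ in range(steps):
        p += dt * (c * p * (1.0 - p) - e * p)
    return p

print(round(levins_equilibrium(1.0, 0.4), 3))  # 0.6
print(round(simulate_levins(1.0, 0.4), 3))     # 0.6
```

The "fine balance" noted in the abstract is visible here: occupancy collapses to zero as e/c approaches 1, so modest shifts in local colonization or extinction rates move a species from near-ubiquity to absence.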

18.
Annotation of the rapidly accumulating body of sequence data relies heavily on the detection of remote homologues and functional motifs in protein families. The most popular methods rely on sequence alignment. These include programs that use a scoring matrix to compare the probability of a potential alignment with random chance and programs that use curated multiple alignments to train profile hidden Markov models (HMMs). Related approaches depend on bootstrapping multiple alignments from a single sequence. However, alignment-based programs have limitations. They make the assumption that contiguity is conserved between homologous segments, which may not be true in genetic recombination or horizontal transfer. Alignments also become ambiguous when sequence similarity drops below 40%. This has kindled interest in classification methods that do not rely on alignment. An approach to classification without alignment based on the distribution of contiguous sequences of four amino acids (4-grams) was developed. Interest in 4-grams stemmed from the observation that almost all theoretically possible 4-grams (20^4) occur in natural sequences and the majority of 4-grams are uniformly distributed. This implies that the probability of finding identical 4-grams by random chance in unrelated sequences is low. A Bayesian probabilistic model was developed to test this hypothesis. For each protein family in Pfam-A and PIR-PSD, a feature vector called a probe was constructed from the set of 4-grams that best characterised the family. In rigorous jackknife tests, unknown sequences from Pfam-A and PIR-PSD were compared with the probes for each family. A classification result was deemed a true positive if the probe match with the highest probability was in first place in a rank-ordered list. This was achieved in 70% of cases. Analysis of false positives suggested that the precision might approach 85% if selected families were clustered into subsets.
Case studies indicated that the 4-grams in common between an unknown and the best matching probe correlated with functional motifs from PRINTS. The results showed that remote homologues and functional motifs could be identified from an analysis of 4-gram patterns.
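The representation itself is easy to make concrete; a toy sketch of 4-gram extraction and probe matching (the sequence, probe, and fraction-based score are made up for illustration — the actual method scores matches with a Bayesian probabilistic model):

```python
from collections import Counter

def four_grams(sequence):
    """Counts of all overlapping 4-grams (contiguous windows of four residues)."""
    return Counter(sequence[i:i + 4] for i in range(len(sequence) - 3))

def probe_match(sequence, probe):
    """Fraction of a family probe's 4-grams present in an unknown sequence."""
    grams = four_grams(sequence)
    return sum(1 for g in probe if g in grams) / len(probe)

probe = {"MKTA", "KTAY", "TAYI"}  # hypothetical family-characteristic 4-grams
print(probe_match("MKTAYIAKQR", probe))  # 1.0 — all probe 4-grams found
```

Classification then amounts to scoring an unknown sequence against every family's probe and ranking the families by match score.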

19.
A priority of the Global Polio Eradication Initiative (GPEI) 2013–2018 strategic plan is to evaluate the potential impact on polio eradication resulting from expanding one or more Supplementary Immunization Activities (SIAs) to children beyond age five years in polio-endemic countries. It has been hypothesized that such expanded age group (EAG) campaigns could accelerate polio eradication by eliminating immunity gaps in older children that may have resulted from past periods of low vaccination coverage. Using an individual-based mathematical model, we quantified the impact of EAG campaigns in terms of probability of elimination, reduction in polio transmission and age-stratified immunity levels. The model was specifically calibrated to seroprevalence data from a polio-endemic region: Zaria, Nigeria. We compared the impact of EAG campaigns, which depend only on age, to more targeted interventions which focus on reaching missed populations. We found that EAG campaigns would not significantly improve prospects for polio eradication; the probability of elimination increased by 8% (from 24% at baseline to 32%) when expanding three annual SIAs to 5–14 year old children and by 18% when expanding all six annual SIAs. In contrast, expanding only two of the annual SIAs to target hard-to-reach populations at modest vaccination coverage (representing less than one tenth of the additional vaccinations required for the six-SIA EAG scenario) increased the probability of elimination by 55%. Implementation of EAG campaigns in polio-endemic regions would not improve prospects for eradication. In endemic areas, vaccination campaigns which do not target missed populations will not benefit polio eradication efforts.

20.
Pang Z, Kuk AY. Biometrics 2005;61(4):1076-1084
Existing distributions for modeling fetal response data in developmental toxicology, such as the beta-binomial distribution, have a tendency to inflate the probability of no malformed fetuses, and hence understate the risk of having at least one malformed fetus within a litter. As opposed to a shared probability extra-binomial model, we advocate a shared response model that allows a random number of fetuses within the same litter to share a common response. An explicit formula is given for the probability function, and graphical plots suggest that it does not suffer from the problem of assigning too much probability to the event of no malformed fetuses. The EM algorithm can be used to estimate the model parameters. Results of a simulation study show that the EM estimates are nearly unbiased and the associated confidence intervals based on the usual standard error estimates have coverage close to the nominal level. Simulation results also suggest that the shared response model estimates of the marginal malformation probabilities are robust to misspecification of the distributional form, but not so for the estimates of intralitter correlation and the litter-level probability of having at least one malformed fetus. The proposed model is fitted to a set of data from the U.S. National Toxicology Program. For the same dose-response relationship, the fit based on the shared response distribution is superior to that based on the beta-binomial, and comparable to that based on the recently proposed q-power distribution (Kuk, 2004, Applied Statistics 53, 369-386). An advantage of the shared response model over the q-power distribution is that it is more interpretable and can be extended more easily to the multivariate case. To illustrate this, a bivariate shared response model is fitted to fetal response data involving visceral and skeletal malformation.
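The zero-inflation tendency described for the beta-binomial is easy to exhibit numerically; a small sketch (litter size, marginal malformation probability and intra-litter correlation are illustrative choices, not the NTP data):

```python
import math

def log_beta(a, b):
    """Log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def betabinom_pmf(k, n, a, b):
    """Beta-binomial pmf: C(n, k) * B(k + a, n - k + b) / B(a, b)."""
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# Litter of n = 10 fetuses, marginal malformation probability a/(a+b) = 0.2,
# intra-litter correlation 1/(a+b+1) = 1/3.
n, a, b = 10, 0.4, 1.6
p_none_bb = betabinom_pmf(0, n, a, b)
p_none_binom = 0.8 ** n  # independent-fetus binomial with the same marginal probability
# The beta-binomial piles probability on k = 0, understating P(at least one malformed).
print(round(p_none_bb, 3), round(p_none_binom, 3))  # 0.424 0.107
```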
