Similar Documents
20 similar documents found (search time: 453 ms)
1.
Perpendicular distance models for line transect sampling   (Total citations: 3; self-citations: 0; citations by others: 3)
S. T. Buckland, Biometrics, 1985, 41(1): 177-195
Perpendicular distance line transect models are examined to assess whether any single model can provide a general procedure for analysing line transect data. Of the two-parameter models considered, the hazard-rate model appears promising, whereas the exponential power series and exponential quadratic models do not. Of the nonparametric models, the Fourier series is the best developed, and is favoured by many researchers as a general model. However, for a given data set, the Fourier series estimate may be highly dependent on the number of terms selected, and so the model is not a clear improvement over the hazard-rate model. A similar variable-term model, using Hermite polynomials, is considered, and is shown to be less dependent on the number of terms selected. There has been some debate about whether the derivative of the density function of perpendicular distances evaluated at 0 should be 0, so that the function has a "shoulder." The problem is examined in detail, and it is argued that reliable estimation is not possible from line transect data unless a shoulder exists. Many data sets appear to exhibit no shoulder; possible reasons are examined.
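For reference, a hedged sketch of the two estimators discussed above, written in the parameterization most commonly used in the distance-sampling literature; the paper's own notation may differ.

```latex
% Hazard-rate detection function (sigma > 0, b > 1); note g(0) = 1 and
% g'(0) = 0, i.e. the function has a "shoulder" at zero distance:
g(x) = 1 - \exp\!\left[-\left(\frac{x}{\sigma}\right)^{-b}\right]

% Fourier series estimator of the perpendicular-distance density on [0, w],
% with the number of terms m selected from the data:
\hat{f}(x) = \frac{1}{w} + \sum_{k=1}^{m} \hat{a}_k \cos\!\left(\frac{k\pi x}{w}\right)
```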

2.
Principal component analysis (PCA) is a dimensionality reduction and data analysis tool commonly used in many areas. The main idea of PCA is to represent high-dimensional data with a few representative components that capture most of the variance present in the data. However, there is an obvious disadvantage of traditional PCA when it is applied to analyze data where interpretability is important. In applications where the features have physical meaning, we lose the ability to interpret the principal components extracted by conventional PCA because each principal component is a linear combination of all the original features. For this reason, sparse PCA has been proposed to improve the interpretability of traditional PCA by introducing sparsity to the loading vectors of principal components. The sparse PCA can be formulated as an ℓ1-regularized optimization problem, which can be solved by proximal gradient methods. However, these methods do not scale well because computation of the exact gradient is generally required at each iteration. The stochastic gradient framework addresses this challenge by computing an expected gradient at each iteration. Nevertheless, stochastic approaches typically have low convergence rates due to the high variance. In this paper, we propose a convex sparse principal component analysis (Cvx-SPCA), which leverages a proximal variance-reduced stochastic scheme to achieve a geometric convergence rate. We further show that the convergence analysis can be significantly simplified by using a weak condition which allows a broader class of objectives to be applied. The efficiency and effectiveness of the proposed method are demonstrated on a large-scale electronic medical record cohort.
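As an aside, a minimal Python sketch of the soft-thresholding proximal operator that underlies ℓ1-regularized proximal methods of this kind. It is illustrative only and is not the Cvx-SPCA algorithm; the toy objective, the function name prox_grad_sparse_pc, and the parameter values are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: element-wise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_sparse_pc(X, lam, step=1e-3, iters=500):
    """Toy proximal-gradient loop for an l1-penalized PCA-like objective.

    Roughly minimizes -v' S v + lam * ||v||_1 over ||v||_2 <= 1, where S is
    the sample covariance of X.  Illustrative sketch, not Cvx-SPCA.
    """
    S = np.cov(X, rowvar=False)
    v = np.random.default_rng(0).standard_normal(S.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        grad = -2.0 * S @ v                                # gradient of the smooth part
        v = soft_threshold(v - step * grad, step * lam)    # proximal step
        n = np.linalg.norm(v)
        if n > 1.0:                                        # project back onto the unit ball
            v /= n
    return v

X = np.random.default_rng(1).standard_normal((200, 10))
print(prox_grad_sparse_pc(X, lam=5.0))
```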

3.
Important aspects of population evolution have been investigated using nucleotide sequences. Under the neutral Wright–Fisher model, the scaled mutation rate represents twice the average number of new mutations per generation and is one of the key parameters in population genetics. In this study, we present various methods of estimating this parameter, analytical studies of their asymptotic behavior, and simulation-based comparisons of the distributions of these estimators. As knowledge of the genealogy is needed to compute the maximum likelihood estimator (MLE), an application with real data is also presented, using the jackknife to correct the bias of the MLE that can be introduced by estimation of the tree. We prove analytically that Watterson's estimator and the MLE are asymptotically equivalent, with the same rate of convergence to normality. Furthermore, we show that the MLE has a better rate of convergence than Watterson's estimator for values of the parameter greater than one, and that this relationship is reversed when the parameter is less than one.
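To make the quantities concrete, here is a small Python sketch of Watterson's estimator computed from the number of segregating sites; the example sequences are made up for illustration.

```python
def watterson_theta(sequences):
    """Watterson's estimator of the scaled mutation rate theta.

    theta_W = S / a_n, where S is the number of segregating sites among the
    n aligned sequences and a_n = sum_{i=1}^{n-1} 1/i.
    """
    n = len(sequences)
    length = len(sequences[0])
    # count sites at which more than one allele is present
    s = sum(len({seq[j] for seq in sequences}) > 1 for j in range(length))
    a_n = sum(1.0 / i for i in range(1, n))
    return s / a_n

seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGT", "ACGTACGT"]
print(watterson_theta(seqs))   # 2 segregating sites, a_4 = 1 + 1/2 + 1/3
```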

4.
Budding yeast, Saccharomyces cerevisiae, is commonly used as a system to study cellular ageing. Yeast mother cells are capable of only a limited number of divisions before they undergo senescence, whereas newly formed daughters usually have their replicative age "reset" to zero. Accumulation of extrachromosomal ribosomal DNA circles (ERCs) appears to be an important contributor to ageing in yeast, and we describe a mathematical model that we developed to examine this process. We show that an age-related accumulation of ERCs readily explains the observed features of yeast ageing but that in order to match the experimental survival curves quantitatively, it is necessary that the probability of ERC formation increases with the age of the cell. This implies that some other mechanism(s), in addition to ERC accumulation, must underlie yeast ageing. We also demonstrate that the model can be used to gain insight into how an extra copy of the Sir2 gene might extend lifespan and we show how the model makes novel, testable predictions about patterns of age-specific mortality in yeast populations.
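The qualitative structure described above can be illustrated with a toy simulation, a sketch under assumed parameter values rather than the paper's calibrated model: ERC formation probability rises with the mother's age, existing ERCs double at each division and are retained by the mother, and the cell senesces above a threshold ERC count.

```python
import numpy as np

rng = np.random.default_rng(3)

def replicative_lifespan(p0=0.01, age_slope=0.005, death_threshold=1000, max_div=100):
    """Toy ERC-accumulation model of yeast replicative ageing (illustrative only)."""
    ercs = 0
    for age in range(1, max_div + 1):
        ercs *= 2                                   # ERCs replicate each division
        if rng.random() < p0 + age_slope * age:     # age-dependent ERC formation
            ercs += 1
        if ercs > death_threshold:                  # senescence above the threshold
            return age
    return max_div

lifespans = [replicative_lifespan() for _ in range(5000)]
print(np.mean(lifespans), np.percentile(lifespans, [25, 50, 75]))
```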

5.
It is commonly asserted that when extrinsic mortality is high, individuals should invest early in reproduction. This intuition thrives in the literature on life-history theory and human behavior, yet it has been criticized repeatedly on the basis of mathematical models. The intuition is indeed wrong; but a recent theoretical criticism has confused the reason why it is wrong, thereby obscuring earlier and sounder criticisms. In the present article, based on the simplest possible model, we sought to clarify these issues. We confirm earlier findings that extrinsic mortality can affect the evolution of the pace of life, not because it leaves little time to reproduce, but through its effects on density-dependent competition. This result highlights the importance of accounting for density-dependence in theoretical models and data analyses. Further, we find little support for the recent claim that the direction of selection on a reaction norm in a variable environment cannot be easily inferred from models made in homogeneous environments. In conclusion, although life-history theory is still imperfect, it has provided simple results that deserve to be understood.

6.
Dietary restriction (DR) extends lifespan in multiple species from various taxa. This effect can arise in two distinct but not mutually exclusive ways: a change in aging rate and/or in vulnerability to the aging process (i.e. initial mortality rate). When DR affects vulnerability, this lowers mortality instantly, whereas a change in aging rate will gradually lower mortality risk over time. Unraveling how DR extends lifespan is of interest because it may guide us toward understanding the mechanism(s) mediating lifespan extension and also has practical implications for the application of DR. We reanalyzed published survival data from 82 pairs of survival curves from DR experiments in rats and mice by fitting Gompertz and Gompertz–Makeham models. The addition of the Makeham parameter has been reported to improve the estimation of Gompertz parameters. Both models separate initial mortality rate (vulnerability) from an age-dependent increase in mortality (aging rate). We subjected the obtained Gompertz parameters to a meta-analysis. We find that DR reduced aging rate without affecting vulnerability. The latter contrasts with the conclusion of a recent analysis of a largely overlapping data set, and we show how the earlier finding is due to a statistical artifact. Our analysis indicates that the biology underlying the life-extending effect of DR in rodents likely involves attenuated accumulation of damage, which contrasts with the acute effect of DR on mortality reported for Drosophila. Moreover, our findings show that the often-reported correlation between aging rate and vulnerability does not prevent aging rate from changing without a simultaneous change in vulnerability.
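For reference, the two hazard functions referred to above, in a common parameterization; the paper's analysis may use a different but equivalent notation.

```latex
% Gompertz hazard: a = initial mortality rate (vulnerability),
%                  b = rate of increase of mortality with age (aging rate).
\mu(t) = a\,e^{bt}

% Gompertz-Makeham hazard: adds an age-independent (Makeham) term c.
\mu(t) = c + a\,e^{bt}
```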

7.
Antipredatory benefits are generally considered important in the evolution and maintenance of animal aggregations. One such benefit is the confusion effect: the reduced ease of prey capture experienced by some predators resulting from an inability to single out and attack an individual prey from a group as a result of cognitive or sensory limitations. Although widely cited, empirical data that do any more than demonstrate the effect are sparse. Here, we use the artificial system of humans attempting to "capture" images on a computer screen using a computer mouse to explore several hypotheses on the properties of the confusion effect. This system has the advantage that we can control the behavior of the prey and eliminate the risk of confounding factors due to differential prey behavior and/or phenotypes in groups of different sizes. One important result of our study is the demonstration that the confusion effect can occur in the absence of these confounding factors and indeed in the absence of complex coordinated behavior between individuals in the prey group (such as are commonly observed in schooling fish). We also demonstrate for the first time that an individual prey item can still benefit from the confusion effect if it is only loosely associated in space with a larger group of similar prey. Both these results suggest that the confusion effect can arise under less specialist circumstances than previously realized, and so the importance of this mechanism in shaping aggregation by prey and predator–prey interactions may be substantially greater than previously considered.

8.
We consider studies of cohorts of individuals after a critical event, such as an injury, with the following characteristics. First, the studies are designed to measure "input" variables, which describe the period before the critical event, and to characterize the distribution of the input variables in the cohort. Second, the studies are designed to measure "output" variables, primarily mortality after the critical event, and to characterize the predictive (conditional) distribution of mortality given the input variables in the cohort. Such studies often possess the complication that the input data are missing for those who die shortly after the critical event because the data collection takes place after the event. Standard methods of dealing with the missing inputs, such as imputation or weighting methods based on an assumption of ignorable missingness, are known to be generally invalid when the missingness of inputs is nonignorable, that is, when the distribution of the inputs is different between those who die and those who live. To address this issue, we propose a novel design that obtains and uses information on an additional key variable: a treatment or externally controlled variable which, if set at its "effective" level, could have prevented the death of those who died. We show that the new design can be used to draw valid inferences for the marginal distribution of inputs in the entire cohort, and for the conditional distribution of mortality given the inputs, also in the entire cohort, even under nonignorable missingness. The crucial framework that we use is principal stratification based on the potential outcomes, here mortality under both levels of treatment. We also show using illustrative preliminary injury data that our approach can reveal results that are more reasonable than the results of standard methods, in relatively dramatic ways. Thus, our approach suggests that the routine collection of data on variables that could be used as possible treatments in such studies of inputs and mortality should become common.

9.
We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations.
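The closed-form case mentioned above, the Fisher information (affine-invariant) geometric mean of two SPD matrices, can be sketched in a few lines of Python; the matrices used here are arbitrary examples.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

def spd_geometric_mean(A, B):
    """Closed-form affine-invariant geometric mean of two SPD matrices:
    A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    A_half = sqrtm(A)
    A_half_inv = inv(A_half)
    middle = sqrtm(A_half_inv @ B @ A_half_inv)
    G = A_half @ middle @ A_half
    return np.real((G + G.T) / 2)   # symmetrize away numerical noise

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
print(spd_geometric_mean(A, B))
```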

10.
11.
Two conditions are sufficient to indicate the need for risk-sensitive, adaptive analysis: (i) the outcomes of a behavior must be unpredictable to some degree and (ii) the relationship between outcomes and their value (in terms of fitness or utility) must be nonlinear. We argue that these conditions are common, and develop a general model for the analysis of risk-sensitive fertility behavior when long-term reproductive outcomes are unpredictable. We show that unpredictability is likely to have a patterned effect on fertility behavior, beyond adjustment for expected or average mortality, and we analyze the conditions under which that effect will be to promote or dampen fertility (positive or negative variance compensation, respectively). We then describe the implications of this variance compensation hypothesis (VCH) for the analysis of demographic transitions, agricultural intensification, fertility differences in natural-fertility populations, and clutch size. We conclude by lamenting that data are lacking to test the VCH, probably because it has not been appreciated that there can be patterned and effective behavioral adjustments to stochastic features of the environment.  
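The core logic (a nonlinear value function makes variance matter, via Jensen's inequality) can be illustrated with a minimal Python example; the distributions and value functions are arbitrary choices for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reproductive outcomes with the same mean but different variance.
low_var  = rng.normal(loc=4.0, scale=0.5, size=100_000)
high_var = rng.normal(loc=4.0, scale=2.0, size=100_000)

def concave(x):   # diminishing returns: variance lowers expected value
    return np.sqrt(np.clip(x, 0.0, None))

def convex(x):    # accelerating returns: variance raises expected value
    return np.clip(x, 0.0, None) ** 2

print(concave(low_var).mean(), concave(high_var).mean())   # negative compensation
print(convex(low_var).mean(),  convex(high_var).mean())    # positive compensation
```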

12.

Background

Copy number variants (CNVs) are a potentially important component of the genetic contribution to risk of common complex diseases. Analysis of the association between CNVs and disease requires that uncertainty in CNV copy-number calls, which can be substantial, be taken into account; failure to consider this uncertainty can lead to biased results. Therefore, there is a need to develop and use appropriate statistical tools. To address this issue, we have developed CNVassoc, an R package for carrying out association analysis of common copy number variants in population-based studies. This package includes functions for testing for association with different classes of response variables (e.g. class status, censored data, counts) under a series of study designs (case-control, cohort, etc.) and inheritance models, adjusting for covariates. The package includes functions for inferring copy number (CNV genotype calling), but can also accept copy number data generated by other algorithms (e.g. CANARY, CGHcall, IMPUTE).

Results

Here we present a new R package, CNVassoc, that can deal with different types of CNV arising from different platforms such as MLPA or aCGH. Through a real data example we illustrate that our method is able to incorporate uncertainty in the association process. We also show how our package can be useful when analyzing imputed SNP data. Through a simulation study we show that CNVassoc outperforms CNVtools in terms of computing time as well as convergence failure rate.

Conclusions

We provide a package that outperforms the existing ones in terms of modelling flexibility, power, convergence rate, ease of covariate adjustment, and requirements for sample size and signal quality. Therefore, we offer CNVassoc as a method for routine use in CNV association studies.

13.
The two-stage design has long been recognized as a cost-effective way of conducting biomedical studies. In many trials, auxiliary covariate information may also be available, and it is of interest to exploit these auxiliary data to improve the efficiency of inferences. In this paper, we propose a two-stage design with a continuous outcome where the second-stage data are sampled with an "outcome-auxiliary-dependent sampling" (OADS) scheme. We propose an estimator that maximizes an estimated likelihood function. We show that the proposed estimator is consistent and asymptotically normally distributed. A simulation study indicates that greater efficiency gains can be achieved under the proposed two-stage OADS design by utilizing the auxiliary covariate information, compared with alternative sampling schemes. We illustrate the proposed method by analyzing a data set from an environmental epidemiologic study.

14.
15.
The existence of spiteful behaviors remains controversial. Spiteful behaviors are those that are harmful to both the actor and the recipient, and they represent one of the four fundamental types of social behavior (alongside selfishness, altruism, and mutual benefit). It has generally been assumed that the conditions required for spite to evolve are too restrictive, and so spite is unlikely to be important. This idea has been challenged in recent years, with the realization that localized competition can relax the conditions required for spite to evolve. Here we develop a theoretical model for a prime candidate for a spiteful behavior, the production of the sterile soldier caste in polyembryonic wasps. Our results show that (a) the biology of these soldiers is consistent with their main role being to mediate conflict over the sex ratio and not to defend against competitors and (b) greater conflict will occur in more outbred populations. We also show that the production of the sterile soldier caste can be classed as a spiteful behavior but that, to an extent, this is merely a semantic choice, and other interpretations such as altruism or indirect altruism are valid. However, the spite interpretation is useful in that it can lead to a more natural interpretation of relatedness and facilitate the classification of behaviors in a way that emphasizes biologically interesting differences that can be empirically tested.

16.
Cooperative behavior, where one individual incurs a cost to help another, is a widespread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one- and two-state automata. We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance it dominates is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium dominated by Forgiver. Our results show that although forgiving might incur a short-term loss, it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.
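A hedged Python sketch of the setting described above: an alternating, donation-style Prisoner's Dilemma with implementation noise, comparing ALLD, Grim, and one possible reading of the Forgiver strategy. The payoff values, noise rate, and the exact Forgiver automaton are assumptions for illustration, not the paper's specification.

```python
import random

# Assumed parameters for a donation-style alternating PD:
# a cooperative move costs the actor C and gives the opponent B.
B, C, EPS = 3.0, 1.0, 0.05   # benefit, cost, probability noise flips a move

def alld(opp_last, state):
    """Always defect."""
    return "D", state

def grim(opp_last, state):
    """Cooperate until the opponent defects once, then defect forever."""
    if opp_last == "D":
        state = 1
    return ("D" if state == 1 else "C"), state

def forgiver(opp_last, state):
    """One reading of Forgiver: cooperate after a C; defect exactly once
    after a D, then try to re-establish cooperation regardless."""
    if state == 1:                 # punished on the previous move: forgive now
        return "C", 0
    if opp_last == "D":
        return "D", 1
    return "C", 0

def play(strat_a, strat_b, rounds=20000):
    """Alternating game: players move in turn; each move may be flipped by noise."""
    payoff = {"A": 0.0, "B": 0.0}
    state = {"A": 0, "B": 0}
    last = {"A": "C", "B": "C"}    # assume a cooperative opening impression
    for t in range(rounds):
        mover, other = ("A", "B") if t % 2 == 0 else ("B", "A")
        strat = strat_a if mover == "A" else strat_b
        move, state[mover] = strat(last[other], state[mover])
        if random.random() < EPS:
            move = "C" if move == "D" else "D"
        if move == "C":
            payoff[mover] -= C
            payoff[other] += B
        last[mover] = move
    return payoff["A"] / rounds, payoff["B"] / rounds   # average payoff per round

random.seed(0)
for name, strat in [("ALLD", alld), ("Grim", grim), ("Forgiver", forgiver)]:
    print(f"{name} vs Forgiver:", play(strat, forgiver))
```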

17.
Mathew MD, Mathew ND, Ebert PR, PLoS ONE, 2012, 7(3): e33483

Background

Four main phenotypes are assessed in whole-organism studies of Caenorhabditis elegans: mortality, movement, fecundity, and size. Procedures have been developed that focus on the digital analysis of some, but not all, of these phenotypes, and they may be limited by expense and low throughput. We have developed WormScan, an automated image acquisition system that allows quantitative analysis of each of these four phenotypes on standard NGM plates seeded with E. coli. This system is very easy to implement and has the capacity to be used in high-throughput analysis.

Methodology/Principal Findings

Our system employs a readily available consumer grade flatbed scanner. The method uses light stimulus from the scanner rather than physical stimulus to induce movement. With two sequential scans it is possible to quantify the induced phototactic response. To demonstrate the utility of the method, we measured the phenotypic response of C. elegans to phosphine gas exposure. We found that stimulation of movement by the light of the scanner was equivalent to physical stimulation for the determination of mortality. WormScan also provided a quantitative assessment of health for the survivors. Habituation from light stimulation of continuous scans was similar to habituation caused by physical stimulus.

Conclusions/Significance

There are existing systems for the automated phenotypic data collection of C. elegans. The specific advantages of our method over existing systems are high-throughput assessment of a greater range of phenotypic endpoints including determination of mortality and quantification of the mobility of survivors. Our system is also inexpensive and very easy to implement. Even though we have focused on demonstrating the usefulness of WormScan in toxicology, it can be used in a wide range of additional C. elegans studies including lifespan determination, development, pathology and behavior. Moreover, we have even adapted the method to study other species of similar dimensions.

18.
Convergence, i.e., similarity between organisms that is not the direct result of shared phylogenetic history (and that may instead result from independent adaptations to similar environments), is a fundamental issue that lies at the interface of systematics and evolutionary biology. Although convergence is often cited as an important problem in morphological phylogenetics, there have been few well-documented examples of strongly supported and misleading phylogenetic estimates that result from adaptive convergence in morphology. In this article, we propose criteria that can be used to infer whether or not a phylogenetic analysis has been misled by convergence. We then apply these criteria in a study of central Texas cave salamanders (genus Eurycea). Morphological characters (apparently related to cave-dwelling habitat use) support a clade uniting the species E. rathbuni and E. tridentifera, whereas mitochondrial DNA sequences and allozyme data show that these two species are not closely related. We suggest that a likely explanation for the paucity of examples of strongly misleading morphological convergence is that the conditions under which adaptive convergence is most likely to produce strongly misleading results are limited. Specifically, convergence is most likely to be problematic in groups (such as the central Texas Eurycea) in which most species are morphologically very similar and some of the species have invaded and adapted to a novel selective environment.

19.
20.
The temporal durations between events often exert a strong influence over behavior. The details of this influence have been extensively characterized in behavioral experiments in different animal species. A remarkable feature of the data collected in these experiments is that they are often time-scale invariant. This means that response measurements obtained under intervals of different durations coincide when plotted as functions of relative time. Here we describe a biologically plausible model of an interval timing device and show that it is consistent with time-scale invariant behavior over a substantial range of interval durations. The model consists of a set of bistable units that switch from one state to the other at random times. We first use an abstract formulation of the model to derive exact expressions for some key quantities and to demonstrate time-scale invariance for any range of interval durations. We then show how the model could be implemented in the nervous system through a generic and biologically plausible mechanism. In particular, we show that any system that can display noise-driven transitions from one stable state to another can be used to implement the timing device. Our work demonstrates that a biologically plausible model can qualitatively account for a large body of data and thus provides a link between the biology and behavior of interval timing.
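A toy Python version of the general idea, not the authors' specific implementation: each bistable unit flips at an exponentially distributed random time, and the clock reads out when a fixed fraction of units has flipped. Rescaling the switching rate rescales the mean readout but leaves its coefficient of variation unchanged, which is the time-scale-invariance property; the rates and unit counts below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def readout_times(rate, n_units=500, frac=0.5, trials=2000):
    """Time at which a fixed fraction of bistable units has switched.

    Each unit flips at an exponentially distributed random time with the
    given rate; the 'clock' reads out when frac of the units have flipped.
    """
    switch_times = rng.exponential(1.0 / rate, size=(trials, n_units))
    k = int(frac * n_units)
    return np.sort(switch_times, axis=1)[:, k - 1]

for rate in (0.5, 1.0, 2.0):
    t = readout_times(rate)
    # mean scales with 1/rate, but the CV (sd/mean) stays constant:
    # the hallmark of time-scale invariance in this toy version.
    print(f"rate={rate}: mean={t.mean():.2f}, cv={t.std() / t.mean():.3f}")
```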
