1.
In a simple epidemic, the only transition in the population is from susceptible to infected, and the total population size is fixed for all time. This paper investigates the effect of random initial conditions on the deterministic model of the simple epidemic. By assuming a Beta distribution on the initial proportion of susceptibles, we define a distribution that describes the proportion of susceptibles in a population at any time during an epidemic. The mean and variance of this distribution are derived as hypergeometric functions, and the behavior of these functions is investigated. Lastly, we define a distribution describing the time until a given proportion of the population remains susceptible. A method for finding the quantiles of this distribution is developed and used to make confidence statements about the time until a given proportion of the population is susceptible.
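The construction is easy to illustrate numerically, because the deterministic simple epidemic has a closed-form solution. Below is a minimal sketch, not the paper's code: the initial susceptible proportion is drawn from an assumed Beta(8, 2) prior, and the induced mean and variance of the susceptible proportion at time t are approximated by Monte Carlo (the paper derives these moments exactly as hypergeometric functions). The rate parameter and the prior parameters are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): the deterministic simple epidemic
# ds/dt = -beta * s * (1 - s) has the closed-form solution
#   s(t) = s0 / (s0 + (1 - s0) * exp(beta * t)),
# so a Beta prior on s0 induces a distribution on s(t). Its mean and
# variance are approximated here by Monte Carlo; beta, a and b are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
beta = 1.5        # assumed contact-rate parameter
a, b = 8.0, 2.0   # assumed Beta(a, b) prior on the initial susceptible proportion

def susceptible(t, s0):
    """Proportion still susceptible at time t in the deterministic simple epidemic."""
    return s0 / (s0 + (1.0 - s0) * np.exp(beta * t))

s0 = rng.beta(a, b, size=100_000)
for t in (0.5, 1.0, 2.0):
    st = susceptible(t, s0)
    print(f"t = {t}: mean = {st.mean():.4f}, variance = {st.var():.6f}")
```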
2.
Efficient bootstrap simulation
3.
A paper (Amirnovin R, J Mol Evol 44:473–476, 1997) seems to undermine the validity of the coevolution theory of genetic code origin by casting doubt on the connection between the biosynthetic relationships among amino acids and the organization of the genetic code, at a time when the literature on the topic takes this connection for granted. Because a few papers cite this work as evidence against the coevolution theory, and in order to settle the question, we have reanalyzed the statistical bases on which the theory is founded. We reach the following conclusions: (1) the methods used in the above-cited paper contain certain mistakes, and (2) the statistical foundations of the coevolution theory are extremely robust. We do this by critically appraising Amirnovin's paper and by suggesting an alternative method based on the generation of random codes which, together with the method reported in the literature, allows us to evaluate the significance, within the genetic code, of different sets of amino acid pairs linked by biosynthetic relationships. In particular, using this method on a set of amino acid pairs reflecting the expectations of the coevolution theory, we show that the presence of this set in the genetic code would arise purely by chance with a probability of 6 × 10⁻⁵. This observation provides particularly strong support for the coevolution theory.
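To make the random-code method concrete, here is a hedged sketch of such a permutation test: amino acids are shuffled over the codon blocks of the standard code, and for each random code we count how many biosynthetically related pairs land on codons differing at a single position. The pair list and the one-position adjacency criterion are illustrative assumptions, not the authors' exact set or statistic.

```python
# Hedged sketch of a random-code permutation test: the statistic is the
# number of precursor-product pairs whose codon blocks contain at least one
# pair of codons differing at a single position; random codes permute the
# 20 amino acids over the blocks of the standard code (stops kept fixed).
import itertools
import random
from collections import defaultdict

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in itertools.product(BASES, repeat=3)]
code = dict(zip(CODONS, AA))  # standard genetic code, one-letter amino acids

def blocks(c):
    """Codon block of each amino acid (stop codons excluded)."""
    d = defaultdict(list)
    for codon, aa in c.items():
        if aa != "*":
            d[aa].append(codon)
    return d

def connected(c1, c2):
    """True if two codons differ at exactly one position."""
    return sum(x != y for x, y in zip(c1, c2)) == 1

def n_connected_pairs(c, pairs):
    b = blocks(c)
    return sum(
        any(connected(x, y) for x in b[a1] for y in b[a2]) for a1, a2 in pairs
    )

# Illustrative precursor-product pairs (one-letter codes); the paper's set differs.
PAIRS = [("E", "Q"), ("D", "N"), ("S", "G"), ("V", "L"), ("T", "I"), ("F", "Y")]
observed = n_connected_pairs(code, PAIRS)

rng = random.Random(0)
aas = list(blocks(code).keys())
hits, N = 0, 10_000
for _ in range(N):
    perm = aas[:]
    rng.shuffle(perm)
    relabel = dict(zip(aas, perm))
    rand_code = {c: (relabel[a] if a != "*" else "*") for c, a in code.items()}
    if n_connected_pairs(rand_code, PAIRS) >= observed:
        hits += 1
print(f"observed connected pairs: {observed}, permutation p ~ {hits / N:.4f}")
```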
4.
The first objective of this paper is to define a new measure of fidelity of a species to a vegetation unit, called u. The value of u is derived from the normal approximation to the binomial or the hypergeometric distribution. It is shown that the properties of u meet the requirements for a fidelity measure in vegetation science, i.e. (1) it reflects differences between a species' relative frequency inside a certain vegetation unit and its relative frequency in the remainder of the data set, and (2) it increases with increasing size of the data set. Additionally, (3) u depends on the proportion of the vegetation unit's size to the size of the whole data set. The second objective is to present a method for using the value of u to find species groups in large databases and to define vegetation units. A species group is defined as comprising the species that show the highest value of u, among all species in the data set, with regard to the vegetation unit defined by that species group. The vegetation unit is defined as comprising all relevés that include a minimum number of the species in the species group. This minimum number is derived statistically in such a way that fewer relevés always belong to a species group than would be expected if the differential species were distributed randomly among the relevés. An iterative algorithm is described for detecting species groups in databases: starting from an initial species group, the species composition of the group and the vegetation unit defined by it are mutually optimized. With this algorithm, species groups are formed in a data set independently of one another; subsequently, these species groups can be combined in such a way that they are suited to define commonly known syntaxa a posteriori.
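A minimal sketch of how such a u value can be computed under the hypergeometric model follows; the variable names are ours, not the paper's notation. For a species present in n of the N relevés overall and observed n_in times inside a vegetation unit of N_unit relevés, u is the normal-approximation z-score of n_in.

```python
# Sketch of the fidelity value u under the hypergeometric model: u is the
# z-score of the observed number of occurrences of a species inside a
# vegetation unit, relative to random placement of its occurrences among
# all relevés. Variable names are illustrative, not the paper's notation.
from math import sqrt

def u_value(n_in, n_unit, n_species, n_total):
    """Fidelity u of a species to a vegetation unit (hypergeometric variant)."""
    p = n_unit / n_total   # relative size of the vegetation unit
    mu = n_species * p     # expected occurrences inside the unit
    var = n_species * p * (1 - p) * (n_total - n_species) / (n_total - 1)
    return (n_in - mu) / sqrt(var)

# Example: a species in 40 of 1000 relevés, 25 of them inside a unit of 100 relevés.
print(f"u = {u_value(25, 100, 40, 1000):.2f}")
```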
5.
6.
We develop a Bayesian approach to sample size computations for surveys designed to provide evidence of freedom from a disease or from an infectious agent. A population is considered "disease-free" when the prevalence or probability of disease is less than some threshold value. Prior distributions are specified for diagnostic test sensitivity and specificity, and we test the null hypothesis that the prevalence is below the threshold. Sample size computations are developed using hypergeometric sampling for finite populations and binomial sampling for infinite populations. A normal approximation is also developed. Our procedures are compared with the frequentist methods of Cameron and Baldock (1998a, Preventive Veterinary Medicine 34, 1-17) using an example of foot-and-mouth disease. User-friendly programs for sample size calculation and analysis of survey data are available at http://www.epi.ucdavis.edu/diagnostictests/.
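As an illustration of the binomial (infinite-population) case, the following hedged sketch, which is not the authors' software, draws sensitivity and specificity from assumed Beta priors and finds the smallest sample size n for which the prior-averaged probability of observing n negative tests, when the true prevalence sits at the design threshold, falls below a significance level alpha. All prior parameters and the threshold are illustrative assumptions.

```python
# Hedged sketch of a Bayesian sample size computation for demonstrating
# disease freedom (binomial / infinite-population case). Se and Sp are
# drawn from assumed Beta priors; we find the smallest n such that the
# prior-averaged probability of n all-negative tests at the threshold
# prevalence drops below alpha.
import numpy as np

rng = np.random.default_rng(1)
p_star = 0.02                   # prevalence threshold defining "disease-free"
alpha = 0.05
se = rng.beta(30, 3, 50_000)    # assumed prior on test sensitivity
sp = rng.beta(60, 2, 50_000)    # assumed prior on test specificity

# Probability that a randomly sampled animal tests negative at the threshold.
p_neg = p_star * (1 - se) + (1 - p_star) * sp

n = 1
while np.mean(p_neg ** n) > alpha:
    n += 1
print(f"required sample size: n = {n}")
```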
7.
It is shown that, in the capture-recapture method, the widely used formulae of Bailey or Chapman-Seber give the most likely value for the size of the population but systematically underestimate the probability that the population is larger than any given size. Here we take a first step toward a combinatorial approach that does not suffer from this flaw: formulae are given that can be used in the closed case (no births, deaths, or migration between captures) when at least two animals have been recaptured and capture probability is homogeneous. Numerical and heuristic evidence is presented that the error incurred when using the formulae of Bailey or Chapman-Seber depends asymptotically only on the number of recaptured animals, and will not diminish if the number of captured animals becomes large while the number of recaptured animals remains constant. A result that was stated but left unproven by Darroch is proven here.
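The flaw can be seen numerically by placing the Chapman point estimate next to the full likelihood over the population size N. The sketch below is our illustration, not the paper's combinatorial treatment: it normalizes the hypergeometric likelihood over a truncated flat range of N, and the resulting tail sum gives the probability that the population exceeds a given size.

```python
# Sketch (not the paper's method): Chapman point estimate versus the
# normalized hypergeometric likelihood over N. With n1 marked animals,
# n2 captured in the second sample and m recaptured, L(N) is the
# probability of m recaptures; a flat prior over a truncated range of N
# is an illustrative choice.
from math import comb

n1, n2, m = 50, 60, 12   # marked, second sample, recaptures (example data)

chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"Chapman estimate: {chapman:.1f}")

def likelihood(N):
    # Probability of m recaptures when n2 animals are drawn from a
    # population of N containing n1 marked animals.
    return comb(n1, m) * comb(N - n1, n2 - m) / comb(N, n2)

Ns = range(n1 + n2 - m, 2000)   # N cannot be smaller than n1 + n2 - m
L = [likelihood(N) for N in Ns]
total = sum(L)
tail = sum(l for N, l in zip(Ns, L) if N > 400) / total
print(f"P(N > 400) under the normalized likelihood: {tail:.3f}")
```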
8.
John Graunt (1662) was the first to estimate the ratio y/x, where y represents the total population and x the known total number of registered births in the same areas during the preceding year. About 1765, Messance (Stephan, 1948) and Moheau (1778) published carefully prepared estimates for France based on an enumeration of the population in certain districts and on the count of births, deaths and marriages reported for the whole country; the districts from which the ratio of inhabitants to births was determined constituted only a sample. Laplace (1786) prepared similar estimates in 1802 based on a two-stage sampling plan. Hansen and Hurwitz (1943) showed that the ratio estimate (yi/xi)X of Y is unbiased when all xi's are known and the ith cluster is selected with probability proportional to size (p.p.s.). More recently, Hájek (1949), Lahiri (1951), Midzuno (1952) and Sen (1952) independently developed the selection of n clusters with probability proportional to the total of the sizes of the sample clusters, S(xi). Des Raj (1954) and Sen (1952, 1953) gave an unbiased estimator of the variance of the estimator, which is generally non-negative for samples with smaller probabilities; Rao and Vijayan (1977) gave an unbiased estimator which is non-negative for samples with larger probabilities. Hájek (1949) provided an almost unbiased estimator of the variance of the estimator. The paper discusses situations in which Hájek's variance estimator should be preferred to the Rao-Vijayan estimator, and vice versa.
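The Hansen-Hurwitz unbiasedness result is easy to verify by simulation. The following sketch is our illustration with synthetic cluster data: a single cluster is drawn with probability proportional to its known size xi, and the mean of the estimator (yi/xi)X over many draws is compared with the true total Y.

```python
# Numerical check (our illustration, not from the paper) that the
# single-draw ratio estimator (y_i / x_i) * X is unbiased for Y when
# cluster i is selected with probability proportional to its known size x_i.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(10, 100, size=30)            # known cluster sizes
y = 3.0 * x + rng.normal(0, 20, size=30)     # cluster totals, roughly proportional to x
X, Y = x.sum(), y.sum()

probs = x / X                                # p.p.s. selection probabilities
draws = rng.choice(30, size=200_000, p=probs)
estimates = (y[draws] / x[draws]) * X
print(f"true Y = {Y:.1f}, mean of estimates = {estimates.mean():.1f}")
```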
9.
A simple technique of sequential estimation is proposed for a capture-recapture census by the Petersen method. In theory, this technique makes it possible to secure automatically a required precision level for the population estimate, irrespective of the population size. Some problems concerning its practical application are discussed. This study was supported by a science research fund from the Ministry of Education.
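The classical sequential device for the Petersen method is inverse sampling: continue the second sample until a preset number of recaptures has been reached, since the relative precision of the estimate is governed mainly by the recapture count. The toy sketch below illustrates that rule; the stopping value and all other parameters are illustrative assumptions, and the paper's exact stopping rule may differ.

```python
# Toy sketch of sequential (inverse) sampling for the Petersen method:
# the second sample continues, one animal at a time and without
# replacement, until m_stop recaptures of marked animals are observed.
# All parameters are illustrative assumptions.
import random

rng = random.Random(3)
N_true, n1, m_stop = 5000, 400, 30      # true size, marked animals, stopping rule

population = [1] * n1 + [0] * (N_true - n1)   # 1 = marked animal
rng.shuffle(population)

m = n2 = 0
for animal in population:               # capture one animal at a time
    n2 += 1
    m += animal
    if m >= m_stop:
        break

petersen = n1 * n2 / m
print(f"stopped after n2 = {n2} captures; Petersen estimate = {petersen:.0f}")
```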
10.