Similar Articles
 20 similar articles found
1.
I. Introduction. In the analysis of survival data expressed as survival times, the defining feature is that the data are incomplete: not every observation is an exact survival time, since the data comprise both death (event) times and censored times. Studying the distribution of observed survival data, i.e., the survival distribution, allows one not only to investigate the survival patterns of patients but also to compare the effects of different treatments for a disease, including drugs. Among survival distributions, one of the most widely used and effective is the Weibull distribution. The physical model from which this distribution is derived can be interpreted as follows: whenever a local damage or failure causes the loss of overall function, the failure time (i.e., lifetime) of the whole follows a Weibull distribution, just as the lifetime of a chain is the lifetime of its weakest link; this model therefore broadly fits the survival-death process.
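The weakest-link model described above can be checked numerically: if each link's lifetime has a CDF behaving like t^k near zero, the minimum over many links is approximately Weibull with shape k. A minimal simulation sketch (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weakest-link model: a chain fails when its weakest of m links fails.
# If each link's lifetime has CDF ~ t^k near zero, the chain lifetime
# (the minimum over links) is approximately Weibull with shape k.
m, k, n_chains = 200, 2.0, 20000
# Link lifetimes with F(t) = t^k on (0,1): inverse transform U**(1/k).
links = rng.random((n_chains, m)) ** (1.0 / k)
chain_life = links.min(axis=1)

# Estimate the Weibull shape from the empirical survival function via
# the linearization log(-log S(t)) = k*log t - k*log(scale).
t = np.sort(chain_life)
surv = 1.0 - (np.arange(1, n_chains + 1) - 0.5) / n_chains
mask = (surv > 0.01) & (surv < 0.99)
slope, _ = np.polyfit(np.log(t[mask]), np.log(-np.log(surv[mask])), 1)
print(round(slope, 2))  # close to the link exponent k = 2
```

The recovered shape matches the exponent governing the links' behavior near zero, which is the content of the weakest-link (extreme-value) argument.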

2.
For a more general structural model, this paper gives an algorithm for the approximate distribution variance of a commonly used instrumental-variables estimator of the parameters, together with an example in which the unknown true value x follows an exponential distribution. The algorithm has application value for exploring statistical regularities in the biological sciences.

3.
The F distribution and statistical inference for the total number of species
Using two-point (Bernoulli) and geometric distribution models, a point estimator of the total number of species N is given and its properties are discussed. Based on the relationship of the F distribution to the binomial and geometric distributions, an interval estimate and test statistics for N are obtained.

4.
Detecting QTL systems and estimating their genetic effects using DH or RIL populations
章元明, 盖钧镒. 《遗传学报》(Acta Genetica Sinica), 2000, 27(7): 634-640
Segregation analysis of quantitative traits with low heritability, such as crop yield, using DH or RIL populations combined with a randomized block design with within-replicate grouping can improve the precision of genetic analysis. Based on mixed-distribution theory, a segregation analysis method is developed for identifying mixed genetic models of quantitative traits from replicated DH or RIL experiment data, in particular the model of two linked major genes plus polygenes. The method can identify the genetic model of a quantitative trait and the mode of action of the major genes, estimate the genetic effects and genetic variances of the major genes and polygenes, and, when the two major genes are linked, estimate their recombination fraction. A worked example illustrates the method.

5.
刘定富. 《遗传》(Hereditas, Beijing), 1992, 14(3): 10-13
This paper derives the statistical-genetic expectation of the interaction variance component in the analysis of variance of the triple test cross design (the L1, L2 and L3 families, singly or combined). With this expectation, the additive genetic component D and the dominance genetic component H can be estimated from a single analysis of variance in which all triple test cross families are utilized, overcoming two defects of existing triple test cross analyses: that some families are left unused when estimating D or H, and that, in the complete design, D and H are estimated from two analyses of variance based on data from different families. Some alternative estimators of the statistic F used to determine the direction of dominance are also given.

6.
Using seven indices, including the variance/mean ratio, the negative binomial parameter, the index of clumping, mean crowding, the patchiness index and the diffusion coefficient, together with the two-term local quadrat variance method, the spatial distribution patterns and interspecific association of seedling populations of Populus euphratica and Tamarix ramosissima, the constructive species of the desert riparian vegetation in the middle and lower reaches of the Tarim River, were studied. The results show that seedlings of both species are significantly aggregated, with a clump size of 16 m2. The nature and degree of the association between the two species were analyzed comprehensively using the 2×2 contingency table χ2 statistic, association coefficients and percentage co-occurrence. The two seedling populations show a strong positive association; the two populations are currently in a stable distribution pattern, coexisting and occupying a common niche.

7.
高猛. 《生态学报》(Acta Ecologica Sinica), 2016, 36(14): 4406-4414
Nearest-neighbor methods are an effective class of techniques for analyzing the spatial distribution patterns of plants; the probability distribution model of neighbor distances, which describes their statistical characteristics, is among the most commonly used. For aggregated distribution patterns, however, the probability model of individual-to-individual neighbor distances has a complicated expression and its parameter estimation is computationally demanding. Based on the expectation and variance of this model, a simplified parameter-estimation approach is proposed, with a genetic algorithm used for the optimization; the results show that the genetic algorithm estimates the model's two parameters effectively. The model was then fitted to spatial distribution data for three cool-temperate tree species from southern Vancouver Island, Canada. It fits the neighbor-distance distributions of Douglas fir (P. menziesii) and western hemlock (T. heterophylla) well, but the fit for western redcedar (T. plicata) is poor because of its highly aggregated, clumped distribution. Douglas fir is approximately randomly distributed in the plot and its spatial aggregation parameter depends only weakly on spatial scale, whereas the aggregation parameters of western redcedar and western hemlock are scale-dependent, increasing with the order of the neighbor distance. Finally, the advantages and limitations of the model and of the parameter-estimation method are discussed.

8.
For the problem of testing the significance of sequence motifs in bioinformatics, a Bayesian hypothesis-testing method based on the maximum-likelihood criterion is proposed. The significance test for a motif is recast as a goodness-of-fit test for a multinomial distribution: a Dirichlet distribution is chosen as the prior for the multinomial parameters, and the Newton-Raphson algorithm is used to estimate the Dirichlet hyperparameters so as to maximize the predictive distribution of the data. Bayes' theorem then yields a Bayes factor for model selection, which is used to assess the statistical significance of the motif. This approach avoids the difficulty, in traditional multinomial tests, of constructing a test statistic and deriving its exact null distribution. Experiments on 107 transcription-factor binding sites from the JASPAR database and 100 sets of randomly simulated data, using the Pearson product-moment correlation coefficient as one criterion of test quality, show that the results are better than those of several traditional motif-testing methods.
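The Dirichlet-multinomial marginal likelihood at the heart of such a test has a closed form, so a Bayes factor against a uniform background model can be computed directly. A minimal sketch with hypothetical counts and a fixed symmetric prior (not the paper's hyperparameter-optimized version):

```python
from math import lgamma, log

def log_dirichlet_multinomial(counts, alpha):
    """Log marginal likelihood of category counts under a
    Dirichlet(alpha) prior on the category probabilities
    (multinomial coefficient omitted; it cancels in the Bayes factor)."""
    n, a0 = sum(counts), sum(alpha)
    out = lgamma(a0) - lgamma(n + a0)
    for c, a in zip(counts, alpha):
        out += lgamma(c + a) - lgamma(a)
    return out

def log_uniform_background(counts):
    """Log likelihood under the null of equal category probabilities."""
    n, k = sum(counts), len(counts)
    return -n * log(k)

# Hypothetical nucleotide counts at one motif column (A, C, G, T).
counts = [40, 3, 2, 5]
alpha = [1.0, 1.0, 1.0, 1.0]   # symmetric Dirichlet prior, an assumption
log_bf = log_dirichlet_multinomial(counts, alpha) - log_uniform_background(counts)
print(log_bf > 0)  # strong support for a position-specific model
```

A large positive log Bayes factor indicates the column is far from the uniform background, which is the basic signal a motif-significance test quantifies.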

9.
陈瑶生. 《遗传学报》(Acta Genetica Sinica), 1991, 18(3): 219-227
For genetic-parameter estimation in mixed families, this paper assumes that the sire and dam variance components are equal and, by analyzing the composition of the expected mean squares in the analysis of variance, proposes a new heritability estimation method, together with approximate estimators for certain special cases. A worked example comparing several heritability estimators shows that the proposed method agrees most closely with the full-sib component estimate and has the smallest standard error of heritability; the approximate estimators also perform well. For all methods, the more unbalanced the data, the larger the differences. The proposed method can to some extent compensate for the drawback of full-sib analysis when the sire and dam variance components of real data differ greatly, and is practically feasible. Moreover, because it solves a two-way analysis-of-variance problem with a one-way analysis of variance, computation is simpler and the mean relatedness coefficient of the mixed families need not be calculated.

10.
Shrinkage estimation methods for mapping multiple QTL
章元明. 《遗传学报》(Acta Genetica Sinica), 2006, 33(10): 861-869
This paper reviews shrinkage estimation methods for multiple-marker analysis and multiple-QTL mapping. For the former, Xu (Genetics, 2003, 163: 789-801) first proposed a Bayesian shrinkage method. Its key idea is to give each effect its own variance parameter, with that variance in turn following a prior distribution so that it can be estimated from the data. In this way the genetic effects of a large number of marker loci can be estimated simultaneously, even when the effects of most markers are negligible. For epistatic genetic models, however, the computation time remains excessive; the author therefore embedded this idea in maximum likelihood and proposed a penalized maximum-likelihood method. Simulation studies show that this method can handle linear genetic models in which the number of variables is roughly ten times the sample size. For the latter, Bayesian shrinkage methods based on fixed and on variable intervals are described in detail: the fixed-interval method handles marker data of moderate density, while the variable-interval method can analyze high-density marker data and even epistatic models. For detecting epistasis, both the penalized maximum-likelihood method and the variable-interval Bayesian shrinkage method are available. Shrinkage estimation should also prove valuable in future studies of eQTL and QTN mapping and of gene-interaction networks.

11.
Age at death is an important indicator of the health status and socioeconomic conditions of a population. Based on the excavation reports of human skeletal remains from nine Dawenkou-culture cemeteries in the Haidai region, this paper uses quantitative statistical methods to test the characteristics of the age-at-death distribution. The distribution of age at death during the Dawenkou period in this region is found to be approximately normal. Finally, possible causes of the low ages at death are discussed, and the mathematical meaning of this probability distribution and its prospects for application in prehistoric demography are given.

12.

Background

In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study.

Methods

A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured.

Results

All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 with the Weibull distribution, the Guyot et al. method was almost as accurate as the Hoyle and Henley method, whereas the traditional methods showed greater bias. When a lognormal distribution was used, the Guyot et al. method produced noticeably less bias and more accurate uncertainty estimates than the Hoyle and Henley method.

Conclusions

The traditional methods should not be preferred because of their substantial overestimation. When a Weibull distribution was used for the fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; when a lognormal distribution was used, the Guyot et al. method was less biased than the Hoyle and Henley method.
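The least squares idea behind such reconstructions can be sketched: points digitized from a published Weibull survival curve linearize as log(-log S) = shape·log t - shape·log scale, and mean survival then follows from the fitted parameters. All numbers below are hypothetical:

```python
import numpy as np
from math import gamma

# Digitized (time, survival probability) points read off a published
# Kaplan-Meier curve (hypothetical values, for illustration only).
times = np.array([6.0, 12.0, 18.0, 24.0, 36.0])
surv = np.array([0.80, 0.62, 0.48, 0.37, 0.22])

# Weibull survival S(t) = exp(-(t/scale)**shape) linearizes as
# log(-log S) = shape*log(t) - shape*log(scale), so ordinary least
# squares recovers both parameters from the published points.
y = np.log(-np.log(surv))
x = np.log(times)
shape, intercept = np.polyfit(x, y, 1)
scale = np.exp(-intercept / shape)

# Mean survival of a Weibull: scale * Gamma(1 + 1/shape).
mean_survival = scale * gamma(1.0 + 1.0 / shape)
print(round(shape, 2), round(scale, 1), round(mean_survival, 1))
```

This is the core of the traditional least squares approach reviewed above; the more recent methods differ mainly in how they reconstruct pseudo individual-level data and propagate uncertainty.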

13.
In order to obtain reliability information for a white organic light-emitting diode (OLED), two constant-stress tests and one step-stress test were conducted with increased working current as the stress. The Weibull function was applied to describe the OLED life distribution, and maximum likelihood estimation (MLE), with its iterative flow chart, was used to calculate the shape and scale parameters. Furthermore, the accelerated life equation was determined using the least squares method, a Kolmogorov-Smirnov test was performed to assess whether white OLED life follows a Weibull distribution, and self-developed software was used to predict the average and median lifetimes of the OLED. The numerical results indicate that white OLED life conforms to a Weibull distribution and that the accelerated life equation satisfies the inverse power law. The estimated life of a white OLED may provide significant guidance for its manufacturers and customers. Copyright © 2014 John Wiley & Sons, Ltd.
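The Weibull MLE and the Kolmogorov-Smirnov check used in such life tests can be sketched without special software: the shape parameter solves a one-dimensional score equation (found here by bisection rather than the paper's Newton-type iteration), and the scale parameter then has a closed form. The data below are simulated stand-ins, not OLED measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for failure times (hours): Weibull with
# shape 2.0 and scale 1000 (assumed values).
t = rng.weibull(2.0, size=500) * 1000.0
logt = np.log(t)

# The MLE of the Weibull shape k satisfies
#   1/k - ( sum(t^k log t)/sum(t^k) - mean(log t) ) = 0;
# the left side decreases monotonically in k, so bisection finds it.
def score(k):
    tk = t ** k
    return 1.0 / k - (np.dot(tk, logt) / tk.sum() - logt.mean())

lo, hi = 0.1, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if score(mid) > 0:
        lo = mid
    else:
        hi = mid
k = 0.5 * (lo + hi)
scale = np.mean(t ** k) ** (1.0 / k)   # closed-form scale given k

# Kolmogorov-Smirnov distance between empirical and fitted CDFs,
# as a rough goodness-of-fit check.
ts = np.sort(t)
ecdf = np.arange(1, ts.size + 1) / ts.size
ks = np.max(np.abs(ecdf - (1.0 - np.exp(-(ts / scale) ** k))))
print(round(k, 2), round(scale, 0), round(ks, 3))
```

A small KS distance relative to the usual critical values is consistent with the Weibull assumption, mirroring the paper's goodness-of-fit step.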

14.
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation. The log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded, achieves the minimum variance for an unbiased estimator, and we can compute calibrated estimates of the variance. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy-to-implement method for log-likelihood evaluation when exact techniques are not available.
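The IBS estimator itself is only a few lines: draw from the simulator until a sample matches the observation, then return minus the partial harmonic sum of the number of draws. A toy check of its unbiasedness on a biased coin (all values illustrative):

```python
import math
import random

random.seed(0)

def ibs_log_likelihood(observation, simulate):
    """Inverse binomial sampling: draw from the simulator until a
    sample matches the observation; with K total draws, the unbiased
    log-likelihood estimate is -(1 + 1/2 + ... + 1/(K-1))."""
    k = 1
    while simulate() != observation:
        k += 1
    return -sum(1.0 / j for j in range(1, k))

# Toy simulator: a biased coin with P(heads) = 0.25, so the true
# log-likelihood of observing heads is log(0.25).
p = 0.25
sim = lambda: random.random() < p
estimates = [ibs_log_likelihood(True, sim) for _ in range(20000)]
mean_est = sum(estimates) / len(estimates)
print(round(mean_est, 2), round(math.log(p), 2))  # means agree closely
```

Averaging many IBS estimates recovers log p with no systematic bias, which is the property that distinguishes IBS from naive fixed-sample frequency estimates of the log-likelihood.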

15.
In combining several tests of significance the individual test statistics are allowed to be stochastically dependent. By choosing the weighted inverse normal method for the combination, the dependency of the original test statistics is then characterized by a correlation of the transformed statistics. For this correlation a confidence region, an unbiased estimator and an unbiased estimate of its variance are derived. The combined test statistic is extended to include the case of possibly dependent original test statistics. Simulation studies show the performance of the actual significance level.
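For independent one-sided p-values, the weighted inverse normal combination reduces to a weighted Stouffer statistic; the extension described above adjusts the variance in the denominator for the correlation of the transformed statistics. A sketch of the independent-case baseline (no external libraries; the normal quantile is computed by bisection):

```python
from math import sqrt, erfc

def norm_ppf(q):
    """Inverse standard normal CDF via bisection (Phi(z) = 0.5*erfc(-z/sqrt(2)))."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * erfc(-mid / sqrt(2.0)) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def weighted_inverse_normal(p_values, weights):
    """Combine one-sided p-values: z_i = Phi^{-1}(1 - p_i), then
    Z = sum(w_i z_i) / sqrt(sum(w_i^2)); under independence Z ~ N(0,1).
    (With dependence, the denominator must also include covariance terms.)"""
    zs = [norm_ppf(1.0 - pv) for pv in p_values]
    num = sum(w * z for w, z in zip(weights, zs))
    den = sqrt(sum(w * w for w in weights))
    z = num / den
    combined_p = 0.5 * erfc(z / sqrt(2.0))   # one-sided upper-tail p-value
    return z, combined_p

z, p = weighted_inverse_normal([0.03, 0.20, 0.01], [1.0, 1.0, 1.0])
print(z > 0 and p < 0.05)
```

With equal weights this is the classical Stouffer method; unequal weights let larger or more informative studies contribute more to the combined statistic.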

16.
Fracture strength of pharmaceutical compacts varies even for nominally identical samples, which directly affects compaction, comminution, and tablet dosage forms. However, the relationships between porosity and mechanical behavior of compacts are not clear. Here, the effects of porosity on fracture strength and fracture statistics of microcrystalline cellulose compacts were investigated through diametral compression tests. The Weibull modulus, a key parameter in Weibull statistics, was observed to decrease with increasing porosity from 17 to 56 vol.%, based on eight sets of compacts at different porosity levels, each set containing ∼50 samples, a total of 407 tests. A normal distribution fits the fracture data better for porosity below 20 vol.%, whereas a Weibull distribution is a better fit in the limit of highest porosity. Weibull moduli from 840 unique finite element simulations of isotropic porous materials were compared to the experimental Weibull moduli from this research and to results on various pharmaceutical materials. Deviations from Weibull statistics are observed. The effect of porosity on fracture strength can be described by a recently proposed micromechanics-based formula. Key words: diametral compression test, finite element simulations, normal distribution, reliability, Weibull modulus

17.
The traditional variance components approach for quantitative trait locus (QTL) linkage analysis is sensitive to violations of normality and fails for selected sampling schemes. Recently, a number of new methods have been developed for QTL mapping in humans. Most of the new methods are based on score statistics or regression-based statistics and are expected to be relatively robust to non-normality of the trait distribution and also to selected sampling, at least in terms of type I error. Whereas the theoretical development of these statistics is more or less complete, some practical issues concerning their implementation still need to be addressed. Here we study some of these issues such as the choice of denominator variance estimates, weighting of pedigrees, effect of parameter misspecification, effect of non-normality of the trait distribution, and effect of incorporating dominance. We present a comprehensive discussion of the theoretical properties of various denominator variance estimates and of the weighting issue and then perform simulation studies for nuclear families to compare the methods in terms of power and robustness. Based on our analytical and simulation results, we provide general guidelines regarding the choice of appropriate QTL mapping statistics in practical situations.

18.
Eyre-Walker A, Woolfit M, Phelps T. Genetics, 2006, 173(2): 891-900
The distribution of fitness effects of new mutations is a fundamental parameter in genetics. Here we present a new method by which the distribution can be estimated. The method is fairly robust to changes in population size and admixture, and it can be corrected for any residual effects if a model of the demography is available. We apply the method to extensively sampled single-nucleotide polymorphism data from humans and estimate the distribution of fitness effects for amino acid changing mutations. We show that a gamma distribution with a shape parameter of 0.23 provides a good fit to the data and we estimate that >50% of mutations are likely to have mild effects, such that they reduce fitness by between one one-thousandth and one-tenth. We also infer that <15% of new mutations are likely to have strongly deleterious effects. We estimate that on average a nonsynonymous mutation reduces fitness by a few percent and that the average strength of selection acting against a nonsynonymous polymorphism is approximately 9 × 10^(-5). We argue that the relaxation of natural selection due to modern medicine and reduced variance in family size is not likely to lead to a rapid decline in genetic quality, but that it will be very difficult to locate most of the genes involved in complex genetic diseases.

19.
In this article we study some properties of a new family of distributions, namely the Exponentiated Exponential distribution, discussed in Gupta, Gupta, and Gupta (1998). The Exponentiated Exponential family has two parameters (scale and shape) similar to a Weibull or a gamma family. It is observed that many properties of this new family are quite similar to those of a Weibull or a gamma family, therefore this distribution can be used as a possible alternative to a Weibull or a gamma distribution. We present two real life data sets, where it is observed that in one data set the exponentiated exponential distribution has a better fit than the Weibull or gamma distribution, and in the other data set the Weibull has a better fit than the exponentiated exponential or gamma distribution. Some numerical experiments are performed to see how the maximum likelihood estimators and their asymptotic results work for finite sample sizes.
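The exponentiated exponential CDF F(x) = (1 - e^(-λx))^α inverts in closed form, so sampling and moment checks are straightforward. A sketch with assumed parameter values:

```python
import math
import random

random.seed(0)

# Exponentiated exponential CDF: F(x) = (1 - exp(-lam*x))**alpha, x > 0.
# Inverse-transform sampling: x = -log(1 - u**(1/alpha)) / lam.
alpha, lam = 2.0, 1.5   # assumed shape and rate, for illustration

def ee_sample():
    u = random.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam

xs = [ee_sample() for _ in range(100000)]
sample_mean = sum(xs) / len(xs)

# For integer alpha the mean is (1 + 1/2 + ... + 1/alpha) / lam
# (from E[X] = (psi(alpha+1) - psi(1)) / lam); here (1 + 1/2)/1.5 = 1.
theoretical_mean = (1.0 + 0.5) / lam
print(round(sample_mean, 2), theoretical_mean)
```

The closed-form quantile function is one of the conveniences this family shares with the Weibull but not with the gamma distribution, whose CDF has no elementary inverse.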

20.
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of glucose production levels as the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
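The role of λ as a characteristic time can be seen directly from a Weibull-type saccharification model of the form y(t) = y_max(1 - exp(-(t/λ)^n)): at t = λ the yield is always 1 - 1/e ≈ 63.2% of y_max, regardless of n, so a smaller λ indicates a faster-performing system. A sketch with hypothetical parameter values (units and numbers assumed):

```python
import math

# Weibull-type saccharification model:
#   y(t) = y_max * (1 - exp(-(t/lam)**n))
# lam is the characteristic time: y(lam) = y_max*(1 - 1/e), i.e. the
# time at which ~63.2% of the final yield is reached.
def yield_at(t, y_max, lam, n):
    return y_max * (1.0 - math.exp(-(t / lam) ** n))

# Two hypothetical enzyme systems with equal final yield (g/L, hours assumed):
fast = dict(y_max=50.0, lam=12.0, n=1.1)
slow = dict(y_max=50.0, lam=30.0, n=1.1)

frac_at_lam = yield_at(fast["lam"], **fast) / fast["y_max"]
print(round(frac_at_lam, 3))   # 1 - 1/e ≈ 0.632 by construction
print(yield_at(24.0, **fast) > yield_at(24.0, **slow))   # smaller lam leads
```

Comparing λ values therefore ranks saccharification systems by overall speed without requiring the full time course to be re-examined.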

