Similar Articles
Found 20 similar articles (search time: 15 ms)
1.
Effect of marker density on the precision of QTL mapping in a granddaughter design
Wang Jing, Zhang Qin, Zhang Yuan, Acta Genetica Sinica, 2000, 27(7): 590-598
A Monte Carlo simulation was used to analyze the effect of four marker densities (marker intervals of 5 cM, 10 cM, 20 cM, and 50 cM) on the precision of QTL mapping (measured by the mean squared error, MSE) in a granddaughter design, under combinations of different levels of four factors: population structure, trait heritability, QTL effect size, and QTL position on the chromosome. The optimal marker density for QTL mapping aimed at marker-assisted selection (MAS) was also examined from an economic perspective. The results show that, in general, when the levels of the factors are low, the relative decrease in MSE with increasing marker density is also small, and vice versa.

2.
DNA typing offers a unique opportunity to identify individuals for medical and forensic purposes. Probabilistic inference regarding the chance occurrence of a match between the DNA type of an evidentiary sample and that of an accused suspect, however, requires reliable estimation of genotype and allele frequencies in the population. Although population-based data on DNA typing at several hypervariable loci are being accumulated at various laboratories, a rigorous treatment of the sample size needed for such purposes has not been made from population genetic considerations. It is shown here that the loci that are potentially most useful for forensic identification of individuals have the intrinsic property that they involve a large number of segregating alleles, and a great majority of these alleles are rare. As a consequence, because of the large number of possible genotypes at the hypervariable loci that offer the maximum potential for individualization, the sample size needed to observe all possible genotypes in a sample is large. In fact, the size is so large that even if such a huge number of individuals could be sampled, it could not be guaranteed that such a sample was drawn from a single homogeneous population. Therefore adequate estimation of genotypic probabilities must be based on allele frequencies, and the sample size needed to represent all possible alleles is far more reasonable. Further economization of sample size is possible if one wants to have representation of only the frequent alleles in the sample, so that the rare allele frequencies can be approximated by an upper bound for forensic applications.
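The combinatorial pressure the abstract describes comes from a standard count: a locus with k codominant alleles has k(k+1)/2 possible diploid genotypes, so genotype-based estimation must cover far more classes than allele-based estimation. A minimal sketch (the 30-allele figure below is illustrative, not taken from the paper):

```python
def genotype_count(n_alleles):
    """Number of distinct diploid genotypes at a locus with n_alleles
    codominant alleles: n homozygotes plus n*(n-1)/2 heterozygotes,
    i.e. n*(n+1)/2 in total."""
    return n_alleles * (n_alleles + 1) // 2

# Illustrative only: a hypervariable locus with 30 segregating alleles
# has 465 genotype classes, but only 30 allele frequencies to estimate.
g = genotype_count(30)
print(g)  # 465
```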

3.
Indentation has several advantages as a loading mode for determining constitutive behavior of soft, biological tissues. However, indentation induces a complex, spatially heterogeneous deformation field that creates analytical challenges for the calculation of constitutive parameters. As a result, investigators commonly assume small indentation depths and large sample thicknesses to simplify analysis and then restrict indentation depth and sample geometry to satisfy these assumptions. These restrictions limit experimental resolution in some fields, such as brain biomechanics. However, recent experimental evidence suggests that conventionally applied limits are in fact excessively conservative. We conducted a parametric study of indentation loading with various indenter geometries, surface interface conditions, sample compressibility, sample geometry and indentation depth to quantitatively describe the deviation from previous treatments that results from violation of the assumptions of small indentation depth and large sample thickness. We found that the classical solution was surprisingly robust to violation of the assumption of small strain but highly sensitive to violation of the assumption of large sample thickness, particularly if the indenter was cylindrical. The ramifications of these findings for design of indentation experiments are discussed and correction factors are presented to allow future investigators to account for these effects without recreating our finite element models.

4.
Polygenic risk scores have shown great promise in predicting complex disease risk and will become more accurate as training sample sizes increase. The standard approach for calculating risk scores involves linkage disequilibrium (LD)-based marker pruning and applying a p value threshold to association statistics, but this discards information and can reduce predictive accuracy. We introduce LDpred, a method that infers the posterior mean effect size of each marker by using a prior on effect sizes and LD information from an external reference panel. Theory and simulations show that LDpred outperforms the approach of pruning followed by thresholding, particularly at large sample sizes. Accordingly, predicted R² increased from 20.1% to 25.3% in a large schizophrenia dataset and from 9.8% to 12.0% in a large multiple sclerosis dataset. A similar relative improvement in accuracy was observed for three additional large disease datasets and for non-European schizophrenia samples. The advantage of LDpred over existing methods will grow as sample sizes increase.

5.
6.
S Engen, Biometrics, 1975, 31(1): 201-208
A taxonomic group will frequently have a large number of species with small abundances. When a sample is drawn at random from this group, one is therefore faced with the problem that a large proportion of the species will not be discovered. A general definition of quantitative measures of "sample coverage" is proposed, and the problem of statistical inference is considered for two special cases, (1) the actual total relative abundance of those species that are represented in the sample, and (2) their relative contribution to the information index of diversity. The analysis is based on an extended version of the negative binomial species frequency model. The results are tabulated.

7.
Wakeley J, Lessard S, Genetics, 2003, 164(3): 1043-1053
We develop predictions for the correlation of heterozygosity and for linkage disequilibrium between two loci using a simple model of population structure that includes migration among local populations, or demes. We compare the results for a sample of size two from the same deme (a single-deme sample) to those for a sample of size two from two different demes (a scattered sample). The correlation in heterozygosity for a scattered sample is surprisingly insensitive to both the migration rate and the number of demes. In contrast, the correlation in heterozygosity for a single-deme sample is sensitive to both, and the effect of an increase in the number of demes is qualitatively similar to that of a decrease in the migration rate: both increase the correlation in heterozygosity. These same conclusions hold for a commonly used measure of linkage disequilibrium (r²). We compare the predictions of the theory to genomic data from humans and show that subdivision might account for a substantial portion of the genetic associations observed within the human genome, even though migration rates among local populations of humans are relatively large. Because correlations due to subdivision rather than to physical linkage can be large even in a single-deme sample, if long-term migration has been important in shaping patterns of human polymorphism, the common practice of disease mapping using linkage disequilibrium in "isolated" local populations may be subject to error.

8.
Immunoglobulin A was separated from a large sample of human serum from patients with multiple myeloma using a large-zone electrophoresis apparatus.

9.
Nei M, Genetics, 1978, 89(3): 583-590
The magnitudes of the systematic biases involved in sample heterozygosity and sample genetic distances are evaluated, and formulae for obtaining unbiased estimates of average heterozygosity and genetic distance are developed. It is also shown that the number of individuals to be used for estimating average heterozygosity can be very small if a large number of loci are studied and the average heterozygosity is low. The number of individuals to be used for estimating genetic distance can also be very small if the genetic distance is large and the average heterozygosity of the two species compared is low.
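The unbiased per-locus heterozygosity estimator from Nei (1978) is commonly quoted as ĥ = 2n(1 − Σxᵢ²)/(2n − 1), where n is the number of diploid individuals sampled and the xᵢ are sample allele frequencies; the abstract does not reproduce the formula, so it is stated here as an assumption. A minimal sketch with made-up example frequencies:

```python
def unbiased_heterozygosity(allele_freqs, n_individuals):
    """Bias-corrected single-locus heterozygosity in the form commonly
    attributed to Nei (1978): h = 2n(1 - sum(x_i^2)) / (2n - 1).

    allele_freqs  : sample allele frequencies at one locus (sum to 1)
    n_individuals : number of diploid individuals sampled
    """
    two_n = 2 * n_individuals                       # sampled gene copies
    raw = 1.0 - sum(p * p for p in allele_freqs)    # uncorrected estimate
    return two_n * raw / (two_n - 1)                # small-sample correction

# Hypothetical example: two alleles at 0.7 / 0.3 in 10 individuals
h = unbiased_heterozygosity([0.7, 0.3], 10)
```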

10.
Background: The relationship between stroke risk and cognitive function has not previously been examined in a large community-living sample other than the Framingham cohort. The objective of this study was to examine the relationship between 10-year risk of incident stroke and cognitive function in a large population-based sample.

11.
Homozygosity-based statistics such as Ohta's identity-in-state (IIS) excess offer the potential to measure linkage disequilibrium for multiallelic loci in small samples. However, previous observations have suggested that for independent loci, in small samples these statistics might produce values that more frequently lie on one side rather than on the other side of zero. Here we investigate the sampling properties of the IIS excess. We find that for any pair of independent polymorphic loci, as sample size n approaches infinity, the sampling distribution of the IIS excess approaches a normal distribution. For large samples, the IIS excess tends towards symmetry around zero, and the probabilities of positive and of negative IIS excess both approach 1/2. Surprisingly, however, we also find that for sufficiently large n, independent loci can be chosen so that the probability of a sample having positive IIS excess is arbitrarily close to either 0 or 1. The results are applied to interpretation of data from human populations, and we conclude that before employing homozygosity-based statistics to measure LD in a particular sample, especially for loci with either very small or very large homozygosities, it is useful to verify that loci with the observed homozygosity values are not likely to produce a large bias in IIS excess in samples of the given size.

12.
We study the properties of gene genealogies for large samples using a continuous approximation introduced by R. A. Fisher. We show that the major effect of large sample size, relative to the effective size of the population, is to increase the proportion of polymorphisms at which the mutant type is found in a single copy in the sample. We derive analytical expressions for the expected number of these singleton polymorphisms and for the total number of polymorphic, or segregating, sites that are valid even when the sample size is much greater than the effective size of the population. We use simulations to assess the accuracy of these predictions and to investigate other aspects of large-sample genealogies. Lastly, we apply our results to some data from Pacific oysters sampled from British Columbia. This illustrates that, when large samples are available, it is possible to estimate the mutation rate and the effective population size separately, in contrast to the case of small samples in which only the product of the mutation rate and the effective population size can be estimated.

13.
Noether (1987) proposed a method of sample size determination for the Wilcoxon-Mann-Whitney test. To obtain a sample size formula, he restricted himself to alternatives that differ only slightly from the null hypothesis, so that the unknown variance σ² of the Mann-Whitney statistic can be approximated by the known variance under the null hypothesis, which depends only on n. This fact is frequently forgotten in statistical practice. In this paper, we compare Noether's large-sample solution against an alternative approach based on upper bounds of σ² that is valid for any alternative. This comparison shows that Noether's approximation is sufficiently reliable for both small and large deviations from the null hypothesis.
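Noether's null-variance formula, as it is commonly quoted (the abstract does not reproduce it, so the exact form is stated here as an assumption), gives the total sample size N ≈ (z₁₋α + z₁₋β)² / (12·c(1 − c)·(p − ½)²), where c is the fraction allocated to one group and p = P(Y > X) under the alternative. A minimal sketch:

```python
from math import ceil
from statistics import NormalDist

def noether_wmw_n(p_alt, alpha=0.05, beta=0.20, c=0.5):
    """Approximate total sample size N for the Wilcoxon-Mann-Whitney
    test using the null-variance approximation (Noether-style formula,
    quoted from common usage rather than from the abstract).

    p_alt : P(Y > X) under the alternative (0.5 under the null)
    c     : fraction of the total sample allocated to group 1
    """
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided critical value
    z_b = NormalDist().inv_cdf(1 - beta)    # power requirement
    n = (z_a + z_b) ** 2 / (12 * c * (1 - c) * (p_alt - 0.5) ** 2)
    return ceil(n)

# Detecting P(Y > X) = 0.65 with 80% power at one-sided alpha = 0.05
n_total = noether_wmw_n(0.65)
```

Because the null variance is used in place of σ², the formula is simple but, as the paper above stresses, its accuracy away from the null is exactly what needs checking.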

14.
There are two cases in double sampling: case (i), when the second sample is a subsample of the preliminary large sample, and case (ii), when the second sample is not a subsample of the preliminary large sample. SISODIA and DWIVEDI (1981) proposed a ratio-cum-product-type estimator in double sampling and studied its properties under case (i). In this paper, we study the properties of the same estimator under case (ii). The estimator is found to be superior to the double-sampling linear regression estimator, the usual ratio estimator, the product estimator, and others. The estimator is also compared with the simple mean per unit for a given survey cost.

15.
Insertion-site analysis is extremely important for functional genomics research in Flammulina velutipes. Commonly used methods, such as inverse PCR, thermal asymmetric interlaced PCR (TAIL-PCR), and chromosome walking, suffer from complicated procedures, long turnaround times, poor specificity, and low efficiency. In recent years, genome resequencing has been applied, sequencing and analyzing transformants one by one, which involves a heavy workload and high cost. In this study, a matrix design was applied: DNA from multiple transformants was mixed into pools, and insertion sites were analyzed after resequencing, so that the sequencing data of M pools can resolve the insertion sites of M × (M + 1)/2 transformants. Using the matrix design, six pools were constructed to screen 21 transformants, and 21 insertion sites were obtained, showing that this method is feasible and well suited to large-sample analyses such as mutant libraries.
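The pooling arithmetic above (M pools indexing M × (M + 1)/2 transformants) can be sketched as follows; this is an illustrative pairing scheme consistent with that capacity, not necessarily the authors' exact pool layout:

```python
from itertools import combinations_with_replacement

def assign_to_pools(transformants, n_pools):
    """Matrix pooling sketch: give each transformant a unique unordered
    pair of pools (a pair may repeat one pool), so M pools can index at
    most M*(M+1)/2 transformants. The pool pair whose resequencing reads
    both contain a given insertion junction identifies the transformant.
    Illustrative scheme only, not the authors' exact layout.
    """
    capacity = n_pools * (n_pools + 1) // 2
    if len(transformants) > capacity:
        raise ValueError(f"{n_pools} pools index at most {capacity} transformants")
    pairs = combinations_with_replacement(range(n_pools), 2)
    return dict(zip(transformants, pairs))

# Six pools suffice for the paper's 21 transformants: 6 * 7 / 2 = 21
mapping = assign_to_pools([f"T{i}" for i in range(1, 22)], 6)
```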

16.
Historically, linkage mapping populations have consisted of large, randomly selected samples of progeny from a given pedigree or cell lines from a panel of radiation hybrids. We demonstrate that, to construct a map with high genome-wide marker density, it is neither necessary nor desirable to genotype all markers in every individual of a large mapping population. Instead, a reduced sample of individuals bearing complementary recombinational or radiation-induced breakpoints may be selected for genotyping subsequent markers from a large, but sparsely genotyped, mapping population. Choosing such a sample can be reduced to a discrete stochastic optimization problem for which the goal is a sample with breakpoints spaced evenly throughout the genome. We have developed several different methods for selecting such samples and have evaluated their performance on simulated and actual mapping populations, including the Lister and Dean Arabidopsis thaliana recombinant inbred population and the GeneBridge 4 human radiation hybrid panel. Our methods quickly and consistently find much-reduced samples with map resolution approaching that of the larger populations from which they are derived. This approach, which we have termed selective mapping, can facilitate the production of high-quality, high-density genome-wide linkage maps.

17.
A general model for sample size determination for collecting germplasm
The paper develops a general model for determining the minimum sample size for collecting germplasm for genetic conservation with an overall objective of retaining at least one copy of each allele with preassigned probability. It considers sampling from a large heterogeneous 2k-ploid population under a broad range of mating systems leading to a general formula applicable to a fairly large number of populations. It is found that the sample size decreases as ploidy levels increase, but increases with the increase in inbreeding. Under exclusive selfing the sample size is the same, irrespective of the ploidy level, when other parameters are held constant. Minimum sample sizes obtained for diploids by this general formula agree with those already reported by earlier workers. The model confirms the conservative characteristics of genetic variability of polysomic inheritance under chromosomal segregation.
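As a hedged illustration of the kind of calculation involved (the paper's general 2k-ploid formula with inbreeding is more elaborate and is not reproduced in the abstract), the simplest independent-gene-copies case requires the chance of entirely missing an allele of frequency p, (1 − p)^(ploidy·n), to fall below 1 − P:

```python
from math import ceil, log

def min_sample_size(p_min, prob=0.95, ploidy=2):
    """Minimum number of individuals so that an allele with frequency
    p_min appears at least once with probability >= prob, assuming
    independently sampled gene copies (a simplified sketch, not the
    paper's general inbreeding-aware 2k-ploid formula)."""
    # P(allele missed entirely) = (1 - p_min) ** (ploidy * n)
    return ceil(log(1 - prob) / (ploidy * log(1 - p_min)))

# An allele at frequency 0.05, retained with 95% probability:
# diploids need 30 individuals, tetraploids only 15 -- consistent with
# the abstract's finding that sample size decreases as ploidy increases.
n_diploid = min_sample_size(0.05)
n_tetraploid = min_sample_size(0.05, ploidy=4)
```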

18.
The chi-squared test has been a popular approach to the analysis of a 2 × 2 table when the sample sizes for the four cells are large. When the large-sample assumption does not hold, however, we need an exact testing method such as Fisher's test. When the study population is heterogeneous, we often partition the subjects into multiple strata, so that each stratum consists of homogeneous subjects and the stratified analysis has improved testing power. While the Mantel-Haenszel test has been widely used as a large-sample extension of the chi-squared test to stratified 2 × 2 tables, an extension of Fisher's test for stratified exact testing has been lacking. In this paper, we discuss an exact testing method for stratified 2 × 2 tables that reduces to the standard Fisher's test in the single-table case, and propose a sample size calculation method for it that can be useful for designing a study with rare cell frequencies.

19.
There is increasing evidence that occasional utilization area (peripheral sites), in addition to typical utilization area (home range), is important for wildlife conservation and management. Here we estimated the maximum utilization area (MUA), including both typical and occasional utilization areas, based on asymptotic curves of utilization area plotted against sample size. In previous studies, these curves have conventionally been plots of cumulative utilization area versus sample size, but this cumulative method is sensitive to stochastic effects. We propose a new method based on simulation studies where outcomes of replicated simulations are averaged to reduce stochastic effects. In this averaged method, possible combinations of sample size with the same number of location data replicated from a dataset were averaged and applied to the curves of utilization area. The cumulative method resulted in a large variation of MUA estimates, depending on the start date as well as total sample size of the dataset. In the averaged method, MUA estimates were robust against changes in the start date and total sample size. The large variation of MUA estimates arose because location data on any day including the start date are affected by unpredictable effects associated with animal activity and environmental conditions. In the averaged method, replicates of sample size resulted in a reduction of temporal stochasticity, suggesting that the method stably provides reliable estimates for MUA.

20.
A possible association of long Y chromosomes and fetal loss
Summary: Long Y chromosomes were measured in a relatively unbiased large sample of newborn infants. The proportion of prior abortions was increased twofold in mothers of long-Y infants compared with controls in the Caucasian sample. Our results indicate that an increased length of the Y chromosome may be an important cause of fetal loss.
