Similar Articles
20 similar records found
1.
Haas PJ, Liu Y, Stokes L. Biometrics 2006, 62(1):135-141
We consider the problem of estimating the number of distinct species S in a study area from the recorded presence or absence of species in each of a sample of quadrats. A generalized jackknife estimator of S is derived, along with an estimate of its variance. It is compared with the jackknife estimator for S proposed by Heltshe and Forrester and the empirical Bayes estimator of Mingoti and Meeden. We show that the empirical Bayes estimator has the form of a generalized jackknife estimator under a specific model for species distribution. We compare the new estimators of S to the empirical Bayes estimator via simulation. We characterize circumstances under which each is superior.
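For orientation, the Heltshe-Forrester first-order jackknife mentioned above adds to the observed richness a term proportional to the number of species recorded in exactly one quadrat. The sketch below is a minimal illustration of that conventional estimator only (not the generalized jackknife or its variance estimate derived in the paper); the presence/absence data are invented.

```python
import numpy as np

def jackknife_richness(presence):
    """First-order jackknife estimate of species richness.

    presence: (n_quadrats, n_species) 0/1 array of recorded presences.
    """
    presence = np.asarray(presence, dtype=bool)
    k = presence.shape[0]                      # number of quadrats sampled
    s_obs = int(presence.any(axis=0).sum())    # species seen at least once
    # f1 = number of species recorded in exactly one quadrat ("uniques")
    f1 = int((presence.sum(axis=0) == 1).sum())
    s_jack = s_obs + f1 * (k - 1) / k
    return s_obs, f1, s_jack

# toy example: 5 quadrats; columns are species, two of which occur in only one quadrat
quadrats = [[1, 1, 0, 0, 1, 0],
            [1, 0, 1, 0, 0, 0],
            [0, 1, 1, 0, 0, 0],
            [1, 0, 1, 0, 0, 1],
            [1, 1, 0, 0, 0, 0]]
print(jackknife_richness(quadrats))   # (5, 2, 6.6)
```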

2.
In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

3.
Many variables of interest in agricultural or economic surveys have skewed distributions and can equal zero. Our data are measures of sheet and rill erosion called Revised Universal Soil Loss Equation 2 (RUSLE2). Small area estimates of mean RUSLE2 erosion are of interest. We use a zero-inflated lognormal mixed effects model for small area estimation. The model combines a unit-level lognormal model for the positive RUSLE2 responses with a unit-level logistic mixed effects model for the binary indicator that the response is nonzero. In the Conservation Effects Assessment Project (CEAP) data, counties with a higher probability of nonzero responses also tend to have a higher mean among the positive RUSLE2 values. We capture this property of the data through an assumption that the pair of random effects for a county are correlated. We develop empirical Bayes (EB) small area predictors and a bootstrap estimator of the mean squared error (MSE). In simulations, the proposed predictor is superior to simpler alternatives. We then apply the method to construct EB predictors of mean RUSLE2 erosion for South Dakota counties. To obtain auxiliary variables for the population of cropland in South Dakota, we integrate a satellite-derived land cover map with a geographic database of soil properties. We provide an R Shiny application called viscover (available at https://lyux.shinyapps.io/viscover/) to visualize the overlay operations required to construct the covariates. On the basis of bootstrap estimates of the mean squared error, we conclude that the EB predictors of mean RUSLE2 erosion are superior to direct estimators.
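A minimal sketch of the data-generating structure just described may make the model easier to picture: a unit-level zero-inflated lognormal in which each county's pair of random effects, one entering the logistic part and one the lognormal part, is drawn from a correlated bivariate normal. The parameter values and names are illustrative assumptions, and the sketch does not implement the EB predictors or the bootstrap MSE estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative parameters (not estimated from the CEAP data)
n_counties, units_per_county = 20, 50
beta0_logit, beta0_log = -0.5, 1.0           # fixed intercepts
sd_b, sd_u, rho = 0.8, 0.6, 0.7              # random-effect SDs and correlation
sigma_e = 0.5                                # unit-level lognormal error SD

cov = np.array([[sd_b**2, rho * sd_b * sd_u],
                [rho * sd_b * sd_u, sd_u**2]])
# correlated county effects: b_i enters the logistic part, u_i the lognormal part
b, u = rng.multivariate_normal([0.0, 0.0], cov, size=n_counties).T

records = []
for i in range(n_counties):
    p_nonzero = 1.0 / (1.0 + np.exp(-(beta0_logit + b[i])))   # P(response > 0)
    nonzero = rng.random(units_per_county) < p_nonzero
    y = np.where(nonzero,
                 np.exp(rng.normal(beta0_log + u[i], sigma_e, units_per_county)),
                 0.0)
    records.append(y)

# because rho > 0, counties with more nonzero units also tend to have larger
# positive responses, mimicking the association noted in the CEAP data
county_means = np.array([r.mean() for r in records])
print(county_means.round(2))
```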

4.
D Clayton, J Kaldor. Biometrics 1987, 43(3):671-681
There have been many attempts in recent years to map incidence and mortality from diseases such as cancer. Such maps usually display either relative rates in each district, as measured by a standardized mortality ratio (SMR) or some similar index, or the statistical significance level for a test of the difference between the rates in that district and elsewhere. Neither of these approaches is fully satisfactory and we propose a new approach using empirical Bayes estimation. The resulting estimators represent a weighted compromise between the SMR, the overall mean relative rate, and a local mean of the relative rate in nearby areas. The compromise solution depends on the reliability of each individual SMR and on estimates of the overall amount of dispersion of relative rates over different districts.
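To make the "weighted compromise" concrete, here is a minimal gamma-Poisson empirical Bayes smoother with a moment-matched prior: each district's SMR is pulled toward the overall relative rate, more strongly when its expected count is small. The moment-based fitting and the counts are illustrative assumptions, and the spatially local (nearby-area) component described in the abstract is not included.

```python
import numpy as np

def eb_smooth_smr(O, E, eps=1e-6):
    """Gamma-Poisson empirical Bayes smoothing of SMRs (moment-matched prior).

    O: observed counts per district; E: expected counts per district.
    Returns the raw SMRs and the shrunken relative-risk estimates.
    """
    O, E = np.asarray(O, float), np.asarray(E, float)
    smr = O / E
    m = O.sum() / E.sum()                                  # overall relative rate
    # moment estimate of the between-district variance of the relative risks
    v = max(eps, np.sum(E * (smr - m) ** 2) / E.sum() - m / E.mean())
    alpha, beta = m**2 / v, m / v                          # Gamma(alpha, beta) prior
    theta_eb = (O + alpha) / (E + beta)                    # posterior means
    return smr, theta_eb

# districts with small expected counts are shrunk most strongly toward the mean
O = [0, 8, 10, 25, 120]
E = [1.2, 2.5, 9.0, 30.0, 55.0]
raw, smoothed = eb_smooth_smr(O, E)
print(np.round(raw, 2), np.round(smoothed, 2))
```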

5.
Microarray technology allows investigators the opportunity to measure expression levels of thousands of genes simultaneously. However, investigators are also faced with the challenge of simultaneous estimation of gene expression differences for thousands of genes with very small sample sizes. Traditional estimators of differences between treatment means (ordinary least squares estimators, or OLS) are not the best estimators if interest is in estimation of gene expression differences for an ensemble of genes. In the case that gene expression differences are regarded as exchangeable samples from a common population, estimators are available that result in much smaller average mean-square error across the population of gene expression difference estimates. We have simulated the application of such an estimator, namely an empirical Bayes (EB) estimator of random effects in a hierarchical linear model (normal-normal). Simulation results revealed mean-square error as low as 0.05 times the mean-square error of OLS estimators (i.e., the difference between treatment means). We applied the analysis to an example dataset as a demonstration of the shrinkage of EB estimators and of the reduction in mean-square error, i.e., increase in precision, associated with EB estimators in this analysis. The method described here is implemented in software available at .
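A minimal sketch of the kind of normal-normal shrinkage being simulated: each gene's OLS difference is pulled toward the grand mean by a factor determined by a moment estimate of the between-gene variance. The moment-based fitting, equal standard errors, and simulation settings are illustrative assumptions rather than the estimator implemented in the paper's software.

```python
import numpy as np

def eb_shrink_differences(d, se):
    """Normal-normal empirical Bayes shrinkage of per-gene treatment differences.

    d:  OLS estimates of the treatment difference for each gene
    se: their standard errors (taken as known for this sketch)
    """
    d, se = np.asarray(d, float), np.asarray(se, float)
    mu = d.mean()                                    # grand mean difference
    # moment estimate of the between-gene variance tau^2
    tau2 = max(0.0, d.var(ddof=1) - np.mean(se**2))
    shrink = tau2 / (tau2 + se**2)                   # per-gene shrinkage factor
    return mu + shrink * (d - mu)

rng = np.random.default_rng(0)
true_diff = rng.normal(0.0, 0.3, size=5000)          # most genes change little
se = np.full(5000, 0.5)                              # small samples -> noisy OLS
ols = true_diff + rng.normal(0.0, se)
eb = eb_shrink_differences(ols, se)

mse_ols = np.mean((ols - true_diff) ** 2)
mse_eb = np.mean((eb - true_diff) ** 2)
print(round(mse_ols, 3), round(mse_eb, 3))           # EB has the smaller MSE
```

With these illustrative settings the EB estimates attain roughly a quarter of the OLS mean-square error, the kind of ensemble-level gain the abstract reports.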

6.
As larger, more complex data sets are being used to infer phylogenies, accuracy of these phylogenies increasingly requires models of evolution that accommodate heterogeneity in the processes of molecular evolution. We investigated the effect of improper data partitioning on phylogenetic accuracy, as well as the type I error rate and sensitivity of Bayes factors, a commonly used method for choosing among different partitioning strategies in Bayesian analyses. We also used Bayes factors to test empirical data for the need to divide data in a manner that has no expected biological meaning. Posterior probability estimates are misleading when an incorrect partitioning strategy is assumed. The error was greatest when the assumed model was underpartitioned. These results suggest that model partitioning is important for large data sets. Bayes factors performed well, giving a 5% type I error rate, which is remarkably consistent with standard frequentist hypothesis tests. The sensitivity of Bayes factors was found to be quite high when the across-class model heterogeneity reflected that of empirical data. These results suggest that Bayes factors represent a robust method of choosing among partitioning strategies. Lastly, results of tests for the inclusion of unexpected divisions in empirical data mirrored the simulation results, although the outcome of such tests is highly dependent on accounting for rate variation among classes. We conclude by discussing other approaches for partitioning data, as well as other applications of Bayes factors.
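For reference, once the marginal likelihoods of two partitioning strategies are in hand (for example from stepping-stone or harmonic-mean estimation in the phylogenetic software), the Bayes factor comparison reduces to simple bookkeeping, as in the sketch below. The 2 ln BF cutoffs are the conventional Kass-Raftery thresholds, and the marginal log likelihoods are made up.

```python
def compare_partitions(log_ml_simple, log_ml_partitioned):
    """Compare two partitioning strategies via 2*ln(Bayes factor)."""
    two_ln_bf = 2.0 * (log_ml_partitioned - log_ml_simple)
    if two_ln_bf > 10:
        verdict = "very strong support for the partitioned model"
    elif two_ln_bf > 6:
        verdict = "strong support for the partitioned model"
    elif two_ln_bf > 2:
        verdict = "positive support for the partitioned model"
    else:
        verdict = "little or no support for adding partitions"
    return two_ln_bf, verdict

# made-up marginal log likelihoods for an unpartitioned vs. a partitioned model
print(compare_partitions(log_ml_simple=-15234.7, log_ml_partitioned=-15201.2))
```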

7.
Kitada S, Kitakado T, Kishino H. Genetics 2007, 177(2):861-873
Populations often have very complex hierarchical structure. Therefore, it is crucial in genetic monitoring and conservation biology to have a reliable estimate of the pattern of population subdivision. FST's for pairs of sampled localities or subpopulations are crucial statistics for the exploratory analysis of population structures, such as cluster analysis and multidimensional scaling. However, the estimation of FST is not precise enough to reliably estimate the population structure and the extent of heterogeneity. This article proposes an empirical Bayes procedure to estimate locus-specific pairwise FST's. The posterior mean of the pairwise FST can be interpreted as a shrinkage estimator, which substantially reduces the variance of conventional estimators at the expense of a small bias. The global FST of a population generally varies among loci in the genome. Our maximum-likelihood estimates of global FST's can be used as sufficient statistics to estimate the distribution of FST in the genome. We demonstrate the efficacy and robustness of our model by simulation and by an analysis of the microsatellite allele frequencies of the Pacific herring. The heterogeneity of the global FST in the genome is discussed on the basis of the estimated distribution of the global FST for the herring and examples of human single nucleotide polymorphisms (SNPs).

8.

Background  

The small sample sizes often used for microarray experiments result in poor estimates of variance if each gene is considered independently. Yet accurately estimating variability of gene expression measurements in microarray experiments is essential for correctly identifying differentially expressed genes. Several recently developed methods for testing differential expression of genes utilize hierarchical Bayesian models to "pool" information from multiple genes. We have developed a statistical testing procedure that further improves upon current methods by incorporating the well-documented relationship between the absolute gene expression level and the variance of gene expression measurements into the general empirical Bayes framework.

9.
Let us consider a general population π. Each object belonging to the population π is characterized by a pair of correlated random vectors (X, Y). Both X and Y may be mixtures of discrete and continuous random variables. It will be assumed that our population π consists of k groups π1, …, πk, which depend on the value of the random vector Y. A certain object, which is an element of one of the k groups π1, …, πk, has to be classified into the correct group. Knowledge of the value of the random vector Y would permit its correct classification, but the observation of this vector is difficult or dangerous, and we must assign the individual on the basis of the observation of the random vector X. The classification procedure is based on a randomized decision function δ* that minimizes the risk function, i.e., the Bayes decision function. We also give two empirical Bayes classification rules, i.e., decision functions based on a sample from the population π, whose risks converge to the Bayes risk as the sample size increases.
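As a concrete reference point, under 0-1 loss the risk-minimizing (Bayes) rule assigns an object to the group with the largest value of p_k f_k(x), i.e., the largest posterior probability given X. The sketch below illustrates that rule with known Gaussian group densities; the densities and prior probabilities are invented, and the paper's empirical Bayes rules, which estimate these quantities from a sample, are not reproduced.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bayes_classify(x, priors, densities):
    """Bayes rule under 0-1 loss: assign x to the group with the largest
    posterior probability, proportional to p_k * f_k(x)."""
    scores = np.array([p * f.pdf(x) for p, f in zip(priors, densities)])
    return int(np.argmax(scores)), scores / scores.sum()

# two groups of the population, each described by a (here Gaussian) density of X
priors = [0.6, 0.4]
densities = [multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)),
             multivariate_normal(mean=[2.0, 1.0], cov=np.eye(2))]
group, posterior = bayes_classify(np.array([1.5, 0.8]), priors, densities)
print(group, posterior.round(3))
```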

10.
The currently practiced methods of significance testing in microarray gene expression profiling are highly unstable and tend to be very low in power. These undesirable properties are due to the nature of multiple testing procedures, as well as extremely strong and long-ranged correlations between gene expression levels. In an earlier publication, we identified a special structure in gene expression data that produces a sequence of weakly dependent random variables. This structure, termed the delta-sequence, lies at the heart of a new methodology for selecting differentially expressed genes in nonoverlapping gene pairs. The proposed method has two distinct advantages: (1) it leads to dramatic gains in terms of the mean numbers of true and false discoveries, and in the stability of the results of testing; and (2) its outcomes are entirely free from the log-additive array-specific technical noise. We demonstrate the usefulness of this approach in conjunction with the nonparametric empirical Bayes method. The proposed modification of the empirical Bayes method leads to significant improvements in its performance. The new paradigm arising from the existence of the delta-sequence in biological data offers considerable scope for future developments in this area of methodological research.

11.
H. W. Deng, M. Lynch. Genetics 1996, 144(1):349-360
The rate and average effects of spontaneous deleterious mutations are important determinants of the evolution of breeding systems and of the vulnerability of small populations to extinction. Nevertheless, few attempts have been made to estimate the properties of such mutations, and those studies that have been performed have been extremely labor intensive, relying on long-term, laboratory mutation-accumulation experiments. We present an alternative to the latter approach. For populations in which the genetic variance for fitness is a consequence of selection-mutation balance, the mean fitness and genetic variance of fitness in outbred and inbred generations can be expressed as simple functions of the genomic mutation rate, average homozygous effect and average dominance coefficient of new mutations. Using empirical estimates for the mean and genetic variance of fitness, these expressions can then be solved to obtain joint estimates of the deleterious-mutation parameters. We employ computer simulations to evaluate the degree of bias of the estimators and present some general recommendations on the application of the technique. Our procedures provide some hope for obtaining estimates of the properties of deleterious mutations from a wide phylogenetic range of species as well as a mechanism for testing the validity of alternative models for the maintenance of genetic variance for fitness.

12.
Affinity capture of DNA methylation combined with high-throughput sequencing strikes a good balance between the high cost of whole genome bisulfite sequencing and the low coverage of methylation arrays. We present BayMeth, an empirical Bayes approach that uses a fully methylated control sample to transform observed read counts into regional methylation levels. In our model, inefficient capture can readily be distinguished from low methylation levels. BayMeth improves on existing methods, allows explicit modeling of copy number variation, and offers computationally efficient analytical mean and variance estimators. BayMeth is available in the Repitools Bioconductor package.

13.
The current variance estimators for Jukes and Cantor's one-parameter model and Kimura's two-parameter model tend to underestimate the true variances when the true proportion of differences between the two sequences under study is not small. In this paper, we developed improved variance estimators, using a higher-order Taylor expansion and empirical methods. The new estimators outperform the conventional estimators and provide accurate estimates of the true variances.
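For context, the conventional estimator referred to above pairs the Jukes-Cantor distance with its first-order delta-method variance; the sketch below computes both, and it is this first-order approximation that becomes unreliable as the observed proportion of differences approaches the 0.75 saturation point. The higher-order corrections developed in the paper are not reproduced here.

```python
import math

def jukes_cantor(p, n):
    """Jukes-Cantor distance and its conventional first-order (delta-method) variance.

    p: observed proportion of differing sites; n: number of sites compared.
    """
    if p >= 0.75:
        raise ValueError("distance undefined at or beyond the 0.75 saturation point")
    d = -0.75 * math.log(1.0 - 4.0 * p / 3.0)
    var_d = p * (1.0 - p) / (n * (1.0 - 4.0 * p / 3.0) ** 2)
    return d, var_d

# the first-order approximation deteriorates as p grows toward 0.75
for p in (0.05, 0.30, 0.60):
    d, v = jukes_cantor(p, n=1000)
    print(f"p={p:.2f}  d={d:.3f}  SE={v**0.5:.3f}")
```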

14.
MOTIVATION: The classification of samples using gene expression profiles is an important application in areas such as cancer research and environmental health studies. However, the classification is usually based on a small number of samples, and each sample is a long vector of thousands of gene expression levels. An important issue in parametric modeling for so many gene expression levels is the control of the number of nuisance parameters in the model. Large models often lead to intensive or even intractable computation, while small models may be inadequate for complex data. METHODOLOGY: We propose a two-step empirical Bayes classification method as a solution to this issue. At the first step, we use the model-based cluster algorithm with a non-traditional purpose of assigning gene expression levels to form abundance groups. At the second step, by assuming the same variance for all the genes in the same group, we substantially reduce the number of nuisance parameters in our statistical model. RESULTS: The proposed model is more parsimonious, which leads to efficient computation under an empirical Bayes estimation procedure. We consider two real examples and simulate data using our method. Desired low classification error rates are obtained even when a large number of genes are pre-selected for class prediction.
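A minimal sketch of the two-step idea, grouping genes by abundance and then pooling a common variance within each group, is given below. For simplicity it uses k-means on the mean expression level in place of the model-based clustering used by the authors, the data and tuning choices are invented, and the subsequent empirical Bayes classification step is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# toy expression matrix: 2000 genes x 6 samples, variance increasing with abundance
abundance = rng.uniform(2, 12, size=2000)
expr = abundance[:, None] + rng.normal(0, 0.05 * abundance[:, None], size=(2000, 6))

# step 1: cluster genes into abundance groups (k-means stands in for model-based clustering)
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    expr.mean(axis=1).reshape(-1, 1))

# step 2: assume a common variance within each abundance group, pooling across its genes
gene_var = expr.var(axis=1, ddof=1)
pooled_var = np.array([gene_var[groups == g].mean() for g in range(5)])
print(pooled_var.round(3))
```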

15.
This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

16.
Chen DG, Carter EM, Hubert JJ, Kim PT. Biometrics 1999, 55(4):1038-1043
This article presents a new empirical Bayes estimator (EBE) and a shrinkage estimator for determining the relative potency from several multivariate bioassays by incorporating prior information on the model parameters based on Jeffreys' rules. The EBE can account for any extra variability among the bioassays, and if this extra variability is 0, then the EBE reduces to the maximum likelihood estimator for combinations of multivariate bioassays. The shrinkage estimator turns out to be a compromise of the prior information and the estimator from each multivariate bioassay, with the weights depending on the prior variance.

17.
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method's fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between the approaches are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.

18.
Hierarchical Bayes models for cDNA microarray gene expression
cDNA microarrays are used in many contexts to compare mRNA levels between samples of cells. Microarray experiments typically give us expression measurements on 1000-20,000 genes, but with few replicates for each gene. Traditional methods using means and standard deviations to detect differential expression are not satisfactory in this context. A handful of alternative statistics have been developed, including several empirical Bayes methods. In this paper we present two full hierarchical Bayes models for detecting gene expression, of which one (D) describes our microarray data very well. We also compare the full Bayes and empirical Bayes approaches with respect to model assumptions, false discovery rates and computer running time. The proposed models are compared to existing empirical Bayes models in a simulation study and for a set of data (Yuen et al., 2002), where 27 genes have been categorized by quantitative real-time PCR. It turns out that the existing empirical Bayes methods perform at least as well as the full Bayes ones.

19.
The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs to the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.
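To illustrate why robust inputs matter, the sketch below contrasts unconstrained minimum-variance weights computed from the classical sample covariance with weights computed from a robust covariance estimate. As a stand-in it uses scikit-learn's Minimum Covariance Determinant estimator rather than the minimum-pseudodistance estimators proposed in the paper, and the simulated returns and contamination level are invented.

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(3)

# simulated daily returns of 4 assets, with a handful of gross outliers mixed in
clean = rng.multivariate_normal(mean=[0.0005, 0.0004, 0.0006, 0.0003],
                                cov=0.0001 * (np.eye(4) + 0.3), size=500)
outliers = rng.normal(0.0, 0.05, size=(10, 4))
returns = np.vstack([clean, outliers])

def min_variance_weights(cov):
    """Unconstrained minimum-variance portfolio: weights proportional to inv(Sigma) @ 1."""
    w = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return w / w.sum()

# classical plug-in covariance vs. a robust (MCD) alternative
w_classical = min_variance_weights(np.cov(returns, rowvar=False))
w_robust = min_variance_weights(MinCovDet(random_state=0).fit(returns).covariance_)
print(w_classical.round(3), w_robust.round(3))
```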

20.
Codon-based substitution models have been widely used to identify amino acid sites under positive selection in comparative analysis of protein-coding DNA sequences. The nonsynonymous-synonymous substitution rate ratio (dN/dS, denoted omega) is used as a measure of selective pressure at the protein level, with omega > 1 indicating positive selection. Statistical distributions are used to model the variation in omega among sites, allowing a subset of sites to have omega > 1 while the rest of the sequence may be under purifying selection with omega < 1. An empirical Bayes (EB) approach is then used to calculate posterior probabilities that a site comes from the site class with omega > 1. Current implementations, however, use the naive EB (NEB) approach and fail to account for sampling errors in maximum likelihood estimates of model parameters, such as the proportions and omega ratios for the site classes. In small data sets lacking information, this approach may lead to unreliable posterior probability calculations. In this paper, we develop a Bayes empirical Bayes (BEB) approach to the problem, which assigns a prior to the model parameters and integrates over their uncertainties. We compare the new and old methods on real and simulated data sets. The results suggest that in small data sets the new BEB method does not generate false positives as did the old NEB approach, while in large data sets it retains the good power of the NEB approach for inferring positively selected sites.
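For orientation, the NEB calculation criticized above is a plug-in mixture posterior: with the class proportions and omega ratios fixed at their MLEs, the posterior probability that site h belongs to class k is proportional to p_k f(x_h | omega_k). The sketch below shows only that bookkeeping with invented per-site likelihoods; the BEB refinement, which integrates over the uncertainty in the parameters, is not implemented.

```python
import numpy as np

def neb_site_posteriors(site_likelihoods, class_proportions):
    """Naive empirical Bayes posteriors over site classes.

    site_likelihoods: (n_sites, n_classes) array of f(x_h | omega_k), computed
                      under the codon model with parameters fixed at their MLEs.
    class_proportions: MLEs of the mixing proportions p_k.
    """
    L = np.asarray(site_likelihoods, float)
    p = np.asarray(class_proportions, float)
    joint = L * p                                  # p_k * f(x_h | omega_k)
    return joint / joint.sum(axis=1, keepdims=True)

# invented per-site likelihoods for three site classes (omega < 1, omega = 1, omega > 1)
L = np.array([[3e-5, 1e-5, 2e-7],
              [1e-8, 2e-8, 5e-6],
              [2e-6, 2e-6, 1e-6]])
p_hat = np.array([0.7, 0.25, 0.05])
post = neb_site_posteriors(L, p_hat)
positive = post[:, 2] > 0.95                       # sites assigned to the omega > 1 class
print(np.round(post, 3), positive)
```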
