Similar Articles
 20 similar articles retrieved.
1.
Stochastic search variable selection (SSVS) is a Bayesian variable selection method that employs covariate-specific discrete indicator variables to select which covariates (e.g., molecular markers) are included in or excluded from the model. We present a new variant of SSVS in which, instead of discrete indicator variables, we use continuous-scale weighting variables (which take values between zero and one) to select covariates into the model. The improved model performance is demonstrated and compared with standard SSVS using simulated and real quantitative trait locus mapping datasets. In our SSVS variant, decisions about phenotype-genotype associations are based on the median of the posterior distribution or on Bayes factors. We also show that using continuous-scale weighting variables can substantially improve the mixing properties of Markov chain Monte Carlo sampling compared with standard SSVS, and that the separation of association signals from nonsignals (control of the noise level) appears to be more efficient. Thus, the novel method provides an efficient new framework for SSVS analysis that additionally yields the whole posterior distribution of the pseudo-indicators, which conveys more information and may aid decision making.
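A minimal sketch of the continuous-weight idea, assuming a PyMC implementation with Beta(1, 1) weights and a Gaussian trait model; the priors, sampler settings, and simulated data are illustrative assumptions, not the authors' exact specification. Each effect enters the mean as gamma_j * beta_j, and the posterior median of gamma_j plays the role of the pseudo-indicator.

```python
# Sketch of SSVS with continuous-scale weights (illustrative priors, not the
# paper's exact model): gamma_j in (0, 1) multiplies each effect beta_j, and
# its posterior median acts as a pseudo-indicator for covariate j.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:2] = [1.0, -0.8]                       # only two markers have real effects
y = X @ true_beta + rng.normal(scale=0.5, size=n)

with pm.Model():
    gamma = pm.Beta("gamma", alpha=1.0, beta=1.0, shape=p)   # continuous inclusion weights
    beta = pm.Normal("beta", mu=0.0, sigma=1.0, shape=p)     # effect sizes
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = pm.math.dot(X, gamma * beta)                        # weighted effects enter the mean
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Posterior medians of gamma serve as pseudo-indicators; thresholding at 0.5
# mimics the median-based decision rule mentioned in the abstract.
post_median = idata.posterior["gamma"].median(dim=("chain", "draw")).values
print(np.round(post_median, 2))
```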

2.
Statistical models are simple mathematical rules derived from empirical data that describe the association between an outcome and several explanatory variables. In a typical modeling situation, statistical analysis involves a large number of potential explanatory variables, and frequently only partial subject-matter knowledge is available. Therefore, selecting the most suitable variables for a model in an objective and practical manner is usually a non-trivial task. We briefly revisit the purposeful variable selection procedure suggested by Hosmer and Lemeshow, which combines significance and change-in-estimate criteria for variable selection, and critically discuss the change-in-estimate criterion. We show that using a significance-based threshold for the change-in-estimate criterion reduces to a simple significance-based selection of variables, as if the change-in-estimate criterion were not considered at all. Various extensions to the purposeful variable selection procedure are suggested. We propose to use backward elimination augmented with a standardized change-in-estimate criterion on the quantity of interest usually reported and interpreted in a model. Augmented backward elimination has been implemented in a SAS macro for linear, logistic, and Cox proportional hazards regression. The algorithm and its implementation were evaluated by means of a simulation study. Augmented backward elimination tends to select larger models than backward elimination and approximates the unselected model up to negligible differences in point estimates of the regression coefficients. On average, regression coefficients obtained after applying augmented backward elimination were less biased, relative to the coefficients of correctly specified models, than those after backward elimination. In summary, we propose augmented backward elimination as a reproducible variable selection algorithm that gives the analyst more flexibility in adapting model selection to a specific statistical modeling situation.
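A rough sketch of the augmented backward elimination idea, assuming a linear model fit with Python/statsmodels rather than the SAS macro; the thresholds and the simplified stopping rule are illustrative, not the macro's defaults.

```python
# Sketch: backward elimination augmented with a change-in-estimate check on an
# exposure of interest.  Simplification: the full procedure would flag a
# "protective" candidate and try the next weakest rather than stop.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def augmented_backward_elimination(data, outcome, candidates, exposure,
                                   alpha=0.157, tau=0.05):
    """Drop non-significant candidates whose removal changes the exposure
    coefficient by less than 100*tau percent."""
    selected = list(candidates)
    while selected:
        X = sm.add_constant(data[[exposure] + selected])
        fit = sm.OLS(data[outcome], X).fit()
        pvals = fit.pvalues[selected]
        weakest = pvals.idxmax()
        if pvals[weakest] <= alpha:
            break                                  # everything left is significant
        reduced = [v for v in selected if v != weakest]
        X_red = sm.add_constant(data[[exposure] + reduced])
        fit_red = sm.OLS(data[outcome], X_red).fit()
        change = abs(fit_red.params[exposure] - fit.params[exposure])
        if change / abs(fit.params[exposure]) > tau:
            break                                  # removal distorts the estimate: keep it
        selected = reduced                         # otherwise drop it and continue
    return selected

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(300, 5)), columns=["exp", "x1", "x2", "x3", "x4"])
df["y"] = 1.0 * df["exp"] + 0.8 * df["x1"] + rng.normal(size=300)
print(augmented_backward_elimination(df, "y", ["x1", "x2", "x3", "x4"], "exp"))
```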

3.
Xu S. Biometrics 2007;63(2):513–521.
The genetic variance of a quantitative trait is often controlled by the segregation of multiple interacting loci. Linear model regression analysis is usually applied to estimate and test the effects of these quantitative trait loci (QTL). Including all the main effects and the effects of interaction (epistatic effects), the dimension of the linear model can be extremely high. Variable selection via stepwise regression or stochastic search variable selection (SSVS) is the common procedure for epistatic-effect QTL analysis. These methods are computationally intensive, yet they may not be optimal. The LASSO (least absolute shrinkage and selection operator) method is computationally more efficient than the above methods and has therefore been widely used in regression analysis for large models. However, LASSO has never been applied to genetic mapping for epistatic QTL, where the number of model effects is typically many times larger than the sample size. In this study, we developed an empirical Bayes method (E-BAYES) to map epistatic QTL under the mixed model framework. We also tested the feasibility of using LASSO to estimate epistatic effects, examined the fully Bayesian SSVS, and reevaluated the penalized likelihood (PENAL) method in mapping epistatic QTL. Simulation studies showed that all of the above methods performed satisfactorily. However, E-BAYES appears to outperform all other methods in terms of minimizing the mean-squared error (MSE), with relatively short computing time. Application of the new method to real data is demonstrated using a barley dataset.
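A hedged sketch of the LASSO step for epistatic effects, assuming scikit-learn and a -1/1 marker coding; the E-BAYES and fully Bayesian SSVS methods compared in the abstract are not reproduced here. The point is that the expanded design (main effects plus all pairwise products) can have many more columns than individuals, which LASSO handles by shrinkage.

```python
# Sketch: LASSO over main-effect and pairwise (epistatic) interaction terms;
# marker coding, simulated effects, and tuning choices are illustrative only.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(42)
n, m = 150, 20                                    # individuals, markers
X = rng.choice([-1.0, 1.0], size=(n, m))          # marker genotypes
y = 1.2 * X[:, 0] - 0.9 * X[:, 3] + 1.5 * X[:, 1] * X[:, 7] + rng.normal(size=n)

# Expand to main effects + all pairwise products (epistatic terms):
# m + m*(m-1)/2 columns, here 210, exceeding the sample size of 150.
expand = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
Z = expand.fit_transform(X)

lasso = LassoCV(cv=5, random_state=0).fit(Z, y)
names = expand.get_feature_names_out([f"q{j}" for j in range(m)])
for name, coef in zip(names, lasso.coef_):
    if abs(coef) > 0.1:                           # crude threshold for display
        print(f"{name}: {coef:.2f}")
```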

4.
The augmentation procedure of G.W. Moore leads to correct estimates of the total number of nucleotide substitutions separating two genes descended from a common ancestor, provided the data base is sufficiently dense. These estimates are in agreement with the true distance values from simulations of known evolutionary pathways. The estimates are, on average, unbiased: they neither overaugment nor underaugment seriously. The variance of the population of augmented distance values accurately reflects the variance of the population of true distance values and is thus not abnormally large due to procedural defects in the algorithm. The augmented distances are in agreement with stochastic models tested on real data when the latter take proper account of the restricted mutability of codons resulting from natural selection. When the experimental data base is not dense, the augmented distance values and population variance may underestimate both the true distance values and their variance. A logical consequence is that there are significant and numerous errors in the ancestral sequences reconstructed by the parsimony principle from such data bases. The restrictions, resulting from natural selection, on the mutability of different nucleotide sites are shown to bear critically on the accuracy of estimates of the total number of nucleotide replacements made by stochastic models.
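For illustration only: the sketch below uses the simple Jukes-Cantor correction, a basic stochastic-model estimate of substitutions per site, to show how an observed proportion of differing sites is inflated toward the true number of substitutions. This is an assumed stand-in for the class of corrections discussed, not Moore's augmentation procedure.

```python
# Illustration only: a Jukes-Cantor corrected distance; NOT Moore's augmentation.
import math

def jukes_cantor(seq1: str, seq2: str) -> float:
    """Estimate substitutions per site from two aligned nucleotide sequences."""
    assert len(seq1) == len(seq2)
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    p = diffs / len(seq1)                   # observed proportion of differing sites
    if p >= 0.75:
        return float("inf")                 # correction undefined (saturation)
    return -0.75 * math.log(1 - 4.0 * p / 3.0)

a = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
b = "ATGGCCATTGTAATGGGCCGTTGAAAGGGTGCCCGTTAG"
print(round(jukes_cantor(a, b), 4))         # exceeds the raw proportion p whenever p > 0
```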

5.
Mutation spectra recovered from lacI transgenic animals exposed in separate experiments to tris-(2,3-dibromopropyl)phosphate (TDBP) or aflatoxin B1 (AFB1) were examined using log-linear analysis. Log-linear analysis is a categorical procedure that analyzes contingency table data. Expected contingency table cell counts are estimated by maximum likelihood as effects of main variables and variable interactions. Evaluation of hierarchical models of decreasing complexity indicates when significant explanatory power is lost by the sequential omission of interactions between variables. Use of this technique allows construction of the most parsimonious models that account for the mutation spectra obtained in the two experiments. The resulting statistical models are consistent with previous analyses of these data and with biological explanations for the causes of the observed spectra.
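A hedged sketch of the log-linear workflow described here, assuming a Poisson GLM fit with statsmodels; the treatment-by-mutation-class counts are invented, not the TDBP or AFB1 spectra. Dropping the interaction term and testing the change in deviance shows when "significant explanatory power is lost".

```python
# Sketch: hierarchical log-linear models for a 2-way contingency table of
# mutation counts (treatment x mutation class); the counts below are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

counts = pd.DataFrame({
    "treatment": ["TDBP"] * 4 + ["AFB1"] * 4,
    "mut_class": ["GC>TA", "GC>AT", "AT>TA", "other"] * 2,
    "n":         [34, 60, 12, 22, 71, 25, 9, 18],
})

# Independence model (main effects only) vs saturated model (with interaction):
indep = smf.glm("n ~ treatment + mut_class", data=counts,
                family=sm.families.Poisson()).fit()
satur = smf.glm("n ~ treatment * mut_class", data=counts,
                family=sm.families.Poisson()).fit()

# Likelihood-ratio (deviance) test: does dropping the interaction lose
# significant explanatory power?  A small p-value means the spectra differ.
lr = indep.deviance - satur.deviance
df = indep.df_resid - satur.df_resid
print(f"G2 = {lr:.2f} on {df} df, p = {stats.chi2.sf(lr, df):.4g}")
```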

6.
Analysis of variance (ANOVA) and log-linear analyses of time-budget data from a study of sloth bear enclosure utilization are compared. Two sampling models that plausibly underlie such data are discussed. Either could lead to an analysis of variance, but only one to a log-linear analysis. Given an appropriate sampling model and appropriate data, there is much to recommend log-linear analysis, despite its unfamiliarity to most animal behaviorists. One need not worry whether distributional assumptions are violated. Moreover, the data analyzed are the data collected, not estimates derived from those data, and thus no power is lost through a data-reduction step. No matter what analysis is used, effect size should be taken into consideration. Multiple R² can be used for ANOVA, but no directly comparable statistic exists for log-linear analyses. One possible candidate for a log-linear R² analog is discussed here and appears to give sensible and interpretable results.
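One simple candidate for a log-linear R² analog is the proportional reduction in deviance relative to a null model; the sketch below computes it for a toy Poisson log-linear fit. This is an assumed illustration of the kind of statistic discussed, not necessarily the exact analog proposed in the paper.

```python
# Sketch: a deviance-based R^2 analog for a log-linear (Poisson) model;
# the time-budget counts are invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

tb = pd.DataFrame({
    "zone":   ["A", "A", "B", "B", "C", "C"],
    "period": ["am", "pm"] * 3,
    "n":      [40, 10, 25, 30, 5, 45],      # hypothetical enclosure-use counts
})

null  = smf.glm("n ~ 1", data=tb, family=sm.families.Poisson()).fit()
model = smf.glm("n ~ zone + period", data=tb, family=sm.families.Poisson()).fit()

# Proportional reduction in deviance: one possible log-linear R^2 analog.
r2_dev = 1.0 - model.deviance / null.deviance
print(f"deviance-based R^2 analog: {r2_dev:.3f}")
```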

7.
A log-linear modeling framework for selective mixing.
Nonrandom mixing can significantly alter the diffusion path of an infectious disease such as AIDS that requires intimate contact. Recent attempts to model this effect have sought a general framework capable of representing both simple and arbitrarily complicated mixing structures, and of solving the balancing problem in a nonequilibrium multigroup population. Log-linear models are proposed here as a general framework for solving the first problem. This approach offers several additional benefits: the parameters used to govern the mixing have a simple, intuitive interpretation; the framework provides a statistically sound basis for estimating these parameters from mixing-matrix data; and the resulting estimates are easily integrated into compartmental models for diffusion. A modified selection model is proposed to solve the second problem of generalizing the selection process to nonequilibrium populations. The distribution of contacts under this model is derived and is found to satisfy the assumptions of statistical inference for log-linear models. Together, these techniques provide an integrated and flexible framework for modeling the role of selective mixing in the spread of disease.

8.
Suh YJ, Ye KQ, Mendell NR. Human Heredity 2003;55(2–3):147–152.
OBJECTIVES: We apply and evaluate the intrinsic Bayes factor (IBF) of Berger and Pericchi [J Am Stat Assoc 1996;91:109-122; Bayesian Statistics, Oxford University Press, vol 5, 1996] in linkage analyses done using the stochastic search variable selection (SSVS) method of George and McCulloch [J Am Stat Assoc 1993;88:881-889], as proposed by Suh et al. [Genet Epidemiol 2001;21(suppl 1):S706-S711]. METHODS: We consider 20 simulations of linkage data obtained under two different generating models. SSVS is applied to a multiple regression extension [Genet Epidemiol 2001;21(suppl 1):S706-S711] of the Haseman-Elston methods [Behav Genet 1972;2:3-19; Genet Epidemiol 2000;19:1-17]. Four prior distributions are considered. We apply the IBF criterion to those samples where different prior distributions result in different top models. RESULTS: In the samples where three different models were obtained using the four priors, application of the IBF eliminated one of the two wrong models in 4 out of 5 situations. Further elimination using the IBF criterion in situations with two different subsets did not perform as well. CONCLUSIONS: When different priors result in three or more different subsets of markers, the IBF can be used to reduce this number to two for consideration. When two subsets result, we recommend that both be considered.

9.
Clinicians are often interested in the effect of covariates on survival probabilities at prespecified study times. Because different factors can be associated with the risk of short-term and long-term failure, a flexible modeling strategy is pursued. Given a set of multiple candidate working models, an objective methodology is proposed that aims to construct consistent and asymptotically normal estimators of regression coefficients and average prediction error for each working model that are free of the nuisance censoring variable. It requires the conditional distribution of censoring given covariates to be modeled. The model selection strategy uses step-up or step-down multiple hypothesis testing procedures that control either the proportion of false positives or the generalized familywise error rate when comparing models based on estimates of average prediction error. The problem can in fact be cast as a missing data problem, where augmented inverse probability weighted complete-case estimators of regression coefficients and prediction error can be used (Tsiatis, 2006, Semiparametric Theory and Missing Data). A simulation study and an analysis of a recent AIDS trial are provided.

10.
Mark-recapture techniques are widely used to estimate the size of wildlife populations. However, in cetacean photo-identification studies, it is often impractical to sample across the entire range of the population. Consequently, negatively biased population estimates can result when large portions of a population are unavailable for photographic capture. To overcome this problem, we propose that individuals be sampled from a number of discrete sites located throughout the population's range. The recapture of individuals between sites can then be presented in a simple contingency table, where the cells refer to discrete categories formed by combinations of the study sites. We present a Bayesian framework for fitting a suite of log-linear models to these data, with each model representing a different hypothesis about dependence between sites. Modeling dependence facilitates the analysis of opportunistic photo-identification data from study sites located due to convenience rather than by design. Because inference about population size is sensitive to model choice, we use Bayesian Markov chain Monte Carlo approaches to estimate posterior model probabilities, and base inference on a model-averaged estimate of population size. We demonstrate this method in the analysis of photographic mark-recapture data for bottlenose dolphins from three coastal sites around NE Scotland.

11.
Goodman LA. Biometrics 1983;39(1):149–160.
To analyze the dependence of a qualitative (dichotomous or polytomous) response variable upon one or more qualitative explanatory variables, log-linear models for frequencies are compared with log-linear models for odds, when the categories of the response variable are ordered and the categories of each explanatory variable may be either ordered or unordered. The log-linear models for odds express the odds (or log odds) pertaining to adjacent response categories in terms of appropriate multiplicative (or additive) factors. These models include the 'null log-odds model', the 'uniform log-odds model', the 'parallel log-odds model', and other log-linear models for the odds. With these models, the dependence of the response variable (with ordered categories) can be analyzed in a manner analogous to the usual multiple regression analysis and the related analysis of variance and analysis of covariance. Application of log-linear models for the odds sheds light on earlier applications of log-linear models for frequencies in contingency tables with ordered categories.

12.
Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well established if the set of independent variables to consider is fixed and small; then we can assume that effect estimates are unbiased and that the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10–30 candidate variables, a number frequently too large to be fully accommodated in a statistical model. We provide an overview of the available variable selection methods, which are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of the linear regression model and then transferred to more general models such as generalized linear models or models for censored survival data. Variable selection, in particular when used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. Therefore, we give pragmatic recommendations for the practicing statistician on applying variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities, based on resampling the entire variable selection process, that should routinely be reported by software packages offering automated variable selection algorithms.
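A hedged sketch of the kind of resampling-based report suggested here: rerun the entire selection procedure on bootstrap resamples and report per-variable inclusion frequencies. The p-value-based backward elimination used below is an illustrative stand-in for whatever selection method is actually applied.

```python
# Sketch: bootstrap inclusion frequencies as a variable-selection stability report.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_eliminate(df, outcome, candidates, alpha=0.05):
    """Simple p-value-based backward elimination (illustrative selection rule)."""
    kept = list(candidates)
    while kept:
        fit = sm.OLS(df[outcome], sm.add_constant(df[kept])).fit()
        worst = fit.pvalues[kept].idxmax()
        if fit.pvalues[worst] <= alpha:
            break
        kept.remove(worst)
    return kept

rng = np.random.default_rng(7)
n, cand = 200, [f"x{j}" for j in range(8)]
df = pd.DataFrame(rng.normal(size=(n, 8)), columns=cand)
df["y"] = 1.0 * df["x0"] + 0.5 * df["x1"] + rng.normal(size=n)

freq = {v: 0 for v in cand}
B = 200
for _ in range(B):                          # rerun the *entire* selection per resample
    boot = df.sample(n=n, replace=True, random_state=rng)
    for v in backward_eliminate(boot, "y", cand):
        freq[v] += 1

for v in cand:                              # report bootstrap inclusion frequencies
    print(f"{v}: selected in {100 * freq[v] / B:.0f}% of resamples")
```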

13.
Many approaches for variable selection with multiply imputed (MI) data in the development of a prognostic model have been proposed, but no method prevails as uniformly best. We conducted a simulation study with a binary outcome and a logistic regression model to compare two classes of variable selection methods in the presence of MI data: (I) model selection on bootstrap data, using backward elimination based on AIC or the lasso, fitting the final model on the most frequently selected variables over all MI and bootstrap data sets; and (II) model selection on the original MI data, using the lasso. In class II, the final model is obtained by (i) averaging estimates of variables that were selected in any MI data set or (ii) in at least 50% of the MI data sets, (iii) by performing the lasso on the stacked MI data, or (iv) as in (iii) but using individual weights determined by the fraction of missingness. In all lasso models, we used both the optimal penalty and the 1-se rule. We also considered recalibrating models to correct for overshrinkage due to a suboptimal penalty, by refitting the linear predictor or all individual variables. We applied the methods to a real dataset of 951 adult patients with tuberculous meningitis to predict mortality within nine months. Overall, applying lasso selection with the 1-se penalty shows the best performance, in both approach I and approach II. Stacking the MI data is an attractive approach because it does not require choosing a selection threshold when combining results from separate MI data sets.
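A hedged sketch of the stacked-MI strategies (iii)/(iv), assuming scikit-learn: stack the completed data sets, weight each row (here by (1 - fraction missing)/M, one common choice, not necessarily the paper's exact rule), and fit a single L1-penalized logistic model. The imputation step, penalty tuning, and 1-se rule are not shown.

```python
# Sketch: lasso on stacked multiply-imputed (MI) data with row weights; the
# placeholder arrays stand in for M completed data sets from a real imputation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
M, n, p = 5, 300, 10
frac_missing = rng.uniform(0.0, 0.4, size=n)     # per-subject fraction of missing values

stacked_X, stacked_y, weights = [], [], []
for m in range(M):                               # pretend these are the M completed data sets
    Xm = rng.normal(size=(n, p))                 # placeholder for the m-th imputed X
    ym = (rng.uniform(size=n) < 1 / (1 + np.exp(-Xm[:, 0]))).astype(int)
    stacked_X.append(Xm)
    stacked_y.append(ym)
    weights.append((1.0 - frac_missing) / M)     # down-weight heavily imputed subjects

X = np.vstack(stacked_X)
y = np.concatenate(stacked_y)
w = np.concatenate(weights)

# One L1-penalized fit on the stacked, weighted data; C would be tuned in practice.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.2)
lasso_logit.fit(X, y, sample_weight=w)
print(np.round(lasso_logit.coef_.ravel(), 2))
```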

14.
Algorithmic details for obtaining maximum likelihood estimates of parameters on a large phylogeny are discussed. On a large tree, an efficient approach is to optimize branch lengths one at a time while simultaneously updating the parameters of the substitution model. Codon substitution models that allow for variable nonsynonymous/synonymous rate ratios (ω = dN/dS) among sites are used to analyze a data set of 349 human influenza virus type A hemagglutinin (HA) gene sequences. Methods for obtaining approximate estimates of branch lengths for codon models are explored, and the estimates are used to test for positive selection and to identify sites under selection. Compared with results obtained from the exact method, which estimates all parameters by maximum likelihood, the approximate methods produced reliable results. The analysis identified a number of sites in the viral gene under diversifying Darwinian selection and demonstrated the importance of including many sequences in the data when detecting positive selection at individual sites.

15.
A heuristic three-step procedure for analyzing multidimensional contingency tables is given to meet the requirements of a mixed analysis of both hypothesis-guided and data-guided type. The first step provides the structure of relationships among the attributes by fitting an appropriate unsaturated log-linear model to the data of the given contingency table. Restriction to elementary hierarchical models allows these to be obtained by combining pairs of conditional independence. The result of the first step may be regarded as a certain validation of prior model ideas. In the second step, the significant pairs of conditional dependence are analyzed with regard to the levels of the condition complex. In general, only those significant pairs are considered for which the condition complex does not include the response variable. The third step may test special sub-hypotheses within the significant two-dimensional tables found in step two, or may extend the general statements by partitioning the corresponding test statistics into additive components. Application examples demonstrate the general line of action.

16.
The case-cohort study involves two-phase samplings: simple random sampling from an infinite superpopulation at phase one and stratified random sampling from a finite cohort at phase two. Standard analyses of case-cohort data involve solution of inverse probability weighted (IPW) estimating equations, with weights determined by the known phase two sampling fractions. The variance of parameter estimates in (semi)parametric models, including the Cox model, is the sum of two terms: (i) the model-based variance of the usual estimates that would be calculated if full data were available for the entire cohort; and (ii) the design-based variance from IPW estimation of the unknown cohort total of the efficient influence function (IF) contributions. This second variance component may be reduced by adjusting the sampling weights, either by calibration to known cohort totals of auxiliary variables correlated with the IF contributions or by their estimation using these same auxiliary variables. Both adjustment methods are implemented in the R survey package. We derive the limit laws of coefficients estimated using adjusted weights. The asymptotic results suggest practical methods for construction of auxiliary variables that are evaluated by simulation of case-cohort samples from the National Wilms Tumor Study and by log-linear modeling of case-cohort data from the Atherosclerosis Risk in Communities Study. Although not semiparametric efficient, estimators based on adjusted weights may come close to achieving full efficiency within the class of augmented IPW estimators.
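A toy sketch of the simplest weight adjustment mentioned here: rescale the phase-two design weights so that the weighted total of an auxiliary variable matches its known cohort total (a one-variable ratio calibration). General calibration, as implemented in the R survey package, handles multiple auxiliaries and other distance functions; this Python illustration only conveys the idea.

```python
# Sketch: one-dimensional ratio calibration of inverse-probability weights so
# that the weighted sample reproduces a known cohort total of an auxiliary variable.
import numpy as np

rng = np.random.default_rng(11)
N = 5000
aux_cohort = rng.gamma(shape=2.0, scale=1.5, size=N)   # auxiliary known for the full cohort
cohort_total = aux_cohort.sum()

# Phase two: stratified subsample with known sampling fractions -> design weights.
pi = np.where(aux_cohort > np.median(aux_cohort), 0.25, 0.10)
sampled = rng.uniform(size=N) < pi
w_design = 1.0 / pi[sampled]
aux_sample = aux_cohort[sampled]

# Calibration: scale weights so the weighted auxiliary total matches the cohort total.
g = cohort_total / np.sum(w_design * aux_sample)
w_calibrated = w_design * g

print(f"design-weighted aux total:     {np.sum(w_design * aux_sample):.0f}")
print(f"calibrated aux total (target): {np.sum(w_calibrated * aux_sample):.0f} vs {cohort_total:.0f}")
```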

17.
The Singing-Ground Survey (SGS) is a primary source of information on population change for American woodcock (Scolopax minor). We analyzed the SGS using a hierarchical log-linear model and compared the estimates of change and annual indices of abundance to a route regression analysis of SGS data. We also grouped SGS routes into Bird Conservation Regions (BCRs) and estimated population change and annual indices using BCRs within states and provinces as strata. Based on the hierarchical model–based estimates, we concluded that woodcock populations were declining in North America between 1968 and 2006 (trend = −0.9%/yr, 95% credible interval: −1.2, −0.5). Singing-Ground Survey results are generally similar between analytical approaches, but the hierarchical model has several important advantages over the route regression. Hierarchical models better accommodate changes in survey efficiency over time and space by treating strata, years, and observers as random effects in the context of a log-linear model, providing trend estimates that are derived directly from the annual indices. We also conducted a hierarchical model analysis of woodcock data from the Christmas Bird Count and the North American Breeding Bird Survey. All surveys showed general consistency in patterns of population change, but the SGS had the shortest credible intervals. We suggest that population management and conservation planning for woodcock involving interpretation of the SGS use estimates provided by the hierarchical model.

18.
The simultaneous analysis of multiple genomic loci is a powerful approach to studying the effects of population history and natural selection on patterns of genetic variation of a species. By surveying nucleotide sequence polymorphism at 334 randomly distributed genomic regions in 12 accessions of Arabidopsis thaliana, we examined whether a standard neutral model of nucleotide sequence polymorphism is consistent with the observed data. The average nucleotide diversity was 0.0071 for total sites and 0.0083 for silent sites. Although levels of diversity are variable among loci, no correlation with local recombination rate was observed, but polymorphism levels were correlated for physically linked loci (<250 kb). We found that the observed distributions of Tajima's D and D/Dmin statistics and of Fu and Li's D, D*, F, and F* statistics differed significantly from the distributions expected under a standard neutral model, owing to an excess of rare polymorphisms and high variances. Observed and expected distributions of Fay and Wu's H did not differ, suggesting that demographic processes, and not selection at multiple loci, are responsible for the deviation from a neutral model. Maximum-likelihood comparisons of alternative demographic models such as logistic population growth, glacial refugia, or past bottlenecks did not produce parameter estimates that were more consistent with the observed patterns. However, exclusion of highly polymorphic "outlier loci" resulted in a fit to the logistic growth model. Various tests of neutrality revealed a set of candidate loci that may evolve under selection.
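A sketch of the core computation behind one of the neutrality tests mentioned, Tajima's D, for a toy alignment; the constants follow Tajima's original definition and the sequences are invented. A markedly negative D corresponds to the excess of rare polymorphisms reported in the abstract.

```python
# Sketch: Tajima's D from a small alignment (Tajima 1989 normalizing constants).
import itertools

def tajimas_d(seqs):
    n = len(seqs)
    # segregating sites and mean pairwise differences
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    pairs = list(itertools.combinations(seqs, 2))
    pi = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs) / len(pairs)
    # normalizing constants
    a1 = sum(1.0 / i for i in range(1, n))
    a2 = sum(1.0 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n ** 2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1 = c1 / a1
    e2 = c2 / (a1 ** 2 + a2)
    return (pi - S / a1) / (e1 * S + e2 * S * (S - 1)) ** 0.5

alignment = [
    "ATGCATGCAT",
    "ATGCATGCAT",
    "ATGCATGAAT",
    "ATGCTTGCAT",
    "ATGCATGCAT",
]
print(round(tajimas_d(alignment), 3))
```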

19.
In population-based case-control studies, it is of great public-health importance to estimate the disease incidence rates associated with different levels of risk factors. This estimation is complicated by the fact that in such studies the selection probabilities for cases and controls are unequal. A further complication arises when subjects who are selected into the study do not participate (i.e., become nonrespondents) and nonrespondents differ systematically from respondents. In this paper, we show how to account for unequal selection probabilities as well as differential nonresponse in the incidence estimation. We use two logistic models: one relating the disease incidence rate to the risk factors, and one modelling the predictors that affect the nonresponse probability. After estimating the regression parameters in the nonresponse model, we estimate the regression parameters in the disease incidence model by a weighted estimating function that weights a respondent's contribution to the likelihood score function by the inverse of the product of his/her selection probability and his/her model-predicted response probability. The resulting estimators of the regression parameters and the corresponding estimators of the incidence rates are shown to be consistent and asymptotically normal, with easily estimated variances. Simulation results demonstrate that the asymptotic approximations are adequate for practical use and that failure to adjust for nonresponse could result in severe biases. An illustration with data from a cardiovascular study that motivated this work is presented.
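A hedged sketch of the two-model weighting scheme on simulated data, assuming statsmodels: fit a logistic nonresponse model among selected subjects, then fit the disease model among respondents with weight 1 / (selection probability × predicted response probability). The sandwich variance estimation needed for valid inference is omitted.

```python
# Sketch: inverse-probability weighting for unequal selection and nonresponse;
# simulated data, point estimation only (no sandwich variances).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
N = 4000
x = rng.normal(size=N)                                   # risk factor
p_disease = 1 / (1 + np.exp(-(-2.0 + 0.8 * x)))
d = rng.uniform(size=N) < p_disease                      # disease indicator

# Unequal, known selection probabilities (cases oversampled).
p_select = np.where(d, 0.9, 0.2)
selected = rng.uniform(size=N) < p_select

# Response depends on a predictor z (here correlated with x).
z = x + rng.normal(size=N)
p_respond = 1 / (1 + np.exp(-(1.0 - 0.7 * z)))
responded = (rng.uniform(size=N) < p_respond) & selected

# Step 1: logistic nonresponse model among the selected, using z.
resp_fit = sm.GLM(responded[selected].astype(float), sm.add_constant(z[selected]),
                  family=sm.families.Binomial()).fit()
pr_hat = resp_fit.predict(sm.add_constant(z[responded]))

# Step 2: weighted disease model among respondents.
w = 1.0 / (p_select[responded] * pr_hat)
disease_fit = sm.GLM(d[responded].astype(float), sm.add_constant(x[responded]),
                     family=sm.families.Binomial(), freq_weights=w).fit()
print(disease_fit.params)                                # compare with the true slope 0.8
```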

20.
Roy J, Lin X. Biometrics 2000;56(4):1047–1054.
Multiple outcomes are often used to properly characterize an effect of interest. This paper proposes a latent variable model for the situation where repeated measures over time are obtained on each outcome. These outcomes are assumed to measure an underlying quantity of main interest from different perspectives. We relate the observed outcomes using regression models to a latent variable, which is then modeled as a function of covariates by a separate regression model. Random effects are used to model the correlation due to repeated measures of the observed outcomes and the latent variable. An EM algorithm is developed to obtain maximum likelihood estimates of model parameters. Unit-specific predictions of the latent variables are also calculated. This method is illustrated using data from a national panel study on changes in methadone treatment practices.
