Similar Documents
Retrieved 20 similar documents (search time: 31 ms)
1.
Random regression models are widely used in animal breeding for the genetic evaluation of daily milk yields from different test days. These models can handle different environmental effects on each test day, and they describe the course of the lactation period using suitable covariates with fixed and random regression coefficients. Since the numerically expensive parameter estimation is already built into advanced statistical software, modifications of random regression models will grow considerably in importance for the statistical evaluation of nutrition and behaviour experiments with animals. Random regression models belong to the large class of linear mixed models; thus, when choosing a model, or more precisely, when selecting a suitable covariance structure for the random effects, the information criteria of Akaike and Schwarz can be used. In this study, the fitting of random regression models for the statistical analysis of a feeding experiment with dairy cows is illustrated using the program package SAS. For each feeding group, lactation curves modelled by covariates with fixed regression coefficients are estimated simultaneously. With the help of the fixed regression coefficients, differences between the groups are estimated and then tested for significance. The covariance structure of the random, subject-specific effects and the serial correlation matrix are selected using information criteria and estimated correlations between repeated measurements. For verification of the selected model against the alternative models, means and standard deviations estimated from ordinary least squares residuals are used.
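The Akaike and Schwarz criteria mentioned above reduce covariance-structure selection to comparing penalized log-likelihoods. A minimal Python sketch of that comparison (the abstract's own analysis uses SAS); the two candidate structures and their log-likelihoods are hypothetical numbers chosen for illustration:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Schwarz (Bayesian) information criterion: smaller is better."""
    return -2.0 * loglik + k * math.log(n)

# Hypothetical fits of two candidate covariance structures to the
# same repeated-measures data (n = 1200 test-day records).
models = {
    "compound symmetry": {"loglik": -3421.7, "k": 2},
    "unstructured":      {"loglik": -3415.2, "k": 6},
}
n = 1200
for name, m in models.items():
    print(f"{name}: AIC={aic(m['loglik'], m['k']):.1f}, "
          f"BIC={bic(m['loglik'], m['k'], n):.1f}")
```

With these illustrative numbers, AIC favours the richer structure while BIC, whose penalty grows with the sample size, favours the simpler one; disagreement of this kind is common in practice.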

2.
A sensory threshold can be defined generally as the stimulus intensity that produces a response in half of the trials. The definition of the population threshold is discussed. Five main classical statistical procedures for estimating thresholds are reviewed: the probit, logistic, Spearman-Karber, moving-average and up-and-down procedures. Some new developments in statistical methods for estimating thresholds are outlined, including the generalized probit and logistic models, a model based on the Beta-Binomial distribution, the trimmed Spearman-Karber method, the kernel method and the sigmoidally constrained maximum likelihood estimation method. The authors propose a new procedure based on the Beta-Binomial distribution for estimating the population threshold.
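Of the classical procedures listed, the Spearman-Karber estimator is simple enough to show in a few lines: it averages midpoints of adjacent stimulus intensities weighted by the increase in response proportion. A sketch with hypothetical dose-response data, assuming proportions that rise monotonically from 0 to 1:

```python
def spearman_karber(doses, p):
    """Spearman-Karber estimate of the threshold (e.g. the ED50) from
    stimulus intensities `doses` (ascending) and response proportions
    `p`, assumed to rise monotonically from 0 to 1."""
    assert p[0] == 0.0 and p[-1] == 1.0, "proportions must span 0..1"
    t = 0.0
    for i in range(len(doses) - 1):
        # midpoint of the interval, weighted by the jump in response
        t += (p[i + 1] - p[i]) * (doses[i] + doses[i + 1]) / 2.0
    return t

# Hypothetical trial: five intensities, proportion responding at each.
doses = [1.0, 2.0, 3.0, 4.0, 5.0]
p = [0.0, 0.25, 0.50, 0.75, 1.0]
print(spearman_karber(doses, p))  # 3.0 by symmetry
```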

4.
Propensity-score matching is frequently used in the medical literature to reduce or eliminate the effect of treatment selection bias when estimating the effect of treatments or exposures on outcomes using observational data. In propensity-score matching, pairs of treated and untreated subjects with similar propensity scores are formed. Recent systematic reviews of the use of propensity-score matching found that the large majority of researchers ignore the matched nature of the propensity-score matched sample when estimating the statistical significance of the treatment effect. We conducted a series of Monte Carlo simulations to examine the impact of ignoring the matched nature of the propensity-score matched sample on type I error rates, coverage of confidence intervals, and variance estimation of the treatment effect. We examined estimating differences in means, relative risks, odds ratios, rate ratios from Poisson models, and hazard ratios from Cox regression models. We demonstrated that accounting for the matched nature of the propensity-score matched sample tended to result in type I error rates that were closer to the nominal level compared to when matching was not incorporated into the analyses. Similarly, accounting for the matched nature of the sample tended to result in confidence intervals with coverage rates that were closer to the nominal level, compared to when matching was not taken into account. Finally, accounting for the matched nature of the sample resulted in estimates of standard error that more closely reflected the sampling variability of the treatment effect compared to when matching was not taken into account.
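The matched analysis the simulations favour can be sketched as greedy 1:1 nearest-neighbour matching on precomputed propensity scores, followed by a paired variance estimate of the mean difference. All scores and outcomes below are hypothetical, and the greedy matcher is only one of several matching algorithms in use, not necessarily the one from the article:

```python
import math

# Hypothetical precomputed propensity scores and outcomes; the point is
# the paired analysis after matching, not the score model itself.
treated   = [(0.31, 5.2), (0.47, 6.1), (0.62, 7.0), (0.78, 8.3)]  # (score, outcome)
untreated = [(0.29, 4.8), (0.45, 5.0), (0.50, 5.9), (0.64, 6.2), (0.80, 7.1)]

def greedy_match(treated, untreated, caliper=0.1):
    """Greedy 1:1 nearest-neighbour matching on the propensity score."""
    pool = list(untreated)
    pairs = []
    for ps, y in treated:
        best = min(pool, key=lambda c: abs(c[0] - ps))
        if abs(best[0] - ps) <= caliper:
            pairs.append((y, best[1]))
            pool.remove(best)   # matching without replacement
    return pairs

pairs = greedy_match(treated, untreated)
diffs = [yt - yc for yt, yc in pairs]
n = len(diffs)
mean_d = sum(diffs) / n
# Paired variance estimator: respects the matched design instead of
# treating the two groups as independent samples.
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
se_paired = math.sqrt(var_d / n)
print(n, mean_d, se_paired)
```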

5.
It is shown that maximum likelihood estimation of variance components from twin data can be parameterized in the framework of linear mixed models. Standard statistical packages can be used to analyze univariate or multivariate data for simple models such as the ACE and CE models. Furthermore, specialized variance component estimation software that can handle pedigree data and user-defined covariance structures can be used to analyze multivariate data for simple and complex models, including those where dominance and/or QTL effects are fitted. The linear mixed model framework is particularly useful for analyzing multiple traits in extended (twin) families with a large number of random effects.
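As a back-of-envelope analogue of the ML variance-component estimation described above (Falconer's classical formulas, not the mixed-model machinery of the abstract), the ACE shares can be read directly off MZ and DZ twin correlations; the correlations below are hypothetical:

```python
def falconer_ace(r_mz, r_dz):
    """Falconer's formulas: ACE variance shares from MZ and DZ twin
    correlations of a standardized trait. A quick approximation, not
    the maximum likelihood mixed-model fit."""
    a2 = 2.0 * (r_mz - r_dz)   # additive genetic share
    c2 = 2.0 * r_dz - r_mz     # shared-environment share
    e2 = 1.0 - r_mz            # unique-environment share
    return a2, c2, e2

a2, c2, e2 = falconer_ace(r_mz=0.8, r_dz=0.5)
print(a2, c2, e2)  # the three shares sum to 1 by construction
```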

6.
A class of generalized linear mixed models can be obtained by introducing random effects into the linear predictor of a generalized linear model, e.g. a split-plot model for binary data or count data. Maximum likelihood estimation, for normally distributed random effects, involves high-dimensional numerical integration, with severe limitations on the number and structure of the additional random effects. An alternative estimation procedure, based on an extension of the iterative re-weighted least squares procedure for generalized linear models, is illustrated on a practical data set involving carcass classification of cattle. The data are analysed as overdispersed binomial proportions with fixed and random effects and associated components of variance on the logit scale. Estimates are obtained with standard software for normal-data mixed models. Numerical restrictions pertain to the size of matrices to be inverted. This can be dealt with by absorption techniques familiar from, e.g., mixed models in animal breeding. The final model fitted to the classification data includes four components of variance and a multiplicative overdispersion factor. Basically, the estimation procedure is a combination of iterated least squares procedures, and no full distributional assumptions are needed. A simulation study based on the classification data is presented, including a study of procedures for constructing confidence intervals and significance tests for fixed effects and components of variance. The simulation results increase confidence in the usefulness of the estimation procedure.
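The multiplicative overdispersion factor in the final model has a simple moment estimator: the Pearson chi-square statistic divided by its degrees of freedom. A sketch under a common-proportion null with hypothetical pen-level counts (the full iterative re-weighted least squares mixed-model fit is not reproduced here):

```python
# Hypothetical carcass-classification counts: successes y out of n per pen.
y = [12, 18, 5, 22, 9, 16]
n = [30, 30, 30, 30, 30, 30]

p_hat = sum(y) / sum(n)                      # pooled proportion
df = len(y) - 1                              # residual degrees of freedom
pearson = sum((yi - ni * p_hat) ** 2 / (ni * p_hat * (1 - p_hat))
              for yi, ni in zip(y, n))
phi = pearson / df                           # multiplicative overdispersion factor
print(p_hat, phi)                            # phi well above 1 signals overdispersion
```

A quasi-binomial analysis would then inflate the binomial variance (and hence the standard errors) by this factor.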

7.
Capture–recapture models for estimating demographic parameters allow covariates to be incorporated to better understand population dynamics. However, high dimensionality and multicollinearity can hamper estimation and inference. Principal component analysis is incorporated within capture–recapture models and used to reduce the predictors to a smaller number of uncorrelated synthetic variables. Principal components are selected by sequentially assessing their statistical significance. We provide an example on seabird survival to illustrate our approach. Our method requires only standard statistical tools, which permits an efficient and easy implementation using standard software.
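For two covariates, the principal components can be obtained in closed form from the 2x2 covariance matrix, which is enough to illustrate how strongly correlated predictors collapse onto one synthetic variable. The covariate values below are hypothetical:

```python
import math

# Two hypothetical, strongly correlated environmental covariates
# measured on 6 occasions (e.g. candidate predictors of survival).
x1 = [2.0, 4.0, 6.0, 8.0, 10.0, 12.0]
x2 = [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]

def centre(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

a, b = centre(x1), centre(x2)
m = len(a) - 1
s11 = sum(ai * ai for ai in a) / m           # sample variances and covariance
s22 = sum(bi * bi for bi in b) / m
s12 = sum(ai * bi for ai, bi in zip(a, b)) / m

# Closed-form eigenvalues of the 2x2 covariance matrix: these are the
# variances of the two principal components.
tr, det = s11 + s22, s11 * s22 - s12 ** 2
disc = math.sqrt(tr * tr / 4.0 - det)
lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
print("variance explained by PC1:", lam1 / (lam1 + lam2))
```

Here almost all of the joint variation loads on the first component, so a single synthetic covariate could replace both predictors in the survival model.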

8.
Lin X, Carroll RJ. Biometrics 1999;55(2):613-619.
In the analysis of clustered data with covariates measured with error, a problem of common interest is to test for correlation within clusters and heterogeneity across clusters. We examined this problem in the framework of generalized linear mixed measurement error models. We propose using the simulation extrapolation (SIMEX) method to construct a score test for the null hypothesis that all variance components are zero. A key feature of this SIMEX score test is that no assumptions need to be made regarding the distributions of the random effects and the unobserved covariates. We illustrate this test by analyzing Framingham heart disease data and evaluate its performance by simulation. We also propose individual SIMEX score tests for testing the variance components separately. Both tests can be easily implemented using existing statistical software.

9.
10.
Analysis of variance and principal components methods have been suggested for estimating repeatability. In this study, six estimation procedures are compared: ANOVA, principal components based on the sample covariance matrix and on the sample correlation matrix, a related multivariate method (structural analysis) based on the sample covariance matrix and on the sample correlation matrix, and maximum likelihood estimation. A simulation study indicates that when the standard linear model assumptions are met, the estimators are quite similar except when the repeatability is small. Overall, maximum likelihood appears to be the preferred method. If the assumption of equal variances is relaxed, the methods based on the sample correlation matrix perform better, although the others are surprisingly robust. The structural analysis method (with the sample correlation matrix) appears to be best. (Paper number 776 from the Department of Meat and Animal Science, University of Wisconsin-Madison.)
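The ANOVA estimator of repeatability compared in the study follows directly from the one-way mean squares: r = (MSB - MSW) / (MSB + (k - 1) MSW) for k measurements per subject. A sketch with hypothetical repeated measurements:

```python
# One-way ANOVA repeatability (intraclass correlation) for k repeated
# measurements on each of several animals; the data are hypothetical.
records = {
    "cow1": [24.0, 25.1, 24.6],
    "cow2": [30.2, 29.5, 30.8],
    "cow3": [22.1, 21.4, 22.9],
    "cow4": [27.7, 28.3, 27.1],
}

k = 3                                  # measurements per animal
a = len(records)                       # number of animals
grand = sum(sum(v) for v in records.values()) / (a * k)

ss_between = k * sum((sum(v) / k - grand) ** 2 for v in records.values())
ss_within = sum((x - sum(v) / k) ** 2 for v in records.values() for x in v)
ms_between = ss_between / (a - 1)
ms_within = ss_within / (a * (k - 1))

# ANOVA (method-of-moments) estimator of repeatability.
r = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(r)
```

With between-animal differences this large relative to measurement noise, the estimate is close to 1; the simulation findings above concern how estimators of this quantity diverge when repeatability is small or variances are unequal.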

11.
Statistical analysis of diving behavior data collected from satellite-linked dive recorders (SDRs) can be challenging because: (1) the data are binned into several depth and time categories, (2) the data from individual animals are often temporally autocorrelated, (3) random variation between individuals is common, and (4) the number of dives can be correlated among depth bins. Previous analyses often have ignored one or more of these statistical issues. In addition, previous SDR studies have focused on univariate analyses of index variables, rather than multivariate analyses of data from all depth bins. We describe multivariate analysis of SDR data using generalized estimating equations (GEE) and demonstrate the method using SDR data from harbor seals (Phoca vitulina) monitored in Prince William Sound, Alaska, between 1992 and 1997. Multivariate regression provides greater opportunities for scientific inference than univariate methods, particularly in terms of depth resolution. In addition, empirical variance estimation makes GEE models somewhat easier to implement than other techniques that explicitly model all of the relevant components of variance. However, valid use of empirical variance estimation requires an adequate sample size of individual animals.
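The empirical variance estimation that makes GEE attractive can be illustrated in its simplest form: a cluster-robust variance for a mean, which sums residuals within each animal before squaring, so within-animal correlation inflates the estimate appropriately. The dive counts below are hypothetical:

```python
# Hypothetical dive counts, grouped by animal (the cluster).
clusters = {
    "seal1": [4, 6, 5, 7],
    "seal2": [10, 12, 11],
    "seal3": [3, 2, 4, 3, 3],
}

obs = [x for v in clusters.values() for x in v]
N = len(obs)
mu = sum(obs) / N

# Naive variance of the mean: pretends all observations are independent.
naive = sum((x - mu) ** 2 for x in obs) / (N - 1) / N

# Empirical (cluster-robust, "sandwich"-style) variance: sums residuals
# within each animal first, acknowledging the repeated measurements.
robust = sum(sum(x - mu for x in v) ** 2 for v in clusters.values()) / N ** 2

print(mu, naive, robust)
```

The robust estimate is several times the naive one here, which is exactly the situation the abstract warns about; it also shows why an adequate number of animals (clusters) is needed, since the robust estimator averages over clusters, not observations.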

12.
Methods for the quantitative estimation of the small coccoid and flagellate Chlorophyta in natural algal communities are outlined. Experiments to demonstrate the precision of these methods are discussed and the components of variance are calculated for several experimental procedures, both for phytoplanktonic and benthic associations. The smallest overall variance for phytoplankton estimations is 27% where a large number of samples were collected and bulked to reduce the sampling error. The precision in counting epipelic algae by this method is only 12%. This result suggests that the inclusion of a short period of ultrasonication may reduce the overall variance in estimation by randomising the distribution of the algae prior to plate inoculation. Close agreement exists between direct counts and plate counts.

It is realised that the cultural approach has limitations, but comparison with results obtained from other methods indicates the advantages of cultural techniques, particularly when estimating small or motile algae.

13.
Horton NJ, Laird NM. Biometrics 2001;57(1):34-42.
This article presents a new method for maximum likelihood estimation of logistic regression models with incomplete covariate data when auxiliary information is available. This auxiliary information is extraneous to the regression model of interest but predictive of the covariate with missing data. Ibrahim (1990, Journal of the American Statistical Association 85, 765-769) provides a general method, easily implemented when there are no auxiliary data, for estimating generalized linear regression models with missing covariates using the EM algorithm. Vach (1997, Statistics in Medicine 16, 57-72) describes how that method can be extended when the outcome and auxiliary data are conditionally independent given the covariates in the model. Our method allows the incorporation of auxiliary data without making the conditional independence assumption. We suggest tests of conditional independence and compare the performance of several estimators in an example concerning mental health service utilization in children. Using an artificial dataset, we further compare the performance of the estimators when auxiliary data are available.

14.
MIXED MODEL APPROACHES FOR ESTIMATING GENETIC VARIANCES AND COVARIANCES (cited 62 times: 4 self-citations, 58 by others)
The limitations of analysis of variance (ANOVA) methods for estimating genetic variances are discussed. Among the three methods for mixed linear models (maximum likelihood, ML; restricted maximum likelihood, REML; and minimum norm quadratic unbiased estimation, MINQUE), the MINQUE method is presented with formulae for estimating variance and covariance components and for predicting genetic effects. Several genetic models that cannot be appropriately analyzed by ANOVA methods are introduced in the form of mixed linear models. Genetic models with independent random effects can be analyzed by the MINQUE(1) method, which is a MINQUE method with all prior values set to 1. The MINQUE(1) method gives unbiased estimates of variance and covariance components, and linear unbiased prediction (LUP) of genetic effects. There are more complicated genetic models for plant seeds that involve correlated random effects. The MINQUE(0/1) method, a MINQUE method with all prior covariances set to 0 and all prior variances set to 1, is suitable for estimating variance and covariance components in these models. Mixed model approaches have an advantage over ANOVA methods in their capacity to analyze unbalanced data and complicated models. Some problems of estimation and hypothesis testing with the MINQUE method are discussed.

15.
This paper considers inference methods for case-control logistic regression in longitudinal setups. The motivation is provided by an analysis of plains bison spatial location as a function of habitat heterogeneity. The sampling is done according to a longitudinal matched case-control design in which, at certain time points, exactly one case, the actual location of an animal, is matched to a number of controls, the alternative locations that could have been reached. We develop inference methods for the conditional logistic regression model in this setup, which can be formulated within a generalized estimating equation (GEE) framework. This permits the use of statistical techniques developed for GEE-based inference, such as robust variance estimators and model selection criteria adapted for non-independent data. The performance of the methods is investigated in a simulation study and illustrated with the bison data analysis.

16.
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of proposed methods through simulations and reanalyze two multicenter randomized trials.
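The RMST target parameter itself (though not the weighted GEE regression of the article) is just the area under the Kaplan-Meier curve up to the restriction time tau. A self-contained sketch with hypothetical follow-up times:

```python
def km_rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier curve
    up to tau. times: follow-up times; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_risk = len(data)
    s, rmst, prev = 1.0, 0.0, 0.0
    i = 0
    while i < len(data) and data[i][0] < tau:
        t = data[i][0]
        d = sum(1 for tt, ee in data if tt == t and ee == 1)  # events at t
        m = sum(1 for tt, ee in data if tt == t)              # leaving risk set at t
        rmst += s * (t - prev)        # survival curve is flat between times
        s *= 1.0 - d / n_risk         # Kaplan-Meier step (censoring: d = 0)
        n_risk -= m
        prev = t
        i += m
    rmst += s * (tau - prev)          # tail piece up to the restriction time
    return rmst

# Hypothetical sample: events at 2, 5, 8, 11; one censoring at 6.
print(km_rmst([2, 5, 6, 8, 11], [1, 1, 0, 1, 1], tau=10.0))
```

Because the RMST is defined without a proportional-hazards assumption, the difference or ratio of two such areas remains interpretable even when hazards cross.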

17.
18.

Background

Commonly when designing studies, researchers propose to measure several independent variables in a regression model, a subset of which are identified as the main variables of interest while the rest are retained in the model as covariates or confounders. Power for linear regression in this setting can be calculated using SAS PROC POWER. No comparable tool exists for estimating power for logistic regression models in the same setting.

Methods

An approach that calculates power for a single variable of interest in the presence of other covariates is currently in common use for logistic regression and works well for this special case. In this paper we propose three related algorithms, along with corresponding SAS macros, that extend power estimation to one or more primary variables of interest in the presence of confounders.

Results

The three proposed empirical algorithms employ a likelihood ratio test to provide the user with a power estimate for a given sample size, a quick sample size estimate for a given power, or an approximate power curve over a range of sample sizes. The user can specify odds ratios for a combination of binary, uniform and standard normal independent variables of interest and/or the remaining covariates/confounders in the model, along with correlations between variables.

Conclusions

These user-friendly algorithms and macro tools are a promising solution that fills the void in power estimation for logistic regression when multiple independent variables are of interest in the presence of additional covariates in the model.
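The article's SAS macros are not reproduced here, but the core of the first algorithm — simulate data, fit the full and null logistic models, and count likelihood-ratio rejections — can be sketched in pure Python, assuming a single standard-normal covariate of interest and no additional confounders:

```python
import math
import random

def fit_logistic(x, y, iters=25):
    """Newton-Raphson fit of logit P(y=1) = b0 + b1*x."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p                  # score vector
            g1 += (yi - p) * xi
            h00 += w                      # observed information
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det # solve the 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def loglik(x, y, b0, b1):
    ll = 0.0
    for xi, yi in zip(x, y):
        eta = b0 + b1 * xi
        ll += yi * eta - math.log1p(math.exp(eta))
    return ll

def lrt_power(n, beta0, beta1, sims=200, seed=7, crit=3.841):
    """Monte Carlo power of the LRT for H0: beta1 = 0 (chi-square, 1 df)."""
    rng = random.Random(seed)
    reject = 0
    for _ in range(sims):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]
        y = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(beta0 + beta1 * xi)))
             else 0 for xi in x]
        b0, b1 = fit_logistic(x, y)
        ll1 = loglik(x, y, b0, b1)
        p = min(max(sum(y) / n, 1e-12), 1.0 - 1e-12)
        ll0 = sum(y) * math.log(p) + (n - sum(y)) * math.log(1.0 - p)
        if 2.0 * (ll1 - ll0) > crit:
            reject += 1
    return reject / sims

power = lrt_power(n=100, beta0=0.0, beta1=0.5)
print(power)
```

Extending this to several variables of interest and correlated confounders is exactly what the proposed macros automate; the sketch only shows the simulation-plus-LRT skeleton.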

19.
Accurate estimates of population parameters are vital for estimating extinction risk. Such parameters, however, are typically not available for threatened populations. We used a recently developed software tool based on Markov chain Monte Carlo methods for carrying out Bayesian inference (the BUGS package) to estimate four demographic parameters: the intrinsic growth rate, the strength of density dependence, and the demographic and environmental variances. We did so for three species of small temperate passerines, using two sets of time series data from a dipper and a song sparrow population and previously obtained frequentist estimates of the same parameters in the great tit. By simultaneously modeling variation in these demographic parameters across species and using the resulting distributions as priors in the estimation for individual species, we improve the estimates for each species. This framework also allows us to make probabilistic statements about plausible parameter values for small temperate passerine birds in general, which is often critically needed in the management of species for which little or no data are available. We also discuss how our work relates to recently developed theory on dynamic stochastic population models, and finally note some important differences between frequentist and Bayesian methods.

20.
We assessed complementary log–log (CLL) regression as an alternative statistical model for estimating multivariable-adjusted prevalence ratios (PR) and their confidence intervals. Using the delta method, we derived an expression for approximating the variance of the PR estimated using CLL regression. Then, using simulated data, we examined the performance of CLL regression in terms of the accuracy of the PR estimates, the width of the confidence intervals, and the empirical coverage probability, and compared it with results obtained from log–binomial regression and stratified Mantel–Haenszel analysis. Within the range of values of our simulated data, CLL regression performed well, with only slight bias of point estimates of the PR and good confidence interval coverage. In addition, and importantly, the computational algorithm did not have the convergence problems occasionally exhibited by log–binomial regression. The technique is easy to implement in SAS (SAS Institute, Cary, NC), and it does not have the theoretical and practical issues associated with competing approaches. CLL regression is an alternative method of binomial regression that warrants further assessment.
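Computing the PR under the CLL link is direct: the inverse link is P = 1 - exp(-exp(eta)), and the PR compares fitted prevalences at two covariate values. The coefficients below are hypothetical, and the article's delta-method variance expression is not reproduced:

```python
import math

def cll_prevalence(eta):
    """Inverse complementary log-log link: P(Y=1) = 1 - exp(-exp(eta))."""
    return 1.0 - math.exp(-math.exp(eta))

# Hypothetical fitted coefficients: intercept and a binary exposure effect.
b0, b1 = -1.2, 0.5
p_exposed = cll_prevalence(b0 + b1)
p_unexposed = cll_prevalence(b0)
pr = p_exposed / p_unexposed   # adjusted prevalence ratio
print(p_unexposed, p_exposed, pr)
```

Unlike log-binomial regression, nothing here constrains the fitted probabilities to stay below 1 during estimation, which is why CLL fits tend to avoid the convergence failures the abstract mentions.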
