Similar Articles
20 similar articles found.
1.
Shoukri MM  Asyali MH  Walter SD 《Biometrics》2003,59(4):1107-1112
Reliability of continuous and dichotomous responses is usually assessed by means of the intraclass correlation coefficient (ICC). We derive the optimal allocation of the number of subjects k and the number of repeated measurements n that minimize the variance of the estimated ICC. Cost constraints are discussed for the case of normally distributed responses. Tables showing optimal choices of k and n are given, along with guidelines for the design of reliability studies in light of our results and those reported by others.
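For intuition, this design calculation can be sketched in a few lines, assuming the standard large-sample approximation Var(ρ̂) ≈ 2(1−ρ)²[1+(n−1)ρ]²/[kn(n−1)] for the one-way ANOVA ICC estimator and a hypothetical linear cost model (a per-subject cost plus a per-measurement cost); this is an illustration, not the authors' exact derivation:

```python
import numpy as np

def icc_var(rho, k, n):
    # Large-sample variance of the one-way ANOVA ICC estimator
    # (Fisher's approximation).
    return 2 * (1 - rho) ** 2 * (1 + (n - 1) * rho) ** 2 / (k * n * (n - 1))

def optimal_allocation(rho, budget, cost_subject, cost_measure):
    # Grid-search (k, n) minimising the ICC variance subject to the
    # hypothetical cost constraint k * (cost_subject + n * cost_measure) <= budget.
    best = None
    for n in range(2, 51):
        k = int(budget // (cost_subject + n * cost_measure))
        if k >= 2:
            v = icc_var(rho, k, n)
            if best is None or v < best[0]:
                best = (v, k, n)
    return best  # (variance, number of subjects k, measurements per subject n)

print(optimal_allocation(rho=0.6, budget=1000, cost_subject=10, cost_measure=2))
```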

2.
MOTIVATION: Time-course microarray experiments are designed to study biological processes in a temporal fashion. Longitudinal gene expression data arise when biological samples taken from the same subject at different time points are used to measure the gene expression levels. It has been observed that the gene expression patterns of samples of a given tumor measured at different time points are likely to be much more similar to each other than are the expression patterns of tumor samples of the same type taken from different subjects. In statistics, this phenomenon is called the within-subject correlation of repeated measurements on the same subject, and the resulting data are called longitudinal data. It is well known in other applications that valid statistical analyses have to appropriately take account of the possible within-subject correlation in longitudinal data. RESULTS: We apply estimating equation techniques to construct a robust statistic, which is a variant of the robust Wald statistic and accounts for the potential within-subject correlation of longitudinal gene expression data, to detect genes with temporal changes in expression. We associate significance levels to the proposed statistic by either incorporating the idea of the significance analysis of microarrays method or using the mixture model method to identify significant genes. The utility of the statistic is demonstrated by applying it to an important study of osteoblast lineage-specific differentiation. Using simulated data, we also show pitfalls in drawing statistical inference when the within-subject correlation in longitudinal gene expression data is ignored.
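As a concrete illustration of the estimating-equation idea (not the authors' exact robust Wald variant), the hypothetical sketch below fits a per-gene linear time trend under working independence and computes a subject-clustered sandwich variance, which keeps the Wald statistic valid when measurements within a subject are correlated:

```python
import numpy as np

def robust_wald_trend(y, t, subj):
    # One gene: fit y = b0 + b1 * t by working-independence estimating
    # equations (here, OLS), then use the subject-clustered sandwich
    # variance so the Wald statistic stays valid under within-subject
    # correlation of the longitudinal measurements.
    X = np.column_stack([np.ones_like(t, dtype=float), t])
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ (X.T @ y)
    resid = y - X @ beta
    meat = np.zeros((2, 2))
    for s in np.unique(subj):
        i = subj == s
        u = X[i].T @ resid[i]          # subject-level score contribution
        meat += np.outer(u, u)
    V = bread @ meat @ bread           # robust (sandwich) covariance
    return beta[1] ** 2 / V[1, 1]      # robust Wald statistic for the trend
```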

3.
We compare several nonparametric and parametric weighting methods for the adjustment of the effect of strata. In particular, we focus on the adjustment methods in the context of receiver‐operating characteristic (ROC) analysis. Nonparametrically, rank‐based van Elteren's test and inverse‐variance (IV) weighting using the area under the ROC curve (AUC) are examined. Parametrically, the stratified t‐test and the IV AUC weighted method are applied based on a binormal monotone transformation model. Stratum‐specific, pooled, and adjusted AUC estimates are obtained. We illustrate and compare these weighting methods on a multi‐center diagnostic trial and through extensive Monte‐Carlo simulations.
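The inverse-variance weighting step itself is elementary; a minimal sketch assuming stratum-specific AUC estimates and their variance estimates are already in hand (all values hypothetical):

```python
import numpy as np

def iv_pooled_auc(aucs, variances):
    # Inverse-variance weighting: w_i = 1 / var_i; the pooled AUC is the
    # weighted mean, with variance 1 / sum(w_i).
    w = 1.0 / np.asarray(variances)
    pooled = np.sum(w * np.asarray(aucs)) / np.sum(w)
    return pooled, 1.0 / np.sum(w)

# Three strata from a hypothetical multi-center trial:
print(iv_pooled_auc([0.82, 0.78, 0.85], [0.0012, 0.0020, 0.0008]))
```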

4.
We propose a likelihood-based model for correlated count data that display under- or overdispersion within units (e.g. subjects). The model is capable of handling correlation due to clustering and/or serial correlation, in the presence of unbalanced, missing or unequally spaced data. A family of distributions based on birth-event processes is used to model within-subject underdispersion. A computational approach is given to overcome a parameterization difficulty with this family, and this allows use of common Markov Chain Monte Carlo software (e.g. WinBUGS) for estimation. Application of the model to daily counts of asthma inhaler use by children shows substantial within-subject underdispersion, between-subject heterogeneity and correlation due to both clustering of measurements within subjects and serial correlation of longitudinal measurements. The model provides a major improvement over Poisson longitudinal models, and diagnostics show that the model fits well.

5.
In the Haseman-Elston approach the squared phenotypic difference is regressed on the proportion of alleles shared identical by descent (IBD) to map a quantitative trait to a genetic marker. In applications the IBD distribution is estimated and usually cannot be determined uniquely owing to incomplete marker information. At Genetic Analysis Workshop (GAW) 13, Jacobs et al. [BMC Genet 2003, 4(Suppl 1):S82] proposed to improve the power of the Haseman-Elston algorithm by weighting for information available from marker genotypes. The authors did not show, however, the validity of the employed asymptotic distribution. In this paper, we use the simulated data provided for GAW 14 and show that weighting Haseman-Elston by marker information results in increased type I error rates. Specifically, we demonstrate that the number of significant findings throughout the chromosome is significantly increased with weighting schemes. Furthermore, we show that the classical Haseman-Elston method keeps its nominal significance level when applied to the same data. We therefore recommend using Haseman-Elston with marker informativity weights only in conjunction with empirical p-values. Whether this approach in fact yields an increase in power needs to be investigated further.
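For reference, the classical Haseman-Elston regression discussed here is a one-line computation: regress the squared sib-pair trait difference on the estimated IBD proportion and test for a negative slope. A sketch with hypothetical input arrays:

```python
import numpy as np
from scipy import stats

def haseman_elston(x1, x2, pi_ibd):
    # Regress squared sib-pair trait differences on IBD sharing;
    # linkage is indicated by a significantly negative slope.
    res = stats.linregress(pi_ibd, (x1 - x2) ** 2)
    p_one_sided = res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2
    return res.slope, p_one_sided
```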

6.
Summary. For longitudinal data, mixed models include random subject effects to indicate how subjects influence their responses over repeated assessments. The error variance and the variance of the random effects are usually considered to be homogeneous. These variance terms characterize the within-subjects (i.e., error variance) and between-subjects (i.e., random-effects variance) variation in the data. In studies using ecological momentary assessment (EMA), up to 30 or 40 observations are often obtained for each subject, and interest frequently centers around changes in the variances, both within and between subjects. In this article, we focus on an adolescent smoking study using EMA where interest is on characterizing changes in mood variation. We describe how covariates can influence the mood variances, and also extend the standard mixed model by adding a subject-level random effect to the within-subject variance specification. This permits subjects to have influence on the mean, or location, and variability, or (square of the) scale, of their mood responses. Additionally, we allow the location and scale random effects to be correlated. These mixed-effects location scale models have useful applications in many research areas where interest centers on the joint modeling of the mean and variance structure.
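To make the model concrete, here is a small simulation sketch of the data-generating mechanism described above, with correlated subject-level location and log-scale random effects; all parameter values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_obs = 100, 30                      # e.g. ~30 EMA prompts per subject
# Correlated random effects: location (mean mood) and log-scale (error variance).
cov = np.array([[1.0, 0.3],
                [0.3, 0.25]])                # hypothetical covariance
effects = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)
mood = np.array([mu + np.exp(0.5 * ls) * rng.standard_normal(n_obs)
                 for mu, ls in effects])     # subject-specific mean and error SD
```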

7.
Haseman and Elston (H-E) proposed a robust test to detect linkage between a quantitative trait and a genetic marker. In their method the squared sib-pair trait difference is regressed on the estimated proportion of alleles at a locus shared identical by descent by sib pairs. This method has recently been improved by changing the dependent variable from the squared difference to the mean-corrected product of the sib-pair trait values, a significantly positive regression indicating linkage. Because situations arise in which the original test is more powerful, a further improvement of the H-E method occurs when the dependent variable is changed to a weighted average of the squared sib-pair trait difference and the squared sib-pair mean-corrected trait sum. Here we propose an optimal method of performing this weighting for larger sibships, allowing for the correlation between pairs within a sibship. The optimal weights are inversely proportional to the residual variances obtained from the two different regressions based on the squared sib-pair trait differences and the squared sib-pair mean-corrected trait sums, respectively, allowing for correlations among sib pairs. The proposed method is compared with the existing extension of the H-E approach for larger sibships. Control of the type I error probabilities for sibships of any size can be improved by using a generalized estimating equation approach and the robust sandwich estimate of the variance, or a Monte-Carlo permutation test.
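A simplified sketch of the inverse-residual-variance weighting, treating sib pairs as independent (the paper's adjustments for correlation among pairs within a sibship are omitted); since both regressions share the same design, the slope variances are proportional to the residual variances and the proportionality constant cancels in the weights:

```python
import numpy as np
from scipy import stats

def he_weighted(x1, x2, pi_ibd):
    # Two H-E regressions on the IBD sharing proportion: the squared
    # trait difference (slope < 0 under linkage) and the squared
    # mean-corrected trait sum (slope > 0 under linkage).
    mu = np.mean(np.concatenate([x1, x2]))
    diff = stats.linregress(pi_ibd, (x1 - x2) ** 2)
    summ = stats.linregress(pi_ibd, (x1 + x2 - 2 * mu) ** 2)
    # Inverse-variance weights via the slope standard errors.
    wd, ws = 1 / diff.stderr ** 2, 1 / summ.stderr ** 2
    return (wd * (-diff.slope) + ws * summ.slope) / (wd + ws)
```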

8.
M Palta  T J Yao 《Biometrics》1991,47(4):1355-1369
Confounding in longitudinal or clustered data creates special problems and opportunities because the relationship between the confounder and covariate of interest may differ across and within individuals or clusters. A well-known example of such confounding in longitudinal data is the presence of cohort and period effects in models of aging in epidemiologic research. We first formulate a data-generating model with confounding and derive the distribution of the response variable unconditional on the confounder. We then examine the properties of the regression coefficient for some analytic approaches when the confounder is omitted from the fitted model. The expected value of the regression coefficient differs in across- and within-individual regression. In the multivariate case, within- and between-individual information is combined and weighted according to the assumed covariance structure. We assume compound symmetry in the fitted covariance matrix and derive the variance, bias, and mean squared error of the slope estimate as a function of the fitted within-individual correlation. We find that even in this simplest multivariate case, the trade-off between bias and variance depends on a large number of parameters. It is generally preferable to fit correlations somewhat above the true correlation to minimize the effect of between-individual confounders or cohort effects. Period effects can lead to situations where it is advantageous to fit correlations that are below the true correlation. The results highlight the trade-offs inherent in the choice of method for analysis of longitudinal data, and show that an appropriate choice can be made only after determining whether within- or between-individual confounding is the major concern.

9.
10.
Many aspects of human motor behavior can be understood using optimality principles such as optimal feedback control. However, these proposed optimal control models are risk-neutral; that is, they are indifferent to the variability of the movement cost. Here, we propose the use of a risk-sensitive optimal controller that incorporates movement cost variance either as an added cost (risk-averse controller) or as an added value (risk-seeking controller) to model human motor behavior in the face of uncertainty. We use a sensorimotor task to test the hypothesis that subjects are risk-sensitive. Subjects controlled a virtual ball undergoing Brownian motion towards a target. Subjects were required to minimize an explicit cost, in points, that was a combination of the final positional error of the ball and the integrated control cost. By testing subjects on different levels of Brownian motion noise and relative weighting of the position and control cost, we could distinguish between risk-sensitive and risk-neutral control. We show that subjects change their movement strategy pessimistically in the face of increased uncertainty in accord with the predictions of a risk-averse optimal controller. Our results suggest that risk-sensitivity is a fundamental attribute that needs to be incorporated into optimal feedback control models.
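The variance-as-added-cost (or added-value) idea maps directly onto a mean-variance objective; a minimal sketch in which θ > 0 is risk-averse, θ < 0 risk-seeking, and θ = 0 risk-neutral (the strategies and θ here are hypothetical):

```python
import numpy as np

def risk_sensitive_cost(costs, theta):
    # Mean-variance form: expected cost plus theta times its variance.
    return np.mean(costs) + theta * np.var(costs)

rng = np.random.default_rng(1)
safe = 10 + 0.5 * rng.standard_normal(10_000)   # same mean, low variability
risky = 10 + 3.0 * rng.standard_normal(10_000)  # same mean, high variability
# A risk-averse controller (theta > 0) prefers the low-variance strategy:
print(risk_sensitive_cost(safe, 0.5) < risk_sensitive_cost(risky, 0.5))  # True
```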

11.
Marginalized models (Heagerty, 1999, Biometrics 55, 688-698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition and marginalized latent variable models. The former captures within-subject serial dependence among repeated measurements with transition model terms while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model will be particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches toward parameter estimation and inference, and we study the large sample operating characteristics under two types of dependence model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329-336) are analyzed.

12.
Summary. In medical research, receiver operating characteristic (ROC) curves can be used to evaluate the performance of biomarkers for diagnosing diseases or predicting the risk of developing a disease in the future. The area under the ROC curve (ROC AUC), as a summary measure of ROC curves, is widely utilized, especially when comparing multiple ROC curves. In observational studies, the estimation of the AUC is often complicated by the presence of missing biomarker values, which means that the existing estimators of the AUC are potentially biased. In this article, we develop robust statistical methods for estimating the ROC AUC and the proposed methods use information from auxiliary variables that are potentially predictive of the missingness of the biomarkers or the missing biomarker values. We are particularly interested in auxiliary variables that are predictive of the missing biomarker values. In the case of missing at random (MAR), that is, missingness of biomarker values only depends on the observed data, our estimators have the attractive feature of being consistent if one correctly specifies, conditional on auxiliary variables and disease status, either the model for the probabilities of being missing or the model for the biomarker values. In the case of missing not at random (MNAR), that is, missingness may depend on the unobserved biomarker values, we propose a sensitivity analysis to assess the impact of MNAR on the estimation of the ROC AUC. The asymptotic properties of the proposed estimators are studied and their finite‐sample behaviors are evaluated in simulation studies. The methods are further illustrated using data from a study of maternal depression during pregnancy.
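A bare-bones sketch of the inverse-probability-weighting component under MAR (the proposed estimators are richer, e.g. also modeling the biomarker values to gain double robustness; all names here are hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_auc(marker, disease, observed, aux):
    # Model P(observed | auxiliary variables, disease status), then weight
    # each observed biomarker by the inverse of that probability (MAR).
    Z = np.column_stack([aux, disease])
    p_obs = LogisticRegression().fit(Z, observed).predict_proba(Z)[:, 1]
    w = observed / p_obs
    case = (disease == 1) & (observed == 1)
    ctrl = (disease == 0) & (observed == 1)
    ww = np.outer(w[case], w[ctrl])
    conc = ((marker[case][:, None] > marker[ctrl][None, :])
            + 0.5 * (marker[case][:, None] == marker[ctrl][None, :]))
    return (ww * conc).sum() / ww.sum()  # weighted Mann-Whitney AUC
```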

13.
Chen H  Wang Y 《Biometrics》2011,67(3):861-870
In this article, we propose penalized spline (P-spline)-based methods for functional mixed effects models with varying coefficients. We decompose longitudinal outcomes as a sum of several terms: a population mean function, covariates with time-varying coefficients, functional subject-specific random effects, and residual measurement error processes. Using P-splines, we propose nonparametric estimation of the population mean function, varying coefficient, random subject-specific curves, and the associated covariance function that represents between-subject variation and the variance function of the residual measurement errors which represents within-subject variation. The proposed methods offer flexible estimation of both the population- and subject-level curves. In addition, decomposing the variability of the outcomes into between- and within-subject sources is useful in identifying the dominant variance component and therefore in choosing an optimal covariance model. We use a likelihood-based method to select multiple smoothing parameters. Furthermore, we study the asymptotics of the baseline P-spline estimator with longitudinal data. We conduct simulation studies to investigate performance of the proposed methods. The benefit of the between- and within-subject covariance decomposition is illustrated through an analysis of Berkeley growth data, where we identified clearly distinct patterns of the between- and within-subject covariance functions of children's heights. We also apply the proposed methods to estimate the effect of antihypertensive treatment from the Framingham Heart Study data.
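For flavor, a minimal penalized-spline fit of a single mean curve, using a truncated-power basis with a ridge penalty on the knot coefficients (one standard P-spline construction; the paper's full model additionally has varying coefficients, subject-specific random curves, and likelihood-based smoothing-parameter selection):

```python
import numpy as np

def pspline_fit(x, y, n_knots=20, degree=3, lam=1.0):
    # Truncated-power-basis penalized spline: the polynomial part is left
    # unpenalized, the knot coefficients get a ridge penalty of size lam.
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack(
        [x ** d for d in range(degree + 1)]
        + [np.clip(x - k, 0.0, None) ** degree for k in knots])
    pen = np.zeros(X.shape[1])
    pen[degree + 1:] = 1.0                      # penalize knot terms only
    beta = np.linalg.solve(X.T @ X + lam * np.diag(pen), X.T @ y)
    return X @ beta                             # fitted mean curve
```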

14.

Purpose

The main goal of any life cycle assessment (LCA) study is to identify solutions leading to environmental savings. In conventional LCA studies, practitioners select, from a set of alternatives, the one that best matches their preferences. This task is sometimes simplified by ranking these alternatives using an aggregated indicator defined by attaching weights to impacts. We address here the inverse problem: given an alternative, we aim to determine the weights for which that solution becomes optimal.

Methods

We propose a method based on linear programming (LP) that determines, for a given alternative, the ranges within which the weights attached to a set of impact metrics must lie so that, when a weighted combination of these impacts is optimized, the alternative can be optimal; if the weights fall outside these ranges, the solution is guaranteed to be suboptimal. A large weight value implies that the corresponding LCA impact is given more importance, while a low value implies the converse. Furthermore, we provide a rigorous mathematical analysis of the implications of using weighting schemes in LCA, showing that this practice guides decision-making towards the adoption of some specific alternatives (those lying on the convex envelope of the resulting trade-off curve).
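A sketch of such an LP with scipy, assuming a matrix of impact scores per alternative and weights constrained to be non-negative and summing to one (a normalization chosen here for illustration); infeasibility signals an alternative off the convex envelope:

```python
import numpy as np
from scipy.optimize import linprog

def weight_range(impacts, target, metric):
    # impacts: (n_alternatives, n_metrics) matrix; find the min/max weight
    # on `metric` (weights >= 0, summing to 1) under which `target`
    # minimises the weighted impact over all alternatives.
    n_alt, n_met = impacts.shape
    others = np.array([i for i in range(n_alt) if i != target])
    A_ub = impacts[target] - impacts[others]       # w @ (x_t - x_b) <= 0
    b_ub = np.zeros(len(others))
    A_eq, b_eq = np.ones((1, n_met)), [1.0]        # weights sum to one
    c = np.eye(n_met)[metric]
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    if not (lo.success and hi.success):
        return None  # infeasible: alternative is off the convex envelope
    return lo.x[metric], hi.x[metric]
```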

Results and discussion

A case study based on the design of hydrogen infrastructures is taken as a test bed to illustrate the capabilities of the approach presented. Given are a set of production and storage technologies available to produce and deliver hydrogen, a final demand, and cost and environmental data. A set of designs, each achieving a unique combination of cost and LCA impact, is considered. For each of them, we calculate the minimum and maximum weight to be given to every LCA impact so that the alternative can be optimal among all the candidate designs. Numerical results show that solutions with lower impact are selected when decision makers are willing to pay larger monetary penalties for the environmental damage caused.

Conclusions

LP can be used in LCA to translate the decision makers’ preferences into weights. This information is rather valuable, particularly when these weights represent economic penalties, as it allows screening and ranking alternatives on a common economic basis. Our approach is aimed at facilitating decision making in LCA studies and defines a general framework for comparing alternatives that show different performance across a wide variety of impact metrics.

15.
Several extensions to implied weighting, recently implemented in TNT, allow a better treatment of data sets combining morphological and molecular data, as well as those comprising large numbers of missing entries (e.g. palaeontological matrices, or combined matrices with some genes sequenced for few taxa). As there have been recent suggestions that molecular matrices may be better analysed using equal weights (rather than implied weighting), a simple way is proposed to apply implied weighting to only some characters (e.g. morphology), leaving the other characters with a constant weight (e.g. molecules). The new methods also allow weighting entire partitions according to their average homoplasy, giving each of the characters in the partition the same weight (this can be used to dynamically weight, e.g., entire genes, or first, second, and third positions collectively). Such an approach is easily implemented in schemes like successive weighting, but in the case of implied weighting it poses some particular problems. The approach has the peculiar implication that the inclusion of uninformative characters influences the results (by influencing the implied weights for the partitions). Last, the concern that characters with many missing entries may receive artificially inflated weights (because they necessarily display less homoplasy) can be addressed by allowing different weighting functions for different characters, such that the cost of additional transformations decreases more rapidly for characters with more missing entries (thus effectively assuming that the unobserved entries are likely to also display some unobserved homoplasy). The conceptual and practical aspects of all these problems, as well as details of the implementation in TNT, are discussed.
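For reference, the concavity function at the heart of implied weighting, together with a purely hypothetical per-character adjustment in the spirit of the missing-entries extension described above (the actual TNT weighting functions may differ):

```python
def implied_fit(extra_steps, k=3.0):
    # Goloboff-style implied-weighting fit: k / (k + h), where h is the
    # homoplasy (extra steps) a character shows on a given tree.
    return k / (k + extra_steps)

def fit_with_missing(extra_steps, prop_missing, k=3.0):
    # Hypothetical adjustment: shrink the concavity constant for
    # characters with many missing entries, so the cost of additional
    # transformations decreases more rapidly for them.
    return implied_fit(extra_steps, k * (1.0 - prop_missing))
```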

16.
The classification accuracy of new diagnostic tests is commonly evaluated with receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) is one of the well-accepted summary measures for describing the accuracy of diagnostic tests. The AUC summary measure can vary with patient and testing characteristics; thus, the performance of the test may differ in certain subpopulations of patients and readers. For this purpose, we propose a direct semi-parametric regression model for the non-parametric AUC measure for ordinal data while accounting for discrete and continuous covariates. The proposed method can be used to estimate the AUC value under degenerate data where certain rating categories are not observed. We discuss the non-standard asymptotic theory, since the estimating functions are based on cross-correlated random variables. Simulation studies based on different classification models showed that the proposed model worked reasonably well, with small percent bias and percent mean-squared error. The proposed method was applied to a prostate cancer study to estimate the AUC for four readers, and to a carotid vessel study, with age, gender, history of previous stroke, and total number of risk factors as covariates, to estimate the accuracy of the diagnostic test in the presence of subject-level covariates.
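The non-parametric AUC in question is the Mann-Whitney probability P(X_case > X_control) + 0.5·P(X_case = X_control); a minimal sketch for ordinal ratings with ties:

```python
import numpy as np

def nonparametric_auc(ratings, disease):
    # Mann-Whitney form of the AUC for ordinal ratings, counting ties
    # between case and control ratings as half-concordant.
    x, y = ratings[disease == 1], ratings[disease == 0]
    gt = (x[:, None] > y[None, :]).mean()
    eq = (x[:, None] == y[None, :]).mean()
    return gt + 0.5 * eq
```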

17.
In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson’s sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward implementation based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower owing to the increased cost of generating the weight matrices via multinomial sampling.
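A sketch of the multinomial-weight formulation for Pearson's correlation, in which the entire vector of B replicates comes from a few matrix products (NumPy here rather than R, so only the idea carries over):

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, B=1000, seed=None):
    # Each bootstrap replicate is a multinomial count vector summing to n,
    # used as (row-normalised) observation weights instead of resampling.
    rng = np.random.default_rng(seed)
    n = len(x)
    W = rng.multinomial(n, np.full(n, 1.0 / n), size=B) / n  # (B, n) weights
    mx, my = W @ x, W @ y               # weighted means, shape (B,)
    sxy = W @ (x * y) - mx * my         # weighted covariance
    sx = W @ (x * x) - mx ** 2          # weighted variances
    sy = W @ (y * y) - my ** 2
    return sxy / np.sqrt(sx * sy)       # B bootstrap replicates of Pearson's r

# Usage: r_boot = vectorized_bootstrap_corr(x, y, B=999)
```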

18.
In the analysis of longitudinal data, before assuming a parametric model, one should have an idea of the shape of the variance and correlation functions for both the genetic and environmental parts. When a small number of observations is available for each subject at a fixed set of times, it is possible to estimate unstructured covariance matrices, but not when the number of observations over time is large and individuals are not measured at all times. The non-parametric, variogram-based approach presented by Diggle & Verbyla (1998) is especially suited to exploratory analysis of such data. This paper presents a generalization of their approach to genetic analyses. The methodology is applied to daily records of milk production in dairy cattle and to data on age-specific fertility in Drosophila.
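A minimal empirical within-subject variogram, the exploratory ingredient of such an analysis (a sketch that ignores the genetic/environmental decomposition developed in the paper):

```python
import numpy as np

def empirical_variogram(times, values, subjects, bin_edges):
    # Half the mean squared difference of within-subject pairs,
    # as a function of the time lag between the two measurements.
    lags, halfsq = [], []
    for s in np.unique(subjects):
        t, y = times[subjects == s], values[subjects == s]
        j, k = np.triu_indices(len(t), k=1)       # all within-subject pairs
        lags.append(np.abs(t[j] - t[k]))
        halfsq.append(0.5 * (y[j] - y[k]) ** 2)
    lags, halfsq = np.concatenate(lags), np.concatenate(halfsq)
    idx = np.digitize(lags, bin_edges)
    return np.array([halfsq[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(1, len(bin_edges))])
```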

19.
To quantify the ability of a marker to predict the future onset of a clinical outcome, time-dependent estimators of sensitivity, specificity, and the ROC curve have been proposed that account for censoring of the outcome. In this paper, we review these estimators, recall their assumptions about the censoring mechanism, and highlight their relationships and properties. A simulation study shows that marker-dependent censoring can lead to substantial biases for ROC estimators not adapted to this case. A slight modification of the inverse probability of censoring weighting estimators proposed by Uno et al. (2007) and Hung and Chiang (2010a) performs as well as the nearest neighbor estimator of Heagerty et al. (2000) in the simulation study and has interesting practical properties. Finally, the estimators were used to evaluate the ability of a marker combining age and a cognitive test to predict dementia in the elderly. Data were obtained from the French PAQUID cohort. The censoring appears clearly marker-dependent, leading to appreciable differences between ROC curves estimated with the different methods.
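For concreteness, a bare-bones IPCW estimate of cumulative/dynamic sensitivity and specificity at a horizon, with the censoring survival function estimated by Kaplan-Meier; this follows the spirit of the weighting, not any of the reviewed estimators verbatim:

```python
import numpy as np

def km_survival(time, indicator):
    # Kaplan-Meier survival curve, returned as a function of time.
    order = np.argsort(time)
    t, d = time[order], indicator[order]
    at_risk = len(t) - np.arange(len(t))
    surv = np.cumprod(1.0 - d / at_risk)
    return lambda q: np.concatenate([[1.0], surv])[np.searchsorted(t, q, side="right")]

def ipcw_se_sp(marker, time, event, cutoff, horizon):
    # IPCW: observed events before the horizon get weight 1 / G(T_i),
    # where G is the censoring survival (censorings treated as "events").
    G = km_survival(time, 1 - event)
    case = (time <= horizon) & (event == 1)
    w_case = case / np.maximum(G(time), 1e-12)
    se = np.sum(w_case * (marker > cutoff)) / np.sum(w_case)
    # Controls still at risk at the horizon all share weight 1 / G(horizon),
    # which cancels, so specificity needs no weighting here.
    ctrl = time > horizon
    sp = np.sum(ctrl * (marker <= cutoff)) / np.sum(ctrl)
    return se, sp
```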

20.