Similar Documents (20 results)
1.
We consider sample size calculations for testing differences in means between two samples, allowing for different variances in the two groups. Typically, the power function depends on the sample size and a set of parameters assumed known, and the sample size needed to obtain a prespecified power is calculated. Here, we account for two sources of variability: we allow the sample size in the power function to be a stochastic variable, and we consider estimating the parameters from preliminary data. An example of the first source of variability is nonadherence (noncompliance). We assume that the proportion of subjects who will adhere to their treatment regimen is not known before the study, but that it is a stochastic variable with a known distribution. Under this assumption, we develop simple closed-form sample size calculations based on asymptotic normality. The second source of variability arises when parameters are estimated from prior data. For example, we account for the variability of a variance estimate obtained from existing data that are assumed to have the same variance as the study being planned. We show that this variability can be accommodated by simply using a slightly larger nominal power in the usual sample size calculation, which we call the calibrated power. We show that the calibrated power depends only on the sample size of the existing data, and we give a table of calibrated power by sample size. Further, we consider the rarer situation in which we account for the variability in estimating the standardized effect size from existing data. This latter situation, as well as several of the previous ones, is motivated by sample size calculations for a Phase II trial of a malaria vaccine candidate.
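The flavor of the closed-form calculation can be sketched as follows. This is an illustrative normal-approximation formula, not the paper's exact derivation: the z-quantiles for two-sided alpha = 0.05 and 80% power are hardcoded, and plugging the mean of the adherence distribution into an intention-to-treat dilution of the effect is a simplification of the paper's treatment of adherence as a stochastic variable.

```python
import math

def n_per_group(delta, sd1, sd2, z_alpha=1.96, z_power=0.8416):
    """Per-group sample size for a two-sample z-test of means allowing
    unequal variances (asymptotic normal approximation)."""
    return math.ceil((z_alpha + z_power) ** 2 * (sd1 ** 2 + sd2 ** 2) / delta ** 2)

def n_with_adherence(delta, sd1, sd2, expected_adherence):
    """Intention-to-treat dilution: nonadherers contribute no treatment
    effect, so the detectable difference shrinks to expected_adherence * delta.
    Using only the mean of the adherence distribution is a simplification."""
    return n_per_group(expected_adherence * delta, sd1, sd2)

n_full = n_per_group(0.5, 1.0, 1.2)                 # all subjects adhere
n_adjusted = n_with_adherence(0.5, 1.0, 1.2, 0.8)   # 80% expected adherence
```

With 80% expected adherence the required per-group size grows from 77 to 120, showing how strongly nonadherence inflates the calculation.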

2.
Internal pilot studies are a popular design feature for addressing uncertainties in sample size calculations caused by vague information on nuisance parameters. Despite their popularity, blinded sample size reestimation procedures for trials with count data were proposed only recently, and only then were their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning phase. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal, whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased, in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown, effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is, in many practically relevant situations, considerably smaller than that of the unblinded procedure. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis.
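A minimal sketch of one blinded reestimation step for count data, under assumed 1:1 allocation and a planning rate ratio: pooled (blinded) data identify only the overall event rate, so the control rate is backed out from the assumed effect. The formulas are textbook Poisson-rate approximations, not the procedure investigated in the paper.

```python
import math

def blinded_control_rate(pooled_counts, person_time, assumed_rr):
    """Blinded estimate of the control-group event rate from pooled counts.
    With 1:1 allocation, pooled rate = lambda_c * (1 + rr) / 2, hence
    lambda_c = 2 * pooled_rate / (1 + rr). Bias arises when assumed_rr is
    wrong, which is the phenomenon the paper quantifies."""
    pooled_rate = sum(pooled_counts) / person_time
    return 2 * pooled_rate / (1 + assumed_rr)

def n_per_group_counts(lambda_c, rr, t=1.0, z_a=1.96, z_b=0.8416):
    """Per-group sample size for comparing two Poisson rates on the log
    scale (standard Wald approximation)."""
    var_log = 1 / (lambda_c * t) + 1 / (rr * lambda_c * t)
    return math.ceil((z_a + z_b) ** 2 * var_log / math.log(rr) ** 2)

# 150 events over 100 person-years of blinded data, planning rate ratio 0.5:
lam_c = blinded_control_rate([70, 80], 100.0, 0.5)
n_new = n_per_group_counts(lam_c, 0.5)
```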

3.
Inbreeding and relationship metrics among and within populations are useful measures for the genetic management of wild populations, but the accuracy and precision of estimates can be influenced by the number of individual genotypes analysed. Biologists are confronted with varied advice regarding the sample size necessary for reliable estimates when using genomic tools. We developed a simulation framework to identify the optimal sample size for three widely used metrics, enabling quantification of the expected variance and relative bias of estimates and a comparison of results among populations. We applied this approach to empirical genomic data for 30 individuals from each of four free-ranging Rocky Mountain bighorn sheep (Ovis canadensis canadensis) populations in Montana and Wyoming, USA, through cross-species application of an Ovine array and analysis of approximately 14,000 single nucleotide polymorphisms (SNPs) after filtering. We examined intra- and interpopulation relationships using kinship and identity-by-state metrics, as well as FST between populations. Based on our simulation results, we concluded that a sample size of 25 was adequate for assessing these metrics when using the Ovine array to genotype Rocky Mountain bighorn sheep herds. However, a universal sample size rule may not sufficiently address the complexities that affect genomic kinship and inbreeding estimates. We therefore recommend a pilot study and sample size simulation, using the R code we developed with empirical genotypes from a subset of the populations of interest, as an effective way to ensure rigour in estimating genomic kinship and population differentiation.
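The subsampling idea behind such a simulation framework can be sketched in a few lines. Everything here is a toy stand-in: synthetic 0/1/2 genotypes replace the ~14,000-SNP Ovine-array data, and mean pairwise identity-by-state replaces the full set of metrics, but the logic — repeatedly subsample n individuals and track how the spread of the estimate shrinks with n — is the same.

```python
import random
import statistics

random.seed(1)

# Synthetic stand-in: 50 individuals x 200 biallelic SNPs coded 0/1/2.
N_IND, N_SNP = 50, 200
freqs = [random.uniform(0.1, 0.9) for _ in range(N_SNP)]
geno = [[sum(random.random() < p for _ in range(2)) for p in freqs]
        for _ in range(N_IND)]

def mean_ibs(sample):
    """Mean pairwise identity-by-state proportion among sampled individuals."""
    tot, pairs = 0.0, 0
    for i in range(len(sample)):
        for j in range(i + 1, len(sample)):
            a, b = geno[sample[i]], geno[sample[j]]
            tot += sum(2 - abs(x - y) for x, y in zip(a, b)) / (2 * N_SNP)
            pairs += 1
    return tot / pairs

def sd_of_estimate(n, reps=20):
    """Spread of the mean-IBS estimate across repeated subsamples of size n."""
    return statistics.stdev(
        mean_ibs(random.sample(range(N_IND), n)) for _ in range(reps))

sd_small, sd_large = sd_of_estimate(5), sd_of_estimate(30)
overall = mean_ibs(list(range(N_IND)))
```

Plotting `sd_of_estimate(n)` against n is how one would pick a sample size beyond which the gain in precision flattens out.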

4.

Background

Adherence to medication is low in specific populations who need chronic medication. However, adherence to medication is also of interest in a more general fashion, independent of specific populations or the side effects of particular drugs. If clinicians and researchers expect patients to show close-to-full adherence, it is relevant to know how likely the achievement of this goal is. Population-based rates can provide an estimate of the effort needed to achieve near-complete adherence in patient populations. The objective of the study was to collect normative data on medication nonadherence in the general population.

Methods and Findings

We assessed 2,512 persons (a representative sample of the German population). Adherence was measured with the Rief Adherence Index. We also assessed current medication intake and side effects. We found that at least 33% of Germans repeatedly fail to follow their doctor's recommendations regarding pharmacological treatments and that only 25% of Germans describe themselves as fully adherent. Nonadherence to medication occurs more often in younger patients with higher socioeconomic status taking short-term medications than in older patients with chronic conditions. Experience with medication side effects was the most prominent predictor of nonadherence.

Conclusions

The major strengths of our study are a representative sample and a novel approach to assessing adherence. Nonadherence seems to be commonplace in the general population. Therefore, adherence cannot be assumed per se but requires special efforts on the part of prescribers and public health initiatives. Nonadherence to medication should not be considered only as a drug-specific behaviour problem, but as a behaviour pattern that is independent of the prescribed medication.

5.
6.
Sex differences in the genetic architecture of behavioral traits can offer critical insight into the processes of sex-specific selection and sexual conflict dynamics. Here, we assess genetic variances and cross-sex genetic correlations of two personality traits, aggression and activity, in a sexually size-dimorphic spider, Nuctenea umbratica. Using a quantitative genetic approach, we show that both traits are heritable. Males have higher heritability estimates for aggressiveness than females, whereas the coefficient of additive genetic variation and evolvability did not differ between the sexes. Furthermore, we found sex differences in the coefficient of residual variance in aggressiveness, with females exhibiting higher estimates. In contrast, the quantitative genetic estimates for activity suggest no significant differentiation between males and females. We interpret these results with caution, as the estimates of additive genetic variances may be inflated by nonadditive genetic effects. The mean cross-sex genetic correlations for aggression and activity were 0.5 and 0.6, respectively. Nonetheless, the credible intervals of both estimates were broad, implying high uncertainty. Future work using larger sample sizes is needed to draw firmer conclusions on how sexual selection shapes sex differences in the genetic architecture of behavioral traits.

7.
Evolutionary diversification of a phenotypic trait reflects the tempo and mode of trait evolution, as well as the phylogenetic topology and branch lengths. Comparisons of trait variance between sister groups provide a powerful approach to test for differences in rates of diversification, controlling for differences in clade age. We used simulation analyses under constant rate Brownian motion to develop phylogenetically based F-tests of the ratio of trait variances between sister groups. Random phylogenies were used for a generalized evolutionary null model, so that detailed internal phylogenies are not required, and both gradual and speciational models of evolution were considered. In general, phylogenetically structured tests were more conservative than corresponding parametric statistics (i.e., larger variance ratios are required to achieve significance). The only exception was for comparisons under a speciational evolutionary model when the group with higher variance has very low sample size (number of species). The methods were applied to a large data set on seed size for 1976 species of California flowering plants. Seven of 37 sister-group comparisons were significant for the phylogenetically structured tests (compared to 12 of 37 for the parametric F-test). Groups with higher diversification of seed size generally had a greater diversity of fruit types, life form, or life history as well. The F-test for trait variances provides a simple, phylogenetically structured approach to test for differences in rates of phenotypic diversification and could also provide a valuable tool in the study of adaptive radiations.
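The simulation logic can be sketched for the simplest case. Under equal-rate Brownian motion on a star phylogeny, tip values are iid normal, so the null distribution of the variance ratio can be built by direct simulation; the paper's use of random phylogenies induces correlations among tips and makes the resulting test more conservative than this iid sketch (or the parametric F-test it approximates).

```python
import random
import statistics

random.seed(2)

def null_variance_ratios(n1, n2, sims=2000):
    """Null distribution of the trait-variance ratio var1/var2 for sister
    clades of n1 and n2 species under equal-rate Brownian motion, with tips
    drawn iid (a star-phylogeny simplification of the paper's setup)."""
    ratios = []
    for _ in range(sims):
        x1 = [random.gauss(0.0, 1.0) for _ in range(n1)]
        x2 = [random.gauss(0.0, 1.0) for _ in range(n2)]
        ratios.append(statistics.variance(x1) / statistics.variance(x2))
    return ratios

def p_value(observed_ratio, n1, n2):
    """One-sided p-value: fraction of null ratios at least as extreme."""
    null = null_variance_ratios(n1, n2)
    return sum(r >= observed_ratio for r in null) / len(null)

p_big = p_value(5.0, 20, 20)    # 5-fold variance difference between clades
p_small = p_value(1.2, 20, 20)  # modest difference
```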

8.
The determination of the sample size required by a crossover trial typically depends on the specification of one or more variance components. Uncertainty about the value of these parameters at the design stage means that there is often a risk that a trial may be under- or overpowered. For many study designs, this problem has been addressed by adaptive design methodology that allows the required sample size to be re-estimated during a trial. Here, we propose and compare several such approaches for multitreatment crossover trials. Regulators, in particular, favor reestimation procedures that maintain the blinding of the treatment allocations. We therefore develop blinded estimators for the within- and between-person variances, following simple or block randomization. We demonstrate that, provided an equal number of patients are allocated to sequences that are balanced for period, the proposed estimators following block randomization are unbiased. We further provide a formula for the bias of the estimators following simple randomization. The performance of these procedures, along with that of an unblinded approach, is then examined using three motivating examples, including one based on a recently completed four-treatment four-period crossover trial. Simulation results show that the performance of the proposed blinded procedures is in many cases similar to that of the unblinded approach, making them an attractive alternative.

9.
The problem of poor patient adherence has been extensively researched, but rates of nonadherence have not changed much in the past three decades. Healthcare providers play a unique and important role in assisting patients' healthy behavior changes. We conducted a narrative review of the current literature to help providers become more familiar with proven interventions that can enhance patient adherence. We then grouped the interventions into categories that can be remembered by the mnemonic "SIMPLE": 1. Simplifying regimen characteristics; 2. Imparting knowledge; 3. Modifying patient beliefs; 4. Patient and family communication; 5. Leaving the bias; and 6. Evaluating adherence. Chronic lifestyle behavior change often requires a combination of all the aforementioned strategies. We suggest a conceptual framework that calls for a multidisciplinary approach applying the above strategies in the context of a healthcare team and system-related factors. We hope that this framework will not only help design scientifically proven interventions, but also reduce the time and cost involved in implementing these strategies in a healthcare setting.

10.
Nonadherence with prescribed drug regimens is a pervasive medical problem. Multiple variables affecting physicians and patients contribute to nonadherence, which negatively affects treatment outcomes. In patients with hypertension, medication nonadherence is a significant, often unrecognized, risk factor that contributes to poor blood pressure control, thereby contributing to the development of further vascular disorders such as heart failure, coronary heart disease, renal insufficiency, and stroke. Analysis of various patient populations shows that choice of drug, use of concomitant medications, tolerability of drug, and duration of drug treatment influence the prevalence of nonadherence. Intervention is required among patients and healthcare prescribers to increase awareness of the need for improved medication adherence. Within this process, it is important to identify indicators of nonadherence within patient populations. This review examines the prevalence of nonadherence as a risk factor in the management of chronic diseases, with a specific focus on antihypertensive medications. Factors leading to increased incidence of nonadherence and the strategies needed to improve adherence are discussed. Medication nonadherence, defined as a patient's passive failure to follow a prescribed drug regimen, remains a significant concern for healthcare professionals and patients. 
On average, one third to one half of patients do not comply with prescribed treatment regimens.[1-3] Nonadherence rates are relatively high across disease states, treatment regimens, and age groups, with the first several months of therapy characterized by the highest rate of discontinuation.[3] In fact, it has recently been reported that low adherence to beta-blockers or statins in patients who have survived a myocardial infarction results in an increased risk of death.[4] In addition to inadequate disease control, medication nonadherence results in a significant burden to healthcare utilization - the estimated yearly cost is $396 to $792 million.[1] Additionally, between one third and two thirds of all medication-related hospital admissions are attributed to nonadherence.[5,6] Cardiovascular disease, which accounts for approximately 1 million deaths in the United States each year, remains a significant health concern.[7] Risk factors for the development of cardiovascular disease are associated with defined risk-taking behaviors (eg, smoking), inherited traits (eg, family history), or laboratory abnormalities (eg, abnormal lipid panels).[7] A significant but often unrecognized cardiovascular risk factor universal to all patient populations is medication nonadherence; if a patient does not regularly take the medication prescribed to attenuate cardiovascular disease, no potential therapeutic gain can be achieved.
Barriers to medication adherence are multifactorial and include complex medication regimens, convenience factors (eg, dosing frequency), behavioral factors, and treatment of asymptomatic conditions.[2] This review highlights the significance of nonadherence in the treatment of hypertension, a silent but life-threatening disorder that affects approximately 72 million adults in the United States.[7] Hypertension often develops in a cluster with insulin resistance, obesity, and hypercholesterolemia, which compounds the risk imposed by nonadherence with antihypertensive medications. Numerous strategies to improve medication adherence are available, from enhancing patient education to providing medication adherence information to the healthcare team; these strategies will be discussed in this article.

11.
Summary

In estimating the ROC curve when the true disease status is subject to nonignorable missingness, the observed likelihood involves the missingness mechanism given by a selection model. In this article, we propose a likelihood-based approach to estimating the ROC curve and the area under it when the verification bias is nonignorable. We specify a parametric disease model in order to make the nonignorable selection model identifiable. With the estimated verification and disease probabilities, we construct four types of empirical estimates of the ROC curve and its area, based on imputation and reweighting methods. In practice, a reasonably large sample size is required to estimate the nonignorable selection model in our settings. Simulation studies showed that all four estimators of the ROC area performed well, and that the imputation estimators were generally more efficient than the other estimators proposed. We applied the proposed method to a data set from research in Alzheimer's disease.

12.
Previously, we showed that in randomised experiments, correcting for measurement error in a baseline variable induces bias in the estimated treatment effect, and conversely that ignoring measurement error avoids bias. In observational studies, non-zero baseline covariate differences between treatment groups may be anticipated. Using a graphical approach, we argue intuitively that if baseline differences are large, failing to correct for measurement error leads to a biased estimate of the treatment effect. In contrast, correction eliminates bias if the true and observed baseline differences are equal. If this equality is not satisfied, the corrected estimator is also biased, but typically less so than the uncorrected estimator. These contrasting findings imply that there is a threshold for the true baseline difference above which correction is worthwhile. We derive expressions for the bias of the corrected and uncorrected estimators as functions of the correlation of the baseline variable with the study outcome, its reliability, the true baseline difference, and the sample sizes. Comparing these expressions yields a theoretical decision threshold for whether to correct for measurement error. The results show that correction is usually preferred in large studies, and also in small studies with moderate baseline differences. If the group sample sizes are very disparate, correction is less advantageous. If the equivalent balanced sample size is less than about 25 per group, one should correct for measurement error if the true baseline difference is expected to exceed 0.2-0.3 standard deviation units. These results are illustrated with data from a cohort study of atherosclerosis.
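The trade-off can be illustrated with first-order bias expressions. These are textbook measurement-error approximations, not the paper's exact finite-sample formulas: adjusting for a covariate measured with a given reliability removes only that fraction of the baseline imbalance, while the corrected estimator's residual bias is driven by the gap between true and observed baseline differences.

```python
def bias_uncorrected(beta, reliability, true_diff):
    """Approximate bias of the treatment-effect estimate when measurement
    error in the baseline covariate is ignored: the attenuated adjustment
    removes only a fraction `reliability` of the baseline imbalance,
    leaving roughly beta * (1 - reliability) * true_diff.
    Illustrative first-order expression only."""
    return beta * (1 - reliability) * true_diff

def bias_corrected(beta, true_diff, observed_diff):
    """The corrected estimator is unbiased when the observed baseline
    difference equals the true one; otherwise the residual bias is
    proportional to the gap between them (again a first-order sketch)."""
    return beta * (true_diff - observed_diff)

# Baseline-outcome slope 0.5, reliability 0.7, true difference 0.3 SD:
b_unc = bias_uncorrected(0.5, 0.7, 0.3)
b_cor = bias_corrected(0.5, 0.3, 0.3)  # zero when differences coincide
```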

13.
The internal pilot study design enables the nuisance parameters required for sample size calculation to be estimated from data accumulated in an ongoing trial. In this way, misspecifications made when determining the sample size in the planning phase can be corrected using updated knowledge. According to regulatory guidelines, the blindness of all personnel involved in the trial has to be preserved and the specified type I error rate has to be controlled when the internal pilot study design is applied. Especially in the late phase of drug development, most clinical studies are run in more than one centre. In these multicentre trials, one may have to deal with an unequal distribution of patient numbers among the centres. Depending on the type of analysis (weighted or unweighted), unequal centre sample sizes may lead to a substantial loss of power. Like the variance, the magnitude of this imbalance is difficult to predict in the planning phase. We propose a blinded sample size recalculation procedure for the internal pilot study design in multicentre trials with a normally distributed outcome and two balanced treatment groups, analysed with either the weighted or the unweighted approach. The method addresses both uncertainty with respect to the variance of the endpoint and the extent of disparity of the centre sample sizes. The actual type I error rate as well as the expected power and sample size of the procedure are investigated in simulation studies. For both the weighted and the unweighted analysis, the maximal type I error rate was not exceeded, or only minimally so. Furthermore, application of the proposed procedure led to an expected power that achieves the specified value in many cases and is throughout very close to it.

14.
In the precision medicine era, (prespecified) subgroup analyses are an integral part of clinical trials. By incorporating multiple populations and hypotheses in the design and analysis plan, adaptive designs promise flexibility and efficiency in such trials. Adaptations include (unblinded) interim analyses (IAs) or blinded sample size reviews. An IA offers the possibility to select promising subgroups and reallocate sample size in further stages. Trials with these features are known as adaptive enrichment designs. Such complex designs involve many nuisance parameters, such as the prevalences of the subgroups and the variances of the outcomes within them. Additionally, a number of design options, including the timepoint of the sample size review and the timepoint of the IA, have to be selected. Here, for normally distributed endpoints, we propose a strategy combining blinded sample size recalculation and adaptive enrichment at an IA; that is, at an early timepoint the nuisance parameters are reestimated and the sample size is adjusted, while subgroup selection and enrichment are performed later. We discuss the implications of different scenarios concerning the variances as well as the timepoints of the blinded review and the IA, and investigate the design characteristics in simulations. The proposed method maintains the desired power if planning assumptions were inaccurate, and reduces the sample size and the variability of the final sample size when an enrichment is performed. Having two separate timepoints for the blinded sample size review and the IA improves the timing of the latter and increases the probability of correctly enriching a subgroup.

15.
Wijsman EM, Nur N. Human Heredity. 2001;51(3):145-149.
The measured genotype approach can be used to estimate the variance contributions of specific candidate loci to quantitative traits of interest. We show here that both the naive estimate of measured-locus heritability, obtained by invoking infinite-sample theory, and an estimate obtained from a bias-corrected variance estimate based on finite-sample theory, produce biased estimates of heritability. We identify the sources of bias, and quantify their effects. The two sources of bias are: (1) the estimation of heritability from population samples as the ratio of two variances, and (2) the existence of sampling error. We show that neither heritability estimator is less biased (in absolute value) than the other in all situations, and the choice of an ideal estimator is therefore a function of the sample size and magnitude of the locus-specific contribution to the overall phenotypic variance. In most cases the bias is small, so that the practical implications of using either estimator are expected to be minimal.
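The first source of bias, estimating heritability as a ratio of two sample variances, is easy to demonstrate by simulation. This toy model draws the locus contribution as a normal variable rather than from genotypes, so it illustrates the ratio-estimator bias in general, not the paper's specific measured-genotype estimators.

```python
import random
import statistics

random.seed(3)

def naive_h2(n, h2=0.2, sims=2000):
    """Mean of the naive heritability estimator: the ratio of the sample
    locus variance to the sample phenotypic variance. Even though each
    sample variance is unbiased, their ratio is not."""
    est = []
    for _ in range(sims):
        g = [random.gauss(0.0, h2 ** 0.5) for _ in range(n)]
        y = [gi + random.gauss(0.0, (1 - h2) ** 0.5) for gi in g]
        est.append(statistics.variance(g) / statistics.variance(y))
    return statistics.mean(est)

est_small = naive_h2(10)   # biased upward in this toy setting
est_large = naive_h2(200)  # bias shrinks as sample size grows
```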

16.
Li Z, Murphy SA. Biometrika. 2011;98(3):503-518.
Two-stage randomized trials are growing in importance in developing adaptive treatment strategies, i.e. treatment policies or dynamic treatment regimes. Usually, the first stage involves randomization to one of several initial treatments. The second stage of treatment begins when an early nonresponse or response criterion is met. In the second stage, nonresponding subjects are re-randomized among the second-stage treatments. Sample size calculations for planning these two-stage randomized trials with failure time outcomes are challenging because the variances of common test statistics depend in a complex manner on the joint distribution of the time to the early nonresponse or response criterion and the primary failure time outcome. We produce simple, albeit conservative, sample size formulae by using upper bounds on the variances. The resulting formulae require only the working assumptions needed to size a standard single-stage randomized trial and, in common settings, are only mildly conservative. These sample size formulae are based on either a weighted Kaplan-Meier estimator of survival probabilities at a fixed time point or a weighted version of the log-rank test.

17.

Summary

Omission of relevant covariates can lead to bias when estimating treatment or exposure effects from survival data in both randomized controlled trials and observational studies. This paper presents a general approach to assessing bias when covariates are omitted from the Cox model. The proposed method is applicable to both randomized and non-randomized studies. We distinguish between the effects of three possible sources of bias: omission of a balanced covariate, data censoring and unmeasured confounding. Asymptotic formulae for determining the bias are derived from the large sample properties of the maximum likelihood estimator. A simulation study is used to demonstrate the validity of the bias formulae and to characterize the influence of the different sources of bias. It is shown that the bias converges to fixed limits as the effect of the omitted covariate increases, irrespective of the degree of confounding. The bias formulae are used as the basis for developing a new method of sensitivity analysis to assess the impact of omitted covariates on estimates of treatment or exposure effects. In simulation studies, the proposed method gave unbiased treatment estimates and confidence intervals with good coverage when the true sensitivity parameters were known. We describe application of the method to a randomized controlled trial and a non-randomized study.

18.
Differences or similarities in the variance of fitness traits are crucial in several biological disciplines, e.g. ecological, toxicological, developmental and evolutionary studies. For example, the variance of a trait can be utilized as a biomarker of differences in environmental conditions. In the absence of environmental variability, differences in the variance of a trait can be interpreted as differences in genetic background. Several tests and transformations are utilized when testing differences between variances. There is, however, a biological tendency for the variance to scale proportionally to the square of the mean (scaling effect), which can considerably bias the results of such tests. We propose a novel method that allows a more precise correction of the scaling effect and proper comparisons among treatment groups and between investigations. This is relevant for all data sets whose distributions have different means, and it suggests that earlier comparisons among treatment groups should be reanalysed. The correction provides a more reliable method when using bioindicators.
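The scaling effect is concrete in a small example. When the variance scales as the square of the mean, dividing each group's variance by its squared mean (the squared coefficient of variation) removes the scaling; this is the classic baseline correction, shown here for illustration, whereas the paper proposes a more precise one.

```python
import statistics

def cv2(xs):
    """Squared coefficient of variation: variance scaled by the squared
    mean, which cancels variance-proportional-to-mean^2 scaling."""
    m = statistics.mean(xs)
    return statistics.variance(xs) / m ** 2

# Two groups with identical relative spread but 10-fold different means:
base = [0.90, 0.95, 1.00, 1.05, 1.10]
g1 = [10 * v for v in base]
g2 = [100 * v for v in base]

raw_ratio = statistics.variance(g2) / statistics.variance(g1)  # inflated 100x
cv_ratio = cv2(g2) / cv2(g1)                                   # scaling removed
```

A raw variance test would declare the groups wildly different purely because of their means; after the correction the ratio is 1.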

19.
Daye ZJ, Chen J, Li H. Biometrics. 2012;68(1):316-326.
We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has so far been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and from outliers. Further, we demonstrate the presence of heteroscedasticity in, and apply our method to, an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs associated with gene expression variations and leads to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis.

20.
Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
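The winner's curse itself is easy to reproduce by simulation. This toy normal model for a single scan hit is not the paper's penetrance model or its correction (which is likelihood-based); it only demonstrates why conditioning on significance inflates the effect estimate.

```python
import random
import statistics

random.seed(4)

def winners_curse(true_effect=0.1, se=0.05, z_crit=1.96, sims=20000):
    """Compare the mean effect estimate over all simulated studies with
    the mean among 'significant' ones (|estimate| / se > z_crit). The
    significant subset is a truncated sample, hence biased away from zero."""
    all_est = [random.gauss(true_effect, se) for _ in range(sims)]
    sig = [e for e in all_est if abs(e) / se > z_crit]
    return statistics.mean(all_est), statistics.mean(sig)

mean_all, mean_sig = winners_curse()
```

Sizing a replication study from `mean_sig` rather than the true effect is exactly the mistake that makes underpowered replications fail.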
