Similar articles
1.
Random effects models are widely used in population pharmacokinetics and dose-finding studies. However, when more than one observation is taken per patient, the presence of correlated observations (due to shared random effects and possibly residual serial correlation) usually makes the explicit determination of optimal designs difficult. In this article, we introduce a class of multiplicative algorithms that can handle correlated data and thus allow the numerical calculation of optimal experimental designs in such situations. In particular, we demonstrate its application in a concrete example of a crossover dose-finding trial, as well as in a typical population pharmacokinetics example. Additionally, we derive a lower bound for the efficiency of any given design in this context, which allows us on the one hand to monitor the progress of the algorithm, and on the other hand to investigate the efficiency of a given design without knowing the optimal one. Finally, we extend the methodology so that it can determine optimal designs under requirements on the minimal number of treatments for several (in some cases all) experimental conditions.
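The multiplicative update at the heart of such algorithms can be illustrated in the classical uncorrelated setting (the paper's contribution is the extension to correlated data, which this sketch does not cover). The function name, the candidate grid, and the quadratic-regression example below are our illustrative choices; the efficiency lower bound m / max_i d(x_i) is the standard monitoring quantity the abstract alludes to.

```python
import numpy as np

def d_optimal_weights(F, n_iter=5000, tol=1e-10):
    """Multiplicative algorithm for D-optimal approximate designs.

    F : (k, m) array of regression vectors f(x_i) for k candidate points.
    Returns the design weights and a lower bound on their D-efficiency.
    """
    k, m = F.shape
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)                            # information matrix
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)  # variance function
        w_new = w * d / m                                     # multiplicative update
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    eff_bound = m / np.max(d)    # lower bound on D-efficiency of current w
    return w, eff_bound

# quadratic regression on [-1, 1]: the D-optimum puts weight 1/3 on {-1, 0, 1}
x = np.linspace(-1, 1, 21)
F = np.column_stack([np.ones_like(x), x, x**2])
w, eff = d_optimal_weights(F)
```

The update w_i → w_i d(x_i)/m keeps the weights normalized (since the d-values average to m under the current design) and increases mass where the variance function is largest.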

2.
3.
Given recent advances in the field of molecular genetics, many have recognized the need to exploit either study designs or analytical methods to test hypotheses involving gene-by-environment (G x E) interactions. The partial-collection designs, including the case-only, partial case-control, and case-parent trio designs, have been suggested as attractive alternatives to the complete case-control design, both for increased statistical efficiency and for reduced data needs. However, common problems in genetic epidemiology studies, such as the presence of G x E correlation in the population, population mixture, and genotyping error, may reduce the validity of these designs. On the basis of previous simulation studies and empirical data, and given the potential limitations and uncertain assumptions of partial-collection designs, the complete case-control design remains the optimal choice over partial-collection designs.

4.
Design of two-phase prevalence surveys of rare disorders
P. E. Shrout and S. C. Newman, Biometrics 1989, 45(2), 549-555
Two-phase medical surveys, in which a large sample is assessed with an inexpensive screening instrument and a subsample is selected for a more thorough diagnostic evaluation, appear to have great merit in the epidemiologic study of certain rare disorders. We present the optimal design of two-phase surveys when resources are fixed and when 100% of those screened positive in the first phase must be included in the second-phase evaluation. We go on to examine the relative efficiency of this two-phase design compared to a single-phase design in which all resources are used in a survey that employs the diagnostic evaluation. Given information on the accuracy of the screen and the prevalence of the disorder, the utility of the two-phase design depends on the relative cost of the screening to the diagnostic assessments.
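The cost-variance trade-off described here can be sketched numerically. The snippet below is an illustrative approximation, not the paper's exact derivation: all screen-positives plus a fraction f of screen-negatives receive the diagnostic exam, a fixed budget determines the phase-1 sample size, and a grid search over f locates the cheapest precise design. All parameter values are made up for illustration.

```python
import numpy as np

def design_variance(f, p, sens, spec, c_screen, c_diag, budget):
    """Approximate variance of the two-phase prevalence estimator.

    f        : fraction of screen-negatives given the diagnostic exam
    p        : true prevalence; sens/spec: screen accuracy
    c_screen : cost per screen; c_diag: cost per diagnostic evaluation
    """
    p_pos = p * sens + (1 - p) * (1 - spec)   # P(screen positive)
    p_neg = 1 - p_pos
    b = p * (1 - sens) / p_neg                # P(case | screen negative)
    cost_per_subject = c_screen + c_diag * (p_pos + f * p_neg)
    n1 = budget / cost_per_subject            # phase-1 sample size the budget buys
    # standard double-sampling variance approximation
    return (p * (1 - p) + (1 / f - 1) * p_neg * b * (1 - b)) / n1

# rare disorder, cheap imperfect screen, expensive diagnostic evaluation
grid = np.linspace(0.01, 1.0, 100)
v = [design_variance(f, p=0.02, sens=0.9, spec=0.85,
                     c_screen=1.0, c_diag=25.0, budget=10_000) for f in grid]
f_best = grid[int(np.argmin(v))]
```

With these (hypothetical) costs the optimum subsamples only a minority of screen-negatives, beating the design that diagnoses every screened subject.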

5.
This paper discusses two measures of unbalancedness in a one-way model and shows that, for a given statistical procedure, they may serve as measures of the efficiency of a design. They also allow one to compare, for example, estimation methods for variance components in designs with a fixed level of unbalancedness.
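As an illustration of such a measure (our choice of example, not necessarily one of the two measures the paper discusses), the ratio of the harmonic to the arithmetic mean of the group sizes equals 1 exactly for a balanced one-way layout and decreases as the sizes grow more unequal:

```python
def unbalancedness(sizes):
    """Harmonic-to-arithmetic mean ratio of group sizes: 1.0 iff balanced."""
    a = len(sizes)
    harmonic = a / sum(1 / n for n in sizes)
    arithmetic = sum(sizes) / a
    return harmonic / arithmetic
```

Designs with the same value of this ratio can then be compared on an equal footing, e.g. when benchmarking variance-component estimators at a fixed level of unbalancedness.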

6.
The mechanisms behind the superiority of optimal biphasic defibrillation shocks over monophasic are not fully understood. This simulation study examines how the shock polarity and second-phase magnitude of biphasic shocks influence the virtual electrode polarization (VEP) pattern, and thus the outcome of the shock in a bidomain model representation of ventricular myocardium. A single spiral wave is initiated in a two-dimensional sheet of myocardium that measures 2 × 2 cm². The model incorporates non-uniform fiber curvature, membrane kinetics suitable for high strength shocks, and electroporation. Line electrodes deliver a spatially uniform extracellular field. The shocks are biphasic, each phase lasting 10 ms. Two different polarities of biphasic shocks are examined as the first-phase configuration is held constant and the second-phase magnitude is varied between 1 and 10 V/cm. The results show that for each polarity, varying the second-phase magnitude reverses the VEP induced by the first phase in an asymmetric fashion. Further, the size of the post-shock excitable gap is dependent upon the second-phase magnitude and is a factor in determining the success or failure of the shock. The maximum size of a post-shock excitable gap that results in defibrillation success depends on the polarity of the shock, indicating that the refractoriness of the tissue surrounding the gap also contributes to the outcome of the shock.

7.
T. R. Fears and C. C. Brown, Biometrics 1986, 42(4), 955-960
There are a number of possible designs for case-control studies. The simplest uses two separate simple random samples, but an actual study may use more complex sampling procedures. Typically, stratification is used to control for the effects of one or more risk factors in which we are interested. It has been shown (Anderson, 1972, Biometrika 59, 19-35; Prentice and Pyke, 1979, Biometrika 66, 403-411) that the unconditional logistic regression estimators apply under stratified sampling, so long as the logistic model includes a term for each stratum. We consider the case-control problem with stratified samples and assume a logistic model that does not include terms for strata, i.e., for fixed covariates the (prospective) probability of disease does not depend on stratum. We assume knowledge of the proportion sampled in each stratum as well as the total number in the stratum. We use this knowledge to obtain the maximum likelihood estimators for all parameters in the logistic model including those for variables completely associated with strata. The approach may also be applied to obtain estimators under probability sampling.

8.
Exposure to infection information is important for estimating vaccine efficacy, but it is difficult to collect and prone to missingness and mismeasurement. We discuss study designs that collect detailed exposure information from only a small subset of participants while collecting crude exposure information from all participants and treat estimation of vaccine efficacy in the missing data/measurement error framework. We extend the discordant partner design for HIV vaccine trials of Golm, Halloran, and Longini (1998, Statistics in Medicine 17, 2335-2352) to the more complex augmented trial design of Longini, Datta, and Halloran (1996, Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology 13, 440-447) and Datta, Halloran, and Longini (1998, Statistics in Medicine 17, 185-200). The model for this design includes three exposure covariates and both univariate and bivariate outcomes. We adapt recently developed semiparametric missing data methods of Reilly and Pepe (1995, Biometrika 82, 299-314), Carroll and Wand (1991, Journal of the Royal Statistical Society, Series B 53, 573-585), and Pepe and Fleming (1991, Journal of the American Statistical Association 86, 108-113) to the augmented vaccine trial design. We demonstrate with simulated HIV vaccine trial data the improvements in bias and efficiency when combining the different levels of exposure information to estimate vaccine efficacy for reducing both susceptibility and infectiousness. We show that the semiparametric methods estimate both efficacy parameters without bias when the good exposure information is either missing completely at random or missing at random. The pseudolikelihood method of Carroll and Wand (1991) and Pepe and Fleming (1991) was the more efficient of the two semiparametric methods.

9.
Insulin secretion and the rate of glucose utilization (Rd) were tested during a newly developed sequential clamp in 42 highly trained female athletes (A; 18-69 yr old) and 14 sedentary control women (C; 18-50 yr old; body mass index <25 kg/m²). The A women were categorized into four age groups: 18-29, 30-39, 40-49, and 50-69 yr old. The C women were also grouped by age (18-29 and 40-50 yr old). During the three-step clamp (hyperglycemia, return to euglycemia, and hyperinsulinemia), glucose turnover was assessed with [3-³H]glucose. Among the A, the youngest group had the largest first- and second-phase insulin responses, which were significantly different from the oldest A (P < 0.05). Among the two C groups, the first-phase response of both groups and the second-phase response of the older group were higher than in the respective age-matched A (P < 0.05). During the hyperglycemic period, glucose Rd was similar among A groups and between A and C. Despite similar insulin levels between groups during the hyperinsulinemic period (approximately 400 pmol/l), A utilized 36% more glucose than C (P < 0.001). Glucose Rd was not different across the age groups of A. This newly developed sequential clamp procedure allows assessment of both beta-cell sensitivity to glucose and peripheral tissue sensitivity to insulin in a single session. We have shown that physical activity improves beta-cell efficiency across the age span in women and ameliorates the effect of age on the decline of peripheral tissue sensitivity to insulin.

10.
Optimal response-adaptive designs in the phase III clinical trial setting are gaining interest, yet most available designs are not based on any optimality consideration. An optimal design for binary responses is given by Rosenberger et al. (2001) and one for continuous responses by Biswas and Mandal (2004). Recently, Zhang and Rosenberger (2006) proposed another design for normal responses. This paper illustrates that the Zhang and Rosenberger (2006) design is not suitable for normally distributed responses in general, and that the approach cannot be extended to other continuous response cases, such as exponential or gamma. We first describe when the optimal design of Zhang and Rosenberger (2006) fails, then suggest appropriate adjustments for designs under different continuous distributions. A unified framework for finding optimal response-adaptive designs for two competing treatments is proposed. The proposed methods are illustrated using real data.
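For binary responses, the Rosenberger et al. (2001) optimal allocation targets the ratio of the square roots of the two success probabilities, which minimizes expected failures for a fixed variance of the treatment comparison. The sequential plug-in scheme below is our simplified illustration of response-adaptive randomization toward that target (the pseudo-observation initialization and sample values are our choices, not from any of the cited papers).

```python
import random
from math import sqrt

def target_allocation(p_a, p_b):
    """Rosenberger et al. (2001) optimal allocation proportion to treatment A."""
    return sqrt(p_a) / (sqrt(p_a) + sqrt(p_b))

def simulate_trial(p_a, p_b, n, seed=1):
    """Randomize each patient to the target computed from current estimates."""
    rng = random.Random(seed)
    s = {"A": 1, "B": 1}   # successes (one pseudo-success to avoid 0 estimates)
    t = {"A": 2, "B": 2}   # trials    (two pseudo-trials per arm)
    n_a = 0
    for _ in range(n):
        rho = target_allocation(s["A"] / t["A"], s["B"] / t["B"])
        arm = "A" if rng.random() < rho else "B"
        p = p_a if arm == "A" else p_b
        s[arm] += rng.random() < p     # record a Bernoulli response
        t[arm] += 1
        n_a += arm == "A"
    return n_a / n                     # realized allocation to A

rho = target_allocation(0.9, 0.4)      # exactly 0.6: skew toward the better arm
```

Unlike equal randomization, the realized allocation drifts toward the target as the estimates sharpen, treating more patients on the better-performing arm.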

11.
It is well known that optimal designs are strongly model dependent. In this article, we apply the Lagrange multiplier approach to the optimal design problem, using a recently proposed model for carryover effects. Generally, crossover designs are not recommended when carryover effects are present and the primary goal is to obtain an unbiased estimate of the treatment effect. In some cases, baseline measurements are believed to improve design efficiency. This article examines the impact of baselines on optimal designs using two different assumptions about carryover effects during baseline periods and employing a nontraditional crossover design model. As anticipated, baseline observations improve design efficiency considerably for two-period designs, which use the data in the first period only to obtain unbiased estimates of treatment effects, while the improvement is rather modest for three- or four-period designs. Further, we find little additional benefit in measuring baselines at each treatment period as compared to measuring baselines only in the first period. Although our study of baselines did not change the results on optimal designs reported in the literature, the problem of strong model dependency is generally recognized. The advantage of using multiperiod designs is evident, as we found that extending two-period designs to three- or four-period designs significantly reduced the variability in estimating the direct treatment effect contrast.

12.
L. Tang, S. S. Emerson, and X. H. Zhou, Biometrics 2008, 64(4), 1137-1145
Comparison of the accuracy of two diagnostic tests using their receiver operating characteristic (ROC) curves has typically been conducted using fixed sample designs. On the other hand, the human experimentation inherent in a comparison of diagnostic modalities argues for periodic monitoring of the accruing data to address many issues related to the ethics and efficiency of the medical study. To date, very little research has been done on the use of sequential sampling plans for comparative ROC studies, even when these studies may use expensive and unsafe diagnostic procedures. In this article we propose a nonparametric group sequential design plan. The nonparametric sequential method adapts a nonparametric family of weighted area under the ROC curve statistics (Wieand et al., 1989, Biometrika 76, 585-592) and a group sequential sampling plan. We illustrate the implementation of this nonparametric approach for sequentially comparing ROC curves in the context of diagnostic screening for non-small-cell lung cancer. We also describe a semiparametric sequential method based on proportional hazards models. We compare the statistical properties of the nonparametric approach with alternative semiparametric and parametric analyses in simulation studies. The results show the nonparametric approach is robust to model misspecification and has excellent finite-sample performance.
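The area under the empirical ROC curve that underlies such statistics is the Mann-Whitney probability that a diseased subject scores higher than a nondiseased one. A minimal unweighted sketch (the Wieand et al. family additionally weights regions of the curve, which is omitted here):

```python
def empirical_auc(diseased, nondiseased):
    """Mann-Whitney estimate of P(score_D > score_ND), counting ties as 1/2."""
    wins = sum((d > n) + 0.5 * (d == n)
               for d in diseased for n in nondiseased)
    return wins / (len(diseased) * len(nondiseased))
```

Comparing two tests amounts to computing this statistic for each and monitoring their difference at each interim look of the group sequential plan.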

13.
R. McNamee, Biometrics 2004, 60(3), 783-792
Two-phase designs for estimation of prevalence, where the first-phase classification is fallible and the second is accurate but relatively expensive, are not necessarily justified on efficiency grounds. However, they might be advantageous for dual-purpose studies, for example where prevalence estimation is followed by a clinical trial or case-control study, if they can identify cases of disease for the second study in a cost-effective way. Alternatively, they may be justified on ethical grounds if they can identify more, previously undetected but treatable cases of disease, than a simple random sample design. An approach to sampling is proposed, which formally combines the goals of efficient prevalence estimation and case detection by setting different notional study costs for investigating cases and noncases. Two variants of the method are compared with an "ethical" two-phase scheme proposed by Shrout and Newman (1989, Biometrics 45, 549-555), and with the most efficient scheme for prevalence estimation alone, in terms of the standard error of the prevalence estimate, the expected number of cases, and the fraction of cases among second-phase subjects, given a fixed budget. One variant yields the highest fraction and expected number of cases but also the largest standard errors. The other yields a higher fraction than Shrout and Newman's scheme and a similar number of cases but appears to do so more efficiently.

14.
Experimental designs are defined by introducing an assignment matrix Z. Using block designs and double block designs, it is shown that well-known designs can be obtained as special cases by means of Z or an operator defined on Z. To date, we have not found an experimental design that could not be defined by our matrix Z. The definitions of properties of experimental designs can be given independently of the model of the statistical analysis; this is shown for the balance property of block designs.
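The idea can be made concrete: with an observation-by-treatment matrix X and an observation-by-block assignment matrix Z, the incidence matrix is N = XᵀZ, and a block design is balanced when all off-diagonal entries of the concurrence matrix NNᵀ are equal. The construction below is our illustration of that definition, not the paper's own notation.

```python
import numpy as np

def assignment_matrices(blocks, v):
    """Build X (obs x treatments) and Z (obs x blocks) from a list of blocks,
    each block being a tuple of treatment labels 0..v-1."""
    n = sum(len(blk) for blk in blocks)
    X = np.zeros((n, v), dtype=int)
    Z = np.zeros((n, len(blocks)), dtype=int)
    row = 0
    for j, blk in enumerate(blocks):
        for t in blk:
            X[row, t] = 1    # observation `row` receives treatment t
            Z[row, j] = 1    # ... in block j
            row += 1
    return X, Z

def is_balanced(blocks, v):
    X, Z = assignment_matrices(blocks, v)
    N = X.T @ Z              # treatment-by-block incidence matrix
    C = N @ N.T              # concurrence matrix: C[s, t] = joint occurrences
    off = C[~np.eye(v, dtype=bool)]
    return off.min() == off.max()

# BIBD with v=4 treatments in blocks of size 2: every pair occurs exactly once
bibd = [(0, 1), (2, 3), (0, 2), (1, 3), (0, 3), (1, 2)]
```

The balance check uses only X and Z, independently of any model for the subsequent statistical analysis, mirroring the point made in the abstract.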

15.
The two-stage case-control design has been widely used in epidemiology studies for its cost-effectiveness and improvement of the study efficiency (White, 1982, American Journal of Epidemiology 115, 119-128; Breslow and Cain, 1988, Biometrika 75, 11-20). The evolution of modern biomedical studies has called for cost-effective designs with a continuous outcome and exposure variables. In this article, we propose a new two-stage outcome-dependent sampling (ODS) scheme with a continuous outcome variable, where both the first-stage data and the second-stage data are from ODS schemes. We develop a semiparametric empirical likelihood estimation for inference about the regression parameters in the proposed design. Simulation studies were conducted to investigate the small-sample behavior of the proposed estimator. We demonstrate that, for a given statistical power, the proposed design will require a substantially smaller sample size than the alternative designs. The proposed method is illustrated with an environmental health study conducted at the National Institutes of Health.

16.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
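For the simplest case of a single binary endpoint with a uniform prior, the predictive probability of success reduces to a beta-binomial sum. The sketch below is our simplification of the general Dirichlet-multinomial version, with illustrative values for the reference response rate p0 and the posterior threshold; it exploits the exact identity between integer-parameter beta tail probabilities and binomial tails to stay dependency-free.

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_tail(a, b, p0):
    """P(Beta(a, b) > p0) for integer a, b, via the binomial identity
    P(Beta(a, b) > p0) = P(Binomial(a + b - 1, p0) <= a - 1)."""
    n = a + b - 1
    return sum(comb(n, j) * p0**j * (1 - p0)**(n - j) for j in range(a))

def predictive_prob(x, n, n_max, p0=0.3, theta=0.95):
    """Predictive probability that, after continuing to n_max patients, the
    final posterior satisfies P(p > p0 | data) > theta. Beta(1,1) prior."""
    a, b = 1 + x, 1 + n - x          # current posterior
    m = n_max - n                    # patients still to enroll
    pp = 0.0
    for y in range(m + 1):           # y future successes: beta-binomial weight
        w = comb(m, y) * exp(log_beta(a + y, b + m - y) - log_beta(a, b))
        if beta_tail(a + y, b + m - y, p0) > theta:
            pp += w
    return pp
```

Because the go/no-go rule depends on the interim data only through (x, n), the decision boundaries can be enumerated in advance for every interim look, as the abstract notes.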

17.
McNemar's test is popular for assessing the difference between proportions when two observations are taken on each experimental unit. It is useful under a variety of epidemiological study designs that produce correlated binary outcomes. In studies involving outcome ascertainment, cost or feasibility concerns often lead researchers to employ error-prone surrogate diagnostic tests. Assuming an available gold standard diagnostic method, we address point and confidence interval estimation of the true difference in proportions and the paired-data odds ratio by incorporating external or internal validation data. We distinguish two special cases, depending on whether it is reasonable to assume that the diagnostic test properties remain the same for both assessments (e.g., at baseline and at follow-up). Likelihood-based analysis yields closed-form estimates when validation data are external and requires numeric optimization when they are internal. The latter approach offers important advantages in terms of robustness and efficient odds ratio estimation. We consider internal validation study designs geared toward optimizing efficiency given a fixed cost allocated for measurements. Two motivating examples are presented, using gold standard and surrogate bivariate binary diagnoses of bacterial vaginosis (BV) on women participating in the HIV Epidemiology Research Study (HERS).
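For reference, the uncorrected McNemar statistic and the paired-data odds ratio depend only on the two discordant cells of the paired 2x2 table. A minimal sketch for gold-standard data, i.e., without the misclassification corrections that are the subject of the paper:

```python
def mcnemar(b, c):
    """McNemar chi-square statistic and paired odds ratio.

    b : count of pairs classified (yes, no); c : count of pairs (no, yes).
    The concordant cells cancel out of both quantities.
    """
    stat = (b - c) ** 2 / (b + c)   # compare to chi-square with 1 df
    odds_ratio = b / c
    return stat, odds_ratio
```

With an error-prone surrogate test, b and c are themselves misclassified, which is why the validation-data corrections described in the abstract are needed.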

18.
B. C. Sutradhar and K. Das, Biometrics 2000, 56(2), 622-625
Liang and Zeger (1986, Biometrika 73, 13-22) introduced a generalized estimating equation (GEE) approach based on a working correlation matrix to obtain efficient estimators of regression parameters in the class of generalized linear models for repeated measures data. As demonstrated by Crowder (1995, Biometrika 82, 407-410), because of uncertainty of the definition of the working correlation matrix, the Liang-Zeger approach may, in some cases, lead to a complete breakdown of the estimation of the regression parameters. After taking this comment of Crowder into account, recently Sutradhar and Das (1999, Biometrika 86, 459-465) examined the loss of efficiency of the regression estimators due to misspecification of the correlation structures. But their study was confined to the regression estimation with cluster-level covariates, as in the original paper of Liang and Zeger. In this paper, we study this efficiency loss problem for the generalized regression models with within-cluster covariates by utilizing the approach of Sutradhar and Das (1999).

19.
To assess tree growth, for example in diameter, a forester typically measures the trees at regular time points. We call such designs equidistant. In this paper we look at the robustness and efficiency of several experimental designs, using the D‐optimality criterion, in a case study of diameter growth in cork oaks. We compare D‐optimal designs (unrestricted and replication‐free) with equidistant designs. We further compare designs in different experimental regions. Results indicate that the experimental region should be adequate to the problem, and that D‐optimal designs are substantially more efficient than equidistant designs, even under parameter mis‐specification.
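Such comparisons rest on the D-efficiency of one design relative to another, (det M₁ / det M₂)^(1/m) for m model parameters. A sketch for the simplest case, straight-line regression of growth on time (the cork-oak study itself uses a nonlinear growth model, so this is only an illustration of the criterion):

```python
import numpy as np

def info_matrix(times):
    """Per-observation Fisher information for y = b0 + b1 * t."""
    F = np.column_stack([np.ones(len(times)), times])
    return F.T @ F / len(times)

def d_efficiency(design, reference):
    """D-efficiency of `design` relative to `reference` (both lists of times)."""
    m = info_matrix(design).shape[0]
    return (np.linalg.det(info_matrix(design)) /
            np.linalg.det(info_matrix(reference))) ** (1 / m)

equidistant = [0.0, 2.5, 5.0, 7.5, 10.0]      # regular measurement times
optimal = [0.0, 0.0, 10.0, 10.0]              # D-optimal for a line: endpoints
eff = d_efficiency(equidistant, optimal)
```

Here the equidistant schedule achieves only about 71% D-efficiency, i.e., it would need roughly 1/0.71 times as many measurements to match the endpoint design.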

20.
According to conventional wisdom, functional diversity is exclusively a consequence of species having evolved adaptations to fill different niches within a heterogeneous environment. This view anticipates only one optimal combination of trait values in a given environment, but it is also conceivable that alternative designs of equal fitness in the same environment might evolve. To investigate that possibility, we use a genetic algorithm to search for optimal combinations of 34 functional traits in a realistic model of tree seedling growth and survival. We show that separate lineages of seedlings evolving in identical environments result in many alternative functional designs of approximately equal fitness.
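A genetic algorithm search of the kind described can be sketched generically. The bimodal fitness function below is our toy stand-in for the 34-trait seedling model: it has two equally fit optima, so independent lineages run from different seeds may settle on either one, mirroring the paper's point about alternative designs of equal fitness.

```python
import random

def fitness(x):
    """Toy bimodal fitness: two equal peaks of height 0 at x = -1 and x = +1."""
    return -((x * x - 1.0) ** 2)

def evolve(seed, pop_size=60, generations=120, sigma=0.2):
    """Minimal GA: tournament selection plus Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(-3, 3) for _ in range(pop_size)]
    for _ in range(generations):
        parents = [max(rng.sample(pop, 3), key=fitness)   # best-of-3 tournament
                   for _ in range(pop_size)]
        pop = [p + rng.gauss(0, sigma) for p in parents]  # mutate offspring
    return max(pop, key=fitness)

best_a = evolve(seed=1)
best_b = evolve(seed=2)   # a separate lineage; may converge to the other peak
```

Both lineages reach near-maximal fitness, even though the trait values they end up with need not coincide.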
