Similar Articles
20 similar articles found (search time: 15 ms)
1.
Ouwens MJ, Tan FE, Berger MP. Biometrics 2002, 58(4):735-741
In this article, the optimal selection and allocation of time points in repeated measures experiments are considered. D-optimal cohort designs are computed numerically for the first- and second-degree polynomial models with random intercept, random slope, and first-order autoregressive serial correlations. Because the optimal designs are locally optimal, it is proposed to use a maximin criterion. It is shown that, for a large class of symmetric designs, the smallest relative efficiency over the model parameter space is substantial.

2.
Optimal designs when the variance is a function of the mean
Dette H, Wong WK. Biometrics 1999, 55(3):925-929
We develop locally D-optimal designs for nonlinear models when the variance of the response is a function of its mean. Using the two-parameter Michaelis-Menten model as an example, we show that the optimal design depends on both the type of heteroscedasticity and the magnitude of the variation. In addition, our results suggest that the homoscedastic D-optimal design has high efficiency under a broad class of heteroscedastic patterns and that it is fairly insensitive to nominal values of the parameters.
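To make the "locally" in locally D-optimal concrete, here is a small numerical sketch (ours, not the authors'): for the homoscedastic two-parameter Michaelis-Menten mean V·x/(K + x), with illustrative nominal values V = 1, K = 2 and doses restricted to (0, 10], a grid search finds the two-point, equal-weight design maximizing the determinant of the Fisher information.

```python
import numpy as np
from itertools import product

def d_criterion(xs, V, K):
    # det of the Fisher information for the Michaelis-Menten mean V*x/(K+x)
    # under homoscedastic errors, with equal weight on each support point
    M = np.zeros((2, 2))
    for x in xs:
        g = np.array([x / (K + x), -V * x / (K + x) ** 2])  # d(mean)/d(V, K)
        M += np.outer(g, g) / len(xs)
    return np.linalg.det(M)

V, K, xmax = 1.0, 2.0, 10.0                  # nominal (local) parameter values
grid = np.linspace(0.1, xmax, 150)
design = max(product(grid, repeat=2), key=lambda xs: d_criterion(xs, V, K))
lo_pt, hi_pt = sorted(design)
print(lo_pt, hi_pt)
```

The search recovers the familiar pattern for this model: one support point at the upper boundary of the dose range and the other near K·xmax/(xmax + 2K) ≈ 1.43. The design depends on the nominal K, which is exactly why maximin criteria like the one above are attractive.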

3.
The D-optimal design, a minimal sample design that minimizes the volume of the joint confidence region for the parameters, was used to evaluate binding parameters in a saturation curve with a view to reducing the number of experimental points without losing accuracy in binding parameter estimates. Binding saturation experiments were performed in rat brain crude membrane preparations with the opioid mu-selective ligand [3H]-[D-Ala2,MePhe4,Gly-ol5]enkephalin (DAGO), using a sequential procedure. The first experiment consisted of a wide-range saturation curve, which confirmed that [3H]-DAGO binds only one class of specific sites and non-specific sites, and gave information on the experimental range and a first estimate of binding affinity (Ka), capacity (Bmax) and non-specific constant (k). On this basis the D-optimal design was computed, and sequential experiments were performed, each covering a wide-range traditional saturation curve, the D-optimal design, and a splitting of the D-optimal design with the addition of 2 points (+/- 15% of the central point). No appreciable differences were obtained among these designs in parameter estimates and their accuracy. Thus, sequential experiments based on the D-optimal design seem to be a valid method for accurate determination of binding parameters, using far fewer points with no loss in parameter estimation accuracy.

4.
We have developed a computer program, DESIGN, for optimization of ligand binding experiments to minimize the "average" uncertainty in all unknown parameters. An earlier report [G. E. Rovati, D. Rodbard, and P. J. Munson (1988) Anal. Biochem. 174, 636-649] described the application of this program to experiments involving a single homologous or heterologous dose-response curve. We now present several advanced features of the program DESIGN, including simultaneous optimization of two or more binding competition curves and optimization of a "multiligand" experiment. Multiligand designs are those which use combinations of two (or more) ligands in each reaction tube. Such designs are an important and natural extension of the popular method of "blocking experiments," where an additional ligand is used to suppress one or more classes of sites. Extending the idea of a dose-response curve, the most general multiligand design would result in a "dose-response surface." One can now optimize the design not only for a single binding curve, but also for families of curves and for binding surfaces. The examples presented in this report further demonstrate the power and utility of the program DESIGN and the nature of D-optimal designs in the context of more complex binding experiments. We illustrate D-optimal designs involving one radioligand and two unlabeled ligands; we consider one example of homogeneous and several examples of heterogeneous binding sites. Further, to demonstrate the virtues of the dose-response surface experiment, we have compared the optimal surface design to the equivalent design restricted to traditional dose-response curves. The use of DESIGN in conjunction with multiligand experiments can improve the efficiency of estimation of the binding parameters, potentially resulting in reduction of the number of observations needed to obtain a desired degree of precision in representative cases.

5.
A broad approach to the design of Phase I clinical trials for the efficient estimation of the maximum tolerated dose is presented. The method is rooted in formal optimal design theory and involves the construction of constrained Bayesian c- and D-optimal designs. The imposed constraint incorporates the optimal design points and their weights and ensures that the probability that an administered dose exceeds the maximum acceptable dose is low. Results relating to these constrained designs for log doses on the real line are described and the associated equivalence theorem is given. The ideas are extended to more practical situations, specifically to those involving discrete doses. In particular, a Bayesian sequential optimal design scheme comprising a pilot study on a small number of patients followed by the allocation of patients to doses one at a time is developed and its properties explored by simulation.

6.
We have developed a versatile computer program for optimization of ligand binding experiments (e.g., radioreceptor assay systems for hormones, drugs, etc.). This optimization algorithm is based on an overall measure of precision of the parameter estimates (D-optimality). The program DESIGN uses an exact mathematical model of the equilibrium ligand binding system with up to two ligands binding to any number of classes of binding sites. The program produces a minimal list of the optimal ligand concentrations for use in the binding experiment. This potentially reduces the time and cost necessary to perform a binding experiment. The program allows comparison of any proposed experimental design with the D-optimal design or with assay protocols in current use. The level of nonspecific binding is regarded as an unknown parameter of the system, along with the affinity constant (Kd) and binding capacity (Bmax). Selected parameters can be fixed at constant values and thereby excluded from the optimization algorithm. Emphasis may be placed on improving the precision of a single parameter or on improving the precision of all the parameters simultaneously. We present optimal designs for several of the more commonly used assay protocols (saturation binding with a single labeled ligand, competition or displacement curves, one or two classes of binding sites), and evaluate the robustness of these designs to changes in parameter values of the underlying models. We also derive the theoretical D-optimal design for the saturation binding experiment with a homogeneous receptor class.

7.
Methods for the analysis of unmatched case-control data based on a finite population sampling model are developed. Under this model, and the prospective logistic model for disease probabilities, a likelihood for case-control data that accommodates very general sampling of controls is derived. This likelihood has the form of a weighted conditional logistic likelihood. The flexibility of the methods is illustrated by providing a number of control sampling designs and a general scheme for their analyses. These include frequency matching, counter-matching, case-base, randomized recruitment, and quota sampling. A study of risk factors for childhood asthma illustrates an application of the counter-matching design. Some asymptotic efficiency results are presented and computational methods discussed. Further, it is shown that a 'marginal' likelihood provides a link to unconditional logistic methods. The methods are examined in a simulation study that compares frequency and counter-matching using conditional and unconditional logistic analyses; the results indicate that the conditional logistic likelihood has superior efficiency. Extensions that accommodate sampling of cases and multistage designs are presented. Finally, we compare the analysis methods presented here to other approaches, compare counter-matching and two-stage designs, and suggest areas for further research.

8.
Large-scale surveys, such as national forest inventories and vegetation monitoring programs, usually have complex sampling designs that include geographical stratification and units organized in clusters. When models are developed using data from such programs, a key question is whether or not to utilize design information when analyzing the relationship between a response variable and a set of covariates. Standard statistical regression methods often fail to account for complex sampling designs, which may lead to severely biased estimators of model coefficients. Furthermore, ignoring that data are spatially correlated within clusters may underestimate the standard errors of regression coefficient estimates, with a risk of drawing wrong conclusions. We first review general approaches that account for complex sampling designs, e.g. methods using probability weighting, and stress the need to explore the effects of the sampling design when applying logistic regression models. We then use Monte Carlo simulation to compare the performance of the standard logistic regression model with two approaches to modeling correlated binary responses, i.e. cluster-specific and population-averaged logistic regression models. As an example, we analyze the occurrence of epiphytic hair lichens in the genus Bryoria, an indicator of forest ecosystem integrity. Based on data from the National Forest Inventory (NFI) for the period 1993–2014, we generated a data set on hair lichen occurrence on >100,000 Picea abies trees distributed throughout Sweden. The NFI data included ten covariates representing forest structure and climate variables potentially affecting lichen occurrence. Our analyses show the importance of taking complex sampling designs and correlated binary responses into account in logistic regression modeling to avoid the risk of obtaining notably biased parameter estimators and standard errors, and erroneous interpretations about factors affecting e.g. hair lichen occurrence. We recommend comparisons of unweighted and weighted logistic regression analyses as an essential step in the development of models based on data from large-scale surveys.
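The recommended unweighted-versus-weighted comparison can be sketched in a few lines of numpy (our illustration, not the authors' code). A deliberately simple design effect, outcome-dependent sampling, stands in for the more complex stratified, clustered NFI design; all parameter values are invented. Under the prospective logistic model, ignoring the design here biases the intercept by roughly the log of the sampling-rate ratio, while probability weighting recovers it.

```python
import numpy as np

def fit_logistic(X, y, w=None, iters=30):
    # Newton-Raphson (IRLS) for logistic regression; w are optional
    # design weights (inverse inclusion probabilities)
    w = np.ones(len(y)) if w is None else w
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))
        beta += np.linalg.solve(
            (X * (w * mu * (1.0 - mu))[:, None]).T @ X,  # weighted Hessian
            X.T @ (w * (y - mu)))                        # weighted score
    return beta

rng = np.random.default_rng(1)
n = 40000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * x))))  # true beta = (-1, 0.5)

# Design effect: keep every case, but only 20% of non-cases.
pi = np.where(y == 1, 1.0, 0.2)
s = rng.uniform(size=n) < pi
b_w = fit_logistic(X[s], y[s], w=1.0 / pi[s])  # design-weighted fit
b_u = fit_logistic(X[s], y[s])                 # unweighted fit
print(b_w, b_u)
```

The weighted fit recovers (-1, 0.5); the unweighted intercept drifts toward -1 + log(1/0.2) ≈ 0.61, which is the kind of discrepancy the recommended comparison is meant to surface.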

9.
The aim of the present analysis is to combine evidence for association from the two most commonly used designs in genetic association analysis, the case-control design and the transmission disequilibrium test (TDT) design. The cases here are affected offspring from nuclear families and are used in both the case-control and TDT designs. As a result, inference from these designs is not independent. We applied a simple logistic regression method for combining evidence for association from case-control and TDT designs to single-nucleotide polymorphism data purchased on a region on chromosome 3, replicate 1 of the Aipotu population. Combining the evidence from the case-control and TDT designs yielded a 5-10% reduction in the standard errors of the relative risk estimates. The authors did not know the results before the analyses were conducted.

10.
Optimality assessment in the enzyme-linked immunosorbent assay (ELISA)
K F Karpinski. Biometrics 1990, 46(2):381-390
An optimality criterion is proposed for evaluating the precision of alternative designs in the enzyme-linked immunosorbent assay. Assay profiles are represented as four-parameter logistic functions with parameter estimation based on either a weighted nonlinear regression or a simple nonlinear regression after a logarithmic transformation. Assay design changes are characterized in terms of their effects on parameters in the four-parameter logistic model. General optimality results are derived for the variance of relative potency estimates in routine assay applications.
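The four-parameter logistic (4PL) profile and the relative-potency quantity the optimality results target can be sketched directly; the parameterization and all numeric values below are illustrative, not taken from the paper. Under parallelism (curves sharing the asymptotes and slope, differing only in EC50), the dose ratio at any response level is constant and equals the EC50 ratio.

```python
import numpy as np

def four_pl(x, a, d, c, b):
    # a: response at zero dose, d: response at infinite dose,
    # c: EC50, b: slope factor
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_4pl(y, a, d, c, b):
    # dose producing response y (used for potency interpolation)
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Reference and test preparations differing only in EC50.
a, d, b = 0.1, 2.0, 1.3
c_ref, c_test = 1.0, 0.5                  # test is twice as potent
y = four_pl(0.8, a, d, c_ref, b)          # a response on the reference curve
rho = inverse_4pl(y, a, d, c_ref, b) / inverse_4pl(y, a, d, c_test, b)
print(rho)
```

Here rho comes out as 2.0 at any chosen response level, which is why design optimality for ELISA is naturally stated in terms of the variance of the relative potency estimate rather than of any single curve parameter.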

11.
C B Begg, L A Kalish. Biometrics 1984, 40(2):409-420
Many clinical trials have a binary outcome variable. If covariate adjustment is necessary in the analysis, the logistic-regression model is frequently used. Optimal designs for allocating treatments for this model, or for any nonlinear or heteroscedastic model, are generally unbalanced with regard to overall treatment totals and totals within strata. However, all treatment-allocation methods that have been recommended for clinical trials in the literature are designed to balance treatments within strata, either directly or asymptotically. In this paper, the efficiencies of balanced sequential allocation schemes are measured relative to sequential Ds-optimal designs for the logistic model, using as examples completed trials conducted by the Eastern Cooperative Oncology Group and systematic simulations. The results demonstrate that stratified, balanced designs are quite efficient, in general. However, complete randomization is frequently inefficient, and will occasionally result in a trial that is very inefficient.

12.
In computerized adaptive testing (CAT), examinees are presented with various sets of items chosen from a precalibrated item pool. Consequently, the attrition speed of the items is extremely fast, and replenishing the item pool is essential. Therefore, item calibration has become a crucial concern in maintaining item banks. In this study, a two-parameter logistic model is used. We applied optimal designs and adaptive sequential analysis to solve this item calibration problem. The results indicated that the proposed optimal designs are cost effective and time efficient.
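A toy version of the calibration design problem (our construction, with invented nominal values): for a two-parameter logistic (2PL) item, search symmetric two-point ability designs for the one maximizing the determinant of the item-parameter Fisher information. The optimum lands at the classical D-optimal success probabilities of about 0.176 and 0.824 for two-parameter logistic models.

```python
import numpy as np

def det_info(thetas, a, b):
    # determinant of the (a, b) Fisher information for a 2PL item,
    # with equal numbers of examinees at each ability in `thetas`
    M = np.zeros((2, 2))
    for t in thetas:
        p = 1.0 / (1.0 + np.exp(-a * (t - b)))
        g = np.array([t - b, -a])          # gradient of the logit wrt (a, b)
        M += p * (1.0 - p) * np.outer(g, g) / len(thetas)
    return np.linalg.det(M)

a, b = 1.2, 0.3                            # nominal item parameters
grid = np.linspace(0.01, 4.0, 2000)        # symmetric offsets around b
delta = max(grid, key=lambda d: det_info([b - d, b + d], a, b))
p_upper = 1.0 / (1.0 + np.exp(-a * delta))
print(delta, p_upper)
```

In practice delta depends on the unknown (a, b), which is what motivates the sequential, adaptive refinement of calibration designs as response data accumulate.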

13.
In many large cohort studies of association between a disease and a concomitant variable, only a small fraction of subjects develop the disease. Substantial computational expense can be avoided by restricting the analysis to the diseased cases and a random sample of disease-free controls. This paper examines the efficiency of such synthetic retrospective designs relative to that of the full cohort analysis when the association is studied using the logistic or proportional hazards model. Within this context the efficiencies of matched vs. unmatched designs are also examined.

14.
T R Fears, C C Brown. Biometrics 1986, 42(4):955-960
There are a number of possible designs for case-control studies. The simplest uses two separate simple random samples, but an actual study may use more complex sampling procedures. Typically, stratification is used to control for the effects of one or more risk factors in which we are interested. It has been shown (Anderson, 1972, Biometrika 59, 19-35; Prentice and Pyke, 1979, Biometrika 66, 403-411) that the unconditional logistic regression estimators apply under stratified sampling, so long as the logistic model includes a term for each stratum. We consider the case-control problem with stratified samples and assume a logistic model that does not include terms for strata, i.e., for fixed covariates the (prospective) probability of disease does not depend on stratum. We assume knowledge of the proportion sampled in each stratum as well as the total number in the stratum. We use this knowledge to obtain the maximum likelihood estimators for all parameters in the logistic model including those for variables completely associated with strata. The approach may also be applied to obtain estimators under probability sampling.

15.
Sequential designs for phase I clinical trials which incorporate maximum likelihood estimates (MLE) as data accrue are inherently problematic because of limited data for estimation early on. We address this problem for small phase I clinical trials with ordinal responses. In particular, we explore the problem of the nonexistence of the MLE of the logistic parameters under a proportional odds model with one predictor. We incorporate the probability of an undetermined MLE as a restriction, as well as ethical considerations, into a proposed sequential optimal approach, which consists of a start-up design, a follow-on design and a sequential dose-finding design. Comparisons with nonparametric sequential designs are also performed based on simulation studies with parameters drawn from a real data set.

16.
The multistage carcinogenesis models describe a process by which a normal cell becomes malignant and gives rise to a tumor. This paper aims at evaluating the percentiles of the risk function derived as a dose-response relationship in a multistage model. These percentiles have been known as "virtual safe dose" levels or risk-specific dose levels. Optimal design theory is applied to estimate the appropriate percentile, and a sequential design approach is adopted through a stochastic approximation scheme. If the initial design is D-optimal, the limit design is D-optimal as well, and it is the one with minimum entropy.

17.
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider penalized maximum likelihood, with penalties including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath.

18.
J Benichou, M H Gail. Biometrics 1990, 46(4):991-1003
The attributable risk (AR), defined as AR = [Pr(disease) - Pr(disease | no exposure)]/Pr(disease), measures the proportion of disease risk that is attributable to an exposure. Recently Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) presented point estimates of AR based on logistic models for case-control data to allow for confounding factors and secondary exposures. To produce confidence intervals, we derived variance estimates for AR under the logistic model and for various designs for sampling controls. Calculations for discrete exposure and confounding factors require covariances between estimates of the risk parameters of the logistic model and the proportions of cases with given levels of exposure and confounding factors. These covariances are estimated from Taylor series expansions applied to implicit functions. Similar calculations for continuous exposures are derived using influence functions. Simulations indicate that these asymptotic procedures yield reliable variance estimates and confidence intervals with near-nominal coverage. An example illustrates the usefulness of variance calculations in selecting a logistic model that is neither so simplified as to exhibit systematic lack of fit nor so complicated as to inflate the variance of the estimate of AR.

19.
Odds ratios approximate risk ratios when the outcome under consideration is rare but can diverge substantially from risk ratios when the outcome is common. In this paper, we derive optimal analytic conversions of odds ratios and hazard ratios to risk ratios that are minimax for the bias ratio when outcome probabilities are specified to fall in any fixed interval. The results for hazard ratios are derived under a proportional hazard assumption for the exposure. For outcome probabilities specified to lie in symmetric intervals centered around 0.5, it is shown that the square-root transformation of the odds ratio is the optimal minimax conversion for the risk ratio. General results for any nonsymmetric interval are given both for odds ratio and for hazard ratio conversions. The results are principally useful when odds ratios or hazard ratios are reported in papers, and the reader does not have access to the data or to information about the overall outcome prevalence.
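A small numerical check of the square-root rule (our illustration, with OR = 4 and the symmetric interval [0.3, 0.7]; values not from the paper): constraining both outcome probabilities to the interval, the bias ratio of sqrt(OR) against the exact risk ratio is equalized in reciprocal pairs at the endpoints of the feasible baseline-risk range, which is the minimax behavior the abstract describes.

```python
import numpy as np

def rr_exact(or_, p0):
    # risk ratio implied by odds ratio `or_` and baseline (unexposed) risk p0
    return or_ / (1.0 - p0 + p0 * or_)

or_, lo, hi = 4.0, 0.3, 0.7
# Both outcome probabilities must lie in [lo, hi]; requiring the exposed
# risk p1 <= hi pins the largest feasible baseline risk.
p0_max = hi / (hi + or_ * (1.0 - hi))
p0 = np.linspace(lo, p0_max, 400)
bias = np.sqrt(or_) / rr_exact(or_, p0)   # bias ratio of the sqrt conversion
print(bias.min(), bias.max())
```

The minimum and maximum bias ratios come out as reciprocals (0.95 and 1/0.95 here), so no other single-number conversion can have a smaller worst-case bias ratio over this interval.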

20.
Yu Z. Human Heredity 2011, 71(3):171-179
The case-parents design has been widely used to detect genetic associations as it can prevent the spurious association that could occur in population-based designs. When examining the effect of an individual genetic locus on a disease, logistic regressions developed by conditioning on parental genotypes provide complete protection from spurious association caused by population stratification. However, when testing gene-gene interactions, it is unknown whether conditional logistic regressions are still robust. Here we evaluate the robustness and efficiency of several gene-gene interaction tests that are derived from conditional logistic regressions. We found that in the presence of SNP genotype correlation due to population stratification or linkage disequilibrium, tests with incorrectly specified main-genetic-effect models can lead to inflated type I error rates. We also found that a test with fully flexible main genetic effects always maintains the correct test size, and that its robustness is achieved with a negligible sacrifice of power. When testing gene-gene interactions is the focus, the test allowing fully flexible main effects is recommended.

