Similar Articles
20 similar articles found (search time: 15 ms)
1.
Inference after two‐stage single‐arm designs with binary endpoint is challenging due to the nonunique ordering of the sampling space in multistage designs. We illustrate the problem of specifying test‐compatible confidence intervals for designs with nonconstant second‐stage sample size and present two approaches that guarantee confidence intervals consistent with the test decision. Firstly, we extend the well‐known Clopper–Pearson approach of inverting a family of two‐sided hypothesis tests from the group‐sequential case to designs with fully adaptive sample size. Test compatibility is achieved by using a sample space ordering that is derived from a test‐compatible estimator. The resulting confidence intervals tend to be conservative but assure the nominal coverage probability. In order to assess the possibility of further improving these confidence intervals, we pursue a direct optimization approach minimizing the mean width of the confidence intervals. While the latter approach produces more stable coverage probabilities, it is also slightly anti‐conservative and yields only negligible improvements in mean width. We conclude that the Clopper–Pearson‐type confidence intervals based on a test‐compatible estimator are the best choice if the nominal coverage probability is not to be undershot and compatibility of test decision and confidence interval is to be preserved.
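A minimal sketch of the classical single-stage Clopper–Pearson interval that this approach generalizes to two-stage adaptive designs, assuming SciPy; the function name and trial counts are illustrative:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (1 - alpha) CI for a binomial proportion by test inversion."""
    lower = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    upper = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lower, upper

# e.g. 7 responses among 25 patients in a single-stage trial
print(clopper_pearson(7, 25))
```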

2.
The aim of dose finding studies is sometimes to estimate parameters in a fitted model. The precision of the parameter estimates should be as high as possible. This can be obtained by increasing the number of subjects in the study, N, choosing a good and efficient estimation approach, and by designing the dose finding study in an optimal way. Increasing the number of subjects is not always feasible because of increasing cost, time limitations, etc. In this paper, we assume fixed N and consider estimation approaches and study designs for multiresponse dose finding studies. We work with diabetes dose–response data and compare a system estimation approach that fits a multiresponse Emax model to the data to equation‐by‐equation estimation that fits uniresponse Emax models to the data. We then derive some optimal designs for estimating the parameters in the multi‐ and uniresponse Emax model and study the efficiency of these designs.
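For reference, a hedged sketch of fitting a uniresponse Emax model by nonlinear least squares; the dose–response values and starting values are made up for illustration, not taken from the diabetes data:

```python
import numpy as np
from scipy.optimize import curve_fit

def emax(dose, e0, emax_, ed50):
    """Uniresponse Emax model: E(d) = E0 + Emax * d / (ED50 + d)."""
    return e0 + emax_ * dose / (ed50 + dose)

dose = np.array([0.0, 5.0, 25.0, 50.0, 100.0, 150.0])   # hypothetical doses
resp = np.array([1.2, 2.4, 4.9, 6.1, 6.8, 7.0])         # hypothetical responses

popt, pcov = curve_fit(emax, dose, resp, p0=[1.0, 6.0, 25.0])
print("E0, Emax, ED50:", popt.round(2))
print("asymptotic SEs:", np.sqrt(np.diag(pcov)).round(2))
```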

3.
Robust and efficient design of experiments for the Monod model   (total citations: 1; self-citations: 0; cited by others: 1)
In this paper the problem of designing experiments for the Monod model, which is frequently used in microbiology, is studied. The model is defined implicitly by a differential equation and has numerous applications in microbial growth kinetics, environmental research, pharmacokinetics, and plant physiology. The designs presented so far in the literature are locally optimal designs, which depend sensitively on a preliminary guess of the unknown parameters and are therefore in many cases not robust to misspecification. Uniform designs and maximin optimal designs are considered as a strategy to obtain robust and efficient designs for parameter estimation. In particular, standardized maximin D- and E-optimal designs are determined and compared with uniform designs, which are usually applied in these microbiological models. It is demonstrated that maximin optimal designs are substantially more efficient than uniform designs. Parameter variances can be decreased by a factor of two by simply sampling at optimal times during the experiment. Moreover, the maximin optimal designs usually provide the possibility for the experimenter to check the model assumptions, because they have more support points than parameters in the Monod model.
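A minimal sketch of the Monod model as an ODE system, evaluated at candidate sampling times; the parameter values are assumed for illustration, and this shows only the model being designed for, not the paper's design algorithm:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K_s, Y = 0.5, 2.0, 0.4    # assumed growth rate, half-saturation, yield

def monod(t, z):
    X, S = z                      # biomass and substrate concentrations
    mu = mu_max * S / (K_s + S)   # Monod specific growth rate
    return [mu * X, -mu * X / Y]

# evaluate the trajectory at the sampling times of a candidate design
times = np.linspace(0.0, 24.0, 7)
sol = solve_ivp(monod, (0.0, 24.0), [0.05, 10.0], t_eval=times)
print(dict(zip(times.round(1), sol.y[0].round(3))))
```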

4.
Study planning often involves selecting an appropriate sample size. Power calculations require specifying an effect size and estimating “nuisance” parameters, e.g. the overall incidence of the outcome. For observational studies, an additional source of randomness must be estimated: the rate of the exposure. A poor estimate of any of these parameters will produce an erroneous sample size. Internal pilot (IP) designs reduce the risk of this error, leading to better resource utilization, by using revised estimates of the nuisance parameters at an interim stage to adjust the final sample size. In the clinical trials setting, where allocation to treatment groups is pre‐determined, IP designs have been shown to achieve the targeted power without introducing substantial inflation of the type I error rate. It has not been demonstrated whether the same general conclusions hold in observational studies, where exposure‐group membership cannot be controlled by the investigator. We extend the IP approach to observational settings. We demonstrate through simulations that implementing an IP design, in which the prevalence of the exposure is re‐estimated at an interim stage, helps ensure optimal power for observational research with little inflation of the type I error associated with the final data analysis.
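A hedged sketch of the internal-pilot idea for a two-group comparison of proportions: the effect size stays fixed while the nuisance incidence is re-estimated at the interim look. The normal-approximation formula and all numbers are illustrative, not the paper's exact procedure:

```python
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

risk_ratio = 1.5          # effect size fixed at the planning stage
p0_planned = 0.10         # guessed control-group incidence (nuisance parameter)
p0_interim = 0.16         # incidence re-estimated from the internal pilot data

print("planned n/group:", round(n_per_group(p0_planned, risk_ratio * p0_planned)))
print("revised n/group:", round(n_per_group(p0_interim, risk_ratio * p0_interim)))
```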

5.
Chromatography is an indispensable unit operation in the downstream processing of biomolecules. Scaling of chromatographic operations typically involves a significant increase in the column diameter. At this scale, the flow distribution within a packed bed could be severely affected by the distributor design in process scale columns. Different vendors offer process scale columns with varying design features. The effect of these design features on the flow distribution in packed beds and the resultant effect on column efficiency and cleanability needs to be properly understood in order to prevent unpleasant surprises on scale‐up. Computational Fluid Dynamics (CFD) provides a cost‐effective means to explore the effect of various distributor designs on process scale performance. In this work, we present a CFD tool that was developed and validated against experimental dye traces and tracer injections. Subsequently, the tool was employed to compare and contrast two commercially available header designs. © 2014 American Institute of Chemical Engineers Biotechnol. Prog., 30:837–844, 2014

6.
Ouwens MJ, Tan FE, Berger MP. Biometrics, 2002, 58(4):735-741
In this article, the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal cohort designs are computed numerically for the first- and second-degree polynomial models with random intercept, random slope, and first-order autoregressive serial correlations. Because the optimal designs are only locally optimal, the use of a maximin criterion is proposed. It is shown that, for a large class of symmetric designs, the smallest relative efficiency over the model parameter space is substantial.
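A minimal sketch of the D-criterion being optimized, for a first-degree polynomial with random intercept and slope; the variance components are assumed, and the serial correlation is omitted for brevity:

```python
import numpy as np

G = np.array([[1.0, 0.1],
              [0.1, 0.2]])        # assumed covariance of random intercept/slope
sigma2 = 1.0                      # residual variance (serial correlation omitted)

def log_det_info(times):
    X = np.column_stack([np.ones_like(times), times])  # first-degree polynomial
    V = X @ G @ X.T + sigma2 * np.eye(len(times))      # marginal covariance
    M = X.T @ np.linalg.solve(V, X)                    # Fisher information
    return np.linalg.slogdet(M)[1]

for design in ([0.0, 0.5, 1.0], [0.0, 0.25, 0.75, 1.0], [0.0, 1.0]):
    print(design, "->", round(log_det_info(np.array(design)), 3))
```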

7.
We propose a general method of designing an experiment when there are potentially failing trials. We use polynomial models and the Michaelis-Menten model as examples and construct different types of optimal designs under a broad class of response probability functions. We show that the usual optimal designs, that assume all observations are available at the end of the experiment, can be quite inefficient if the anticipated missingness pattern is not accounted for at the design stage. We also investigate robustness properties of the proposed designs to specification of their nominal values and the response probability functions.
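A hedged sketch of the idea: the information contributed by a trial at dose x is down-weighted by an assumed response probability r(x), and candidate designs are compared by the resulting D-criterion. Both designs and the logistic r are illustrative, not taken from the paper:

```python
import numpy as np

def r(x):
    """Assumed logistic response probability: trials at high x fail more often."""
    return 1.0 / (1.0 + np.exp(3.0 * (x - 0.8)))

def log_det_info(points, weights):
    F = np.column_stack([np.ones_like(points), points, points ** 2])
    M = (F * (weights * r(points))[:, None]).T @ F     # r(x)-weighted information
    return np.linalg.slogdet(M)[1]

w = np.full(3, 1 / 3)
usual = np.array([0.0, 0.5, 1.0])      # classical D-optimal support, quadratic model
shifted = np.array([0.0, 0.45, 0.9])   # support pulled toward reliable doses
for name, x in [("usual", usual), ("shifted", shifted)]:
    print(name, "->", round(log_det_info(x, w), 3))
```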

8.
Sequential designs for phase I clinical trials which incorporate maximum likelihood estimates (MLE) as data accrue are inherently problematic because of limited data for estimation early on. We address this problem for small phase I clinical trials with ordinal responses. In particular, we explore the problem of the nonexistence of the MLE of the logistic parameters under a proportional odds model with one predictor. We incorporate the probability of an undetermined MLE as a restriction, as well as ethical considerations, into a proposed sequential optimal approach, which consists of a start‐up design, a follow‐on design and a sequential dose‐finding design. Comparisons with nonparametric sequential designs are also performed based on simulation studies with parameters drawn from a real data set.

9.
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to address the phase II objectives; after collection of stage 2 data, a final confirmatory analysis is performed to address the phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control without an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. Because these methods are recent, there is little literature extensively comparing their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.

10.
Adaptive two‐stage designs allow a data‐driven change of design characteristics during the ongoing trial. One of the available options is an adaptive choice of the test statistic for the second stage of the trial based on the results of the interim analysis. Since there is often only vague knowledge of the distribution shape of the primary endpoint in the planning phase of a study, a change of the test statistic may be considered if the data indicate that the assumptions underlying the initial choice of the test are not correct. Collings and Hamilton proposed a bootstrap method for the estimation of the power of the two‐sample Wilcoxon test for shift alternatives. We use this approach for the selection of the test statistic. By means of a simulation study, we show that the gain in terms of power may be considerable when the initial assumption about the underlying distribution was wrong, whereas the loss is relatively small when the optimal test statistic was chosen in the first place. The results also hold true for comparison with a one‐stage design. Application of the method is illustrated by a clinical trial example.
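A minimal sketch of a Collings–Hamilton-style bootstrap power estimate for the two-sample Wilcoxon test under a shift alternative, resampling from stand-in interim data; the sample sizes and shift are illustrative:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
interim = rng.exponential(1.0, size=40)   # stand-in for pooled stage-1 data
shift, n, alpha, B = 0.8, 30, 0.05, 2000

rejections = 0
for _ in range(B):
    x = rng.choice(interim, size=n, replace=True)
    y = rng.choice(interim, size=n, replace=True) + shift   # shift alternative
    if mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha:
        rejections += 1
print("bootstrap power estimate:", rejections / B)
```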

11.
The mass balance of a glacier is an accepted measure of how much mass the glacier gains or loses. In theory it is computed as an integral functional; empirically it is approximated by an arithmetic mean. The variability of this approximation, however, has not yet been studied satisfactorily. In this paper we provide a dynamical system of mass balance measurements under the constraints of a second-order model with exponentially decreasing covariance. We also provide locations of optimal measurements, so-called designs. We study Ornstein–Uhlenbeck (OU) processes and sheets with linear drifts and introduce K-optimal designs in the correlated-process setup. We provide a thorough comparison of equidistant, Latin Hypercube Sample (LHS), and factorial designs for D- and K-optimality as well as for the variance. We show differences between these criteria and discuss the role of equidistant designs for the correlated process. In particular, application to the estimation of the mass balance of the Olivares Alfa and Beta glaciers in Chile is investigated, showing that naive application of a full raster design and kriging based on inter- and extrapolation of points can increase the variance. We also show how the removal of certain measurement points may increase the quality of the melting assessment while decreasing costs. Blow-ups of solutions of the dynamical systems underline the empirically observed fact that for a homogeneous glacier around 11 well-positioned stakes suffice for mass balance measurement.
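A minimal sketch of the kind of comparison involved: the average simple-kriging variance over a transect under an OU (exponential) covariance, for an equidistant versus a clustered stake design; the covariance parameters are assumed:

```python
import numpy as np

def exp_cov(s, t, sigma2=1.0, lam=0.5):
    """OU / exponential covariance; parameters assumed for illustration."""
    return sigma2 * np.exp(-lam * np.abs(np.subtract.outer(s, t)))

def mean_kriging_var(design, grid):
    C = exp_cov(design, design)              # covariance among stake locations
    c = exp_cov(design, grid)                # covariance stakes vs. prediction grid
    quad = np.einsum("ij,ij->j", np.linalg.solve(C, c), c)
    return (1.0 - quad).mean()               # average simple-kriging variance

grid = np.linspace(0.0, 10.0, 201)
for stakes in (np.linspace(0.0, 10.0, 6), np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])):
    print(stakes, "->", round(mean_kriging_var(stakes, grid), 4))
```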

12.
Understanding the forces that shape eco‐evolutionary patterns often requires linking phenotypes to genotypes, allowing characterization of these patterns at the molecular level. DNA‐based markers are less informative for this aim than markers associated with gene expression and, more specifically, with protein quantities. The characterization of eco‐evolutionary patterns also usually requires the analysis of large sample sizes to accurately estimate interindividual variability. However, the methods used to characterize and compare protein samples are generally expensive and time‐consuming, which constrains the size of the produced data sets to a few individuals. We present here a method that estimates the interindividual variability of protein quantities based on a global, semi‐automatic analysis of 1D electrophoretic profiles, opening the way to rapid analysis and comparison of hundreds of individuals. The main original features of the method are the in silico normalization of sample protein quantities using pictures of electrophoresis gels at different staining levels, as well as a new method of analysis of electrophoretic profiles based on a median profile. We demonstrate that this method can accurately discriminate between species and between geographically distant or close populations, based on interindividual variation in venom protein profiles from three endoparasitoid wasps of two different genera (Psyttalia concolor, Psyttalia lounsburyi and Leptopilina boulardi). Finally, we discuss the experimental designs that would benefit from the use of this method.

13.
A class of “incomplete” multivariate polynomial regression models on the q‐cube is considered. A simple algorithm for the determination of D‐optimal product designs is developed and illustrated by specific examples. The methods are implemented on an IBM‐compatible PC under MS‐DOS and provide an effective numerical solution to an optimal design problem that had been unsolved in nearly all cases of practical interest.

14.
Many variables and their interactions can affect a biotechnological process. Testing a large number of variables and all their possible interactions is a cumbersome task and its cost can be prohibitive. Several screening strategies, with a relatively low number of experiments, can be used to find which variables have the largest impact on the process and to estimate the magnitude of their effects. One approach to process screening is the use of experimental designs, among which fractional factorial and Plackett–Burman designs are frequent choices. Other screening strategies involve the use of artificial neural networks (ANNs). The advantage of ANNs is that they make fewer assumptions than experimental designs, but they render black-box models (i.e., little information can be extracted about the process mechanics). In this paper, we simulate a biotechnological process (fed-batch growth of baker's yeast) to analyze and compare the effect of random experimental errors of different magnitudes and statistical distributions on experimental designs and ANNs. Except for the situation in which the error has a normal distribution and a constant standard deviation, it was not possible to determine a clear-cut rule for favoring one screening strategy over the other. Instead, we found that the data can be better analyzed using both strategies simultaneously.
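A minimal sketch of one of the screening designs mentioned: a 2^(4-1) fractional factorial with generator D = ABC, and main-effect estimation by contrasts; the response values are hypothetical stand-ins for process measurements:

```python
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=3)))  # full 2^3 in A, B, C
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])  # D = ABC

y = np.array([4.1, 5.0, 4.8, 6.9, 4.3, 5.2, 5.1, 7.4])  # hypothetical responses
effects = design.T @ y / (len(y) / 2)                    # main-effect contrasts
for name, eff in zip("ABCD", effects):
    print(f"effect of {name}: {eff:+.2f}")
```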

15.
As phylogenetically controlled experimental designs become increasingly common in ecology, the need arises for a standardized statistical treatment of these datasets. Phylogenetically paired designs circumvent the need for resolved phylogenies and have been used to compare species groups, particularly in the areas of invasion biology and adaptation. Despite the widespread use of this approach, the statistical analysis of paired designs has not been critically evaluated. We propose a mixed model approach that includes random effects for pair and species. These random effects introduce a “two-layer” compound symmetry variance structure that captures both the correlations between observations on related species within a pair as well as the correlations between the repeated measurements within species. We conducted a simulation study to assess the effect of model misspecification on Type I and II error rates. We also provide an illustrative example with data containing taxonomically similar species and several outcome variables of interest. We found that a mixed model with species and pair as random effects performed better in these phylogenetically explicit simulations than two commonly used reference models (no or single random effect) by optimizing Type I error rates and power. The proposed mixed model produces acceptable Type I and II error rates despite the absence of a phylogenetic tree. This design can be generalized to a variety of datasets to analyze repeated measurements in clusters of related subjects/species.
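A hedged sketch of a two-layer "pair + species" random-effects structure fitted with statsmodels' MixedLM, with pair as the grouping factor and species as a variance component; the data frame is simulated stand-in data, and the paper's exact specification may differ:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for pair in range(20):
    pair_eff = rng.normal(0.0, 0.5)                 # shared effect of the pair
    for status in ("invasive", "native"):
        species = f"pair{pair}_{status}"
        sp_eff = rng.normal(0.0, 0.5)               # species-level effect
        for _ in range(4):                          # repeated measurements
            y = (status == "invasive") + pair_eff + sp_eff + rng.normal(0.0, 1.0)
            rows.append((f"pair{pair}", species, status, y))
df = pd.DataFrame(rows, columns=["pair", "species", "status", "y"])

fit = smf.mixedlm("y ~ status", df, groups="pair",
                  vc_formula={"species": "0 + C(species)"}).fit()
print(fit.summary())
```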

16.
Binomial group testing involves pooling individuals into groups and observing a binary response on each group. Results from the group tests can then be used to draw inference about population proportions. Its use as an experimental design has received much attention in recent years, especially in public‐health screening experiments and vector‐transfer designs in plant pathology. We investigate the benefits of group testing in situations wherein one desires to test whether or not probabilities are increasingly ordered across the levels of an observed qualitative covariate, i.e., across strata of a population or among treatment levels. We use a known likelihood ratio test for individual testing, but extend its use to group‐testing situations to show the increases in power conferred by using group testing when operating in this constrained parameter space. We apply our methods to data from an HIV study involving male subjects classified as intravenous drug users.
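A minimal sketch of the basic group-testing estimator underlying such analyses: the MLE of individual-level prevalence from pooled binary results, obtained by invariance from the per-group positivity rate; the counts are illustrative:

```python
n_groups, group_size, n_positive = 50, 5, 12   # illustrative pooled-test results

q_hat = 1 - n_positive / n_groups      # estimated P(a group tests negative)
p_hat = 1 - q_hat ** (1 / group_size)  # MLE of individual prevalence, by invariance
print(f"estimated individual prevalence: {p_hat:.4f}")
```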

17.
Molecular markers produced by next‐generation sequencing (NGS) technologies are revolutionizing genetic research. However, the costs of analysing large numbers of individual genomes remain prohibitive for most population genetics studies. Here, we present results based on mathematical derivations showing that, under many realistic experimental designs, NGS of DNA pools from diploid individuals allows the allele frequencies at single nucleotide polymorphisms (SNPs) to be estimated with at least the same accuracy as individual‐based analyses, for considerably lower library construction and sequencing efforts. These findings remain true when taking into account the possibility of substantially unequal contributions of each individual to the final pool of sequence reads. We propose the intuitive notion of effective pool size to account for unequal pooling and derive a Bayesian hierarchical model to estimate this parameter directly from the data. We provide a user‐friendly application assessing the accuracy of allele frequency estimation from both pool‐ and individual‐based NGS population data under various sampling, sequencing depth and experimental error designs. We illustrate our findings with theoretical examples and real data sets corresponding to SNP loci obtained using restriction site–associated DNA (RAD) sequencing in pool‐ and individual‐based experiments carried out on the same population of the pine processionary moth (Thaumetopoea pityocampa). NGS of DNA pools might not be optimal for all types of studies but provides a cost‐effective approach for estimating allele frequencies for very large numbers of SNPs. It thus allows comparison of genome‐wide patterns of genetic variation for large numbers of individuals in multiple populations.
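A hedged sketch of the two-stage sampling behind pool-seq allele-frequency estimation (chromosomes into the pool, then reads from the pool), ignoring the unequal individual contributions that the paper's effective pool size accounts for; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, depth, p_true = 50, 100, 0.3   # diploid pool size, read depth, true frequency

pool = rng.binomial(2 * N, p_true, size=10_000)   # alt alleles among 2N chromosomes
reads = rng.binomial(depth, pool / (2 * N))       # alt reads sequenced from the pool
p_hat = reads / depth                             # pool-based frequency estimate

print("mean estimate:", p_hat.mean().round(4))
print("empirical variance:", p_hat.var().round(5))
# reference point: binomial variance of individually genotyping N diploids
print("individual-based variance p(1-p)/(2N):", round(p_true * (1 - p_true) / (2 * N), 5))
```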

18.
Due to increasing discoveries of biomarkers and observed diversity among patients, there is growing interest in personalized medicine for the purpose of increasing the well‐being of patients (ethics) and extending human life. In fact, these biomarkers and the observed heterogeneity among patients are useful covariates that can be used to achieve the ethical goals of clinical trials and to improve the efficiency of statistical inference. Covariate‐adjusted response‐adaptive (CARA) designs were developed to use the information in such covariates in randomization to maximize the well‐being of participating patients as well as to increase the efficiency of statistical inference at the end of a clinical trial. In this paper, we establish conditions for consistency and asymptotic normality of maximum likelihood (ML) estimators of generalized linear models (GLM) for a general class of adaptive designs. We prove that the ML estimators are consistent and asymptotically follow a multivariate Gaussian distribution. The efficiency of the estimators and the performance of response‐adaptive (RA), CARA, and completely randomized (CR) designs are examined based on the well‐being of patients under a logit model with categorical covariates. Results from our simulation studies and an application to data from a clinical trial on stroke prevention in atrial fibrillation (SPAF) show that RA designs lead to ethically desirable outcomes as well as higher statistical efficiency compared to CARA designs if there is no treatment‐by‐covariate interaction in an ideal model. CARA designs were, however, more ethical than RA designs when there was a significant interaction.

19.
Early generation variety trials are very important in plant and tree breeding programs. Typically many entries are tested, often with very limited resources available. Unreplicated trials using control plots are popular and it is common to repeat the trials at a number of locations. An alternative is to use p-rep designs, in which a proportion of the test entries are replicated at each location; this can obviate the need for control plots. α-Designs are commonly used for replicated variety trials and we show how these can be adapted to produce efficient p-rep designs.
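A minimal sketch of the replication bookkeeping in a p-rep layout, where a proportion p of entries receives a second plot at each location; real p-rep and α-designs also optimize the blocking structure, which this toy allocation ignores:

```python
import numpy as np

rng = np.random.default_rng(2)
entries, p_rep, n_locations = np.arange(120), 0.25, 2

for loc in range(n_locations):
    extra = rng.choice(entries, size=int(p_rep * entries.size), replace=False)
    plots = np.concatenate([entries, extra])   # replicated entries get two plots
    rng.shuffle(plots)                         # stand-in for field randomization
    print(f"location {loc}: {plots.size} plots, {extra.size} duplicated entries")
```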

20.
This work shows that, during MD simulations aided by tiny external random forces, 3‐bromo‐4‐hydroxybenzoic acid (LHB), the product of reductive dehalogenation of 3,5‐dibromo‐4‐hydroxybenzoic acid (LBB) by the corrin‐based marine enzyme NpRdhA, is expelled mainly along the wide channel that connects the corrin to the external medium. Consistently, unbiased MD showed that LBB migrates relatively rapidly from the external medium to the inside of the channel, finally reaching the corrin active center of NpRdhA. The LBB pose, with the bromide head and carboxylate tail nearly equidistant from the corrin Co ion, does not fit the results of previous automatic docking. Either an experimental structure of the NpRdhA‐LBB complex or a quantum‐mechanical study of LBB at the corrin active site is therefore urged.
