Similar Articles
20 similar articles retrieved.
1.
Generalized cyclic designs in factorial experiments (cited by 1: 0 self-citations, 1 by others)
JOHN  J. A. 《Biometrika》1973,60(1):55-63

2.
Lin Y  Shih WJ 《Biometrics》2004,60(2):482-490
The main purpose of a phase IIA trial of a new anticancer therapy is to determine whether the therapy has sufficient promise against a specific type of tumor to warrant its further development. The therapy will be rejected for further investigation if the true response rate is less than some uninteresting level, and the test of hypothesis is powered at a specific target response rate. Two-stage designs are commonly used for this situation. However, investigators often express concern about uncertainty in targeting the alternative hypothesis to study power at the planning stage. In this article, motivated by a real example, we propose a strategy for adaptive two-stage designs that will use the information at the first stage of the study to either reject the therapy or continue testing with either an optimistic or a skeptic target response rate, while the type I error rate is controlled. We also introduce new optimal criteria to reduce the expected total sample size.
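The operating characteristics underlying such two-stage designs can be sketched numerically. Below is a minimal sketch of a *fixed* Simon-type two-stage design, not the authors' adaptive procedure; the function names and the design parameters `(n1, r1, n, r)` are illustrative.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    # empty sum for k < 0 is 0, i.e. stage 2 can no longer reject
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def two_stage_ocs(p, n1, r1, n, r):
    """Operating characteristics of a fixed two-stage design:
    stage 1 enrols n1 patients and stops (rejects the therapy) if
    responses <= r1; otherwise n - n1 more patients are enrolled and
    the therapy is rejected if total responses <= r."""
    pet = binom_cdf(r1, n1, p)                # probability of early termination
    reject = pet
    for x1 in range(r1 + 1, n1 + 1):          # stage-1 outcomes that continue
        reject += binom_pmf(x1, n1, p) * binom_cdf(r - x1, n - n1, p)
    expected_n = n1 + (1 - pet) * (n - n1)    # expected total sample size
    return reject, expected_n
```

The type I error rate (falsely advancing an ineffective therapy) at the uninteresting level p0 is `1 - reject` there, and power at the target rate p1 is `1 - reject` evaluated at p1; the adaptive strategy of the article chooses between an optimistic and a skeptic p1 using the stage-1 data.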

3.
4.
McNamee R 《Biometrics》2004,60(3):783-792
Two-phase designs for estimation of prevalence, where the first-phase classification is fallible and the second is accurate but relatively expensive, are not necessarily justified on efficiency grounds. However, they might be advantageous for dual-purpose studies, for example where prevalence estimation is followed by a clinical trial or case-control study, if they can identify cases of disease for the second study in a cost-effective way. Alternatively, they may be justified on ethical grounds if they can identify more, previously undetected but treatable cases of disease, than a simple random sample design. An approach to sampling is proposed, which formally combines the goals of efficient prevalence estimation and case detection by setting different notional study costs for investigating cases and noncases. Two variants of the method are compared with an "ethical" two-phase scheme proposed by Shrout and Newman (1989, Biometrics 45, 549-555), and with the most efficient scheme for prevalence estimation alone, in terms of the standard error of the prevalence estimate, the expected number of cases, and the fraction of cases among second-phase subjects, given a fixed budget. One variant yields the highest fraction and expected number of cases but also the largest standard errors. The other yields a higher fraction than Shrout and Newman's scheme and a similar number of cases but appears to do so more efficiently.

5.
Rosner B  Glynn RJ  Lee ML 《Biometrics》2006,62(4):1251-1259
The Wilcoxon rank sum test is widely used for two-group comparisons for nonnormal data. An assumption of this test is independence of sampling units both between and within groups. In ophthalmology, data are often collected on two eyes of an individual, which are highly correlated. In ophthalmological clinical trials, randomization is usually performed at the subject level, but the unit of analysis is the eye. If the eye is used as the unit of analysis, then a modification to the usual Wilcoxon rank sum variance formula must be made to account for the within-cluster dependence. For some clustered data designs, where the unit of analysis is the subunit, group membership may be defined at the subunit level. For example, in some randomized ophthalmologic clinical trials, different treatments may be applied to fellow eyes of some patients, while the same treatment may be applied to fellow eyes of other patients. In general, binary eye-specific covariates may be present (scored as exposed or unexposed) and one wishes to compare nonnormally distributed outcomes between exposed and unexposed eyes using the Wilcoxon rank sum test while accounting for the clustering. In this article, we present a corrected variance formula for the Wilcoxon rank sum statistic in the setting of eye (subunit)-specific covariates. We apply it to compare ocular itching scores in ocular allergy patients between eyes treated with active versus placebo eye drops, where some patients receive the same eye drop in both eyes, while other patients receive different eye drops in fellow eyes. We also present comparisons between the clustered Wilcoxon test and each of the signed rank tests and mixed model approaches and show dramatic differences in power in favor of the clustered Wilcoxon test for some designs.
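As a point of reference, the uncorrected rank-sum statistic and its usual independence-based variance can be sketched as follows; the cluster-corrected variance of the article is not reproduced here, and the function name is illustrative.

```python
def rank_sum_test(x, y):
    """Standard Wilcoxon rank-sum statistic for group x and its
    large-sample z value under the usual independence assumption.
    With fellow eyes as analysis units this variance understates the
    true one; the article's correction (not shown) accounts for the
    within-cluster dependence."""
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    vals = [v for v, _ in pooled]
    ranks = [0.0] * len(vals)
    i = 0
    while i < len(vals):                  # assign midranks to ties
        j = i
        while j < len(vals) and vals[j] == vals[i]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2    # average of ranks i+1 .. j
        i = j
    w = sum(r for r, (_, g) in zip(ranks, pooled) if g == 0)
    m, n = len(x), len(y)
    mean = m * (m + n + 1) / 2
    var = m * n * (m + n + 1) / 12        # tie correction omitted for brevity
    z = (w - mean) / var ** 0.5
    return w, z
```

Because the naive variance treats every eye as independent, using it on paired-eye data typically makes `z` too large in magnitude, which is why the corrected formula matters.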

6.
Process optimisation techniques increasingly need to be used early on in research and development of processes for new ingredients. There are different approaches, and this article illustrates the main issues at stake with a method that is an industry best practice, the Taguchi method, suggesting a procedure to assess the potential impact of its drawbacks. The Taguchi method has been widely used in various industrial sectors because it minimises the experimental requirements to define an optimum region of operation, which is particularly relevant when minimising variability is a target. However, it also has drawbacks, especially the intricate confoundings generated by the experimental designs used. This work reports a process optimisation of the synthesis of red pigments by a fungal strain (Talaromyces spp.) using the Taguchi methodology and proposes an approach to assess from validation trials whether the conclusions can be accepted with confidence. The work focused on optimising the inoculum characteristics, and the studied factors were spore age and concentration, agitation speed and incubation time. It was concluded that spore age was the most important factor for both responses, with optimum results at 5 days old, the best other conditions being spore concentration, 100,000 spores/mL; agitation, 200 rpm; and incubation time, 84 h. The interactive effects can be considered negligible, and therefore this is an example where a simple experimental design approach was successful in speedily indicating conditions able to increase pigment production by 63% compared to an average choice of settings. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:621–632, 2017
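A Taguchi-style screening of four three-level factors, as in this study, fits on a standard L9 orthogonal array. The sketch below computes main-effect means per factor level and picks the best level of each; the response values come from a toy function standing in for measured pigment yield, not from the reported data, and all names are illustrative.

```python
# Standard L9 orthogonal array: 9 runs, 4 three-level factors (levels 0,1,2).
# Every pair of columns contains each of the 9 level combinations exactly once.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

def main_effects(runs, responses):
    """Average response at each level of each factor (the Taguchi
    main-effects table, reduced to raw means for brevity)."""
    k = len(runs[0])
    effects = []
    for f in range(k):
        means = []
        for lvl in range(3):
            ys = [y for run, y in zip(runs, responses) if run[f] == lvl]
            means.append(sum(ys) / len(ys))
        effects.append(means)
    return effects

# Toy response surface (hypothetical): factor f contributes (3 - f) * level,
# so factor 0 matters most and factor 3 not at all.
toy = [sum((3 - f) * lvl for f, lvl in enumerate(run)) for run in L9]
eff = main_effects(L9, toy)
best = [m.index(max(m)) for m in eff]   # best level per factor
```

Because the array is orthogonal, each factor's level means isolate that factor's contribution; in the toy surface the inert fourth factor produces three identical means, mirroring how a negligible factor shows up in a real Taguchi analysis.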

7.
Franklin and Bailey (1977) provided an algorithm for construction of fractional factorial designs for estimating a user-specified set of factorial effects. Their algorithm is based on a backtrack procedure, which is computationally intensive when the number of factors is not small. We propose a stochastic search method called the SEF (sequential elimination of factors) algorithm. The SEF algorithm is a simple modification of the exhaustive approach of the Franklin-Bailey algorithm, since defining contrasts for the design of interest are chosen stochastically rather than in a systematic and exhaustive manner. Our experience shows the probability of success of obtaining a required design to be sufficiently large to make this a practical approach. The success probability may be expected to be rather small if the required design is close to a saturated design. We suggest the use of this stochastic alternative particularly when the number of factors is large, as this can offer substantial savings in computing time relative to an exhaustive approach. Moreover, if the SEF algorithm fails to produce a design even after several attempts, one can always revert to the Franklin-Bailey approach.
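The stochastic-choice idea can be illustrated for two-level factorials by coding effects as bitmasks over GF(2). This is a simplified sketch of random defining-contrast selection only, not the authors' sequential-elimination algorithm, and all names and parameters are illustrative.

```python
import random

def subgroup(words):
    """Defining contrast subgroup generated by the chosen words:
    all XOR combinations, with the identity (0) excluded."""
    group = {0}
    for w in words:
        group |= {g ^ w for g in group}
    return group - {0}

def random_defining_words(k, p, required, tries=2000, seed=0):
    """Stochastically hunt for p independent defining words of a
    2^(k-p) fraction such that no required effect is aliased with
    the mean or with another required effect."""
    rng = random.Random(seed)
    effects = list(range(1, 2 ** k))          # nonzero effect bitmasks
    for _ in range(tries):
        words = rng.sample(effects, p)
        g = subgroup(words)
        if len(g) != 2 ** p - 1:
            continue                          # words were not independent
        alias = {a ^ b for a in required for b in required if a != b}
        if g.isdisjoint(required) and g.isdisjoint(alias):
            return sorted(g)
    return None                               # fall back to exhaustive search
```

For example, with k = 4 factors, p = 1, and the four main effects required, any acceptable defining word must involve at least three letters; random draws find one quickly, mirroring the article's observation that the success probability is high except near saturated designs.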

8.
Extreme Vertices designs were developed by McLean and Anderson (1966) for situations where components of a mixture are restricted by lower and upper bounds. Snee and Marquardt (1974) and Snee (1975) gave algorithms to construct optimum designs in these situations. Saxena and Nigam (1975) developed a transformation which provides designs for restricted exploration using Symmetric Simplex designs. In this paper a procedure is given which provides alternative designs with uniform exploration in constrained mixture experiments. The procedure is illustrated by an example.

9.
Comparative experiments involve the allocation of treatments to units, ideally by randomization. This necessarily confounds treatment information with unit information, which we distinguish from the other forms of information blending, in particular aliasing and marginality. We outline a factor-allocation paradigm for describing experimental designs with the aim of (i) exhibiting the confounding in a design, using analysis-of-variance-like tables, so as to understand and evaluate the design and (ii) formulating a linear mixed model based on the factor allocation that the design involves. The approach exhibits the dispersal of treatment information between unit sources, allows designers a choice in the strategy that they adopt for including block-treatment interactions, clarifies differences between experiments, accommodates systematic allocation of factors, and provides a consolidated analysis of nonorthogonal designs. It provides insights into the process of designing experiments and issues that commonly arise with designs. The paradigm has pedagogical advantages and is implemented using the R package dae.

10.
Experimental design applications for discriminating between models have been hampered by the assumption that the true model is known beforehand, which runs counter to the very aim of the experiment. Previous approaches to alleviate this requirement were either symmetrizations of asymmetric techniques, or Bayesian, minimax, and sequential approaches. Here we present a genuinely symmetric criterion based on a linearized distance between mean-value surfaces and the newly introduced tool of flexible nominal sets. We demonstrate the computational efficiency of the approach using the proposed criterion and provide a Monte Carlo evaluation of its discrimination performance on the basis of the likelihood ratio. An application to a pair of competing models in enzyme kinetics is given.

11.
12.
13.
A method of constructing balanced and partially balanced ternary designs from balanced and partially balanced incomplete block designs, respectively, and two methods of constructing partially balanced ternary designs from association schemes are obtained. Two new and efficient balanced ternary designs having K < V and R ≤ 20 are obtained by the first method.

14.
In this paper systematic designs for experiments with mixtures are developed. The plan of analysis of the experiment is discussed for the quadratic model of Scheffé (1958) for a q-component mixture, with orthogonal polynomials of third degree describing the time trends.
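The Scheffé quadratic model referred to above can be sketched directly, together with the closed-form coefficient recovery from a {3,2} simplex-lattice (pure blends plus 50:50 binary blends); the coefficient values below are illustrative, not from the paper.

```python
from itertools import combinations

def scheffe_quadratic(x, b_lin, b_quad):
    """Scheffé quadratic mixture model for proportions x summing to 1:
    y = sum_i b_i * x_i + sum_{i<j} b_ij * x_i * x_j."""
    y = sum(b * xi for b, xi in zip(b_lin, x))
    for (i, j), b in zip(combinations(range(len(x)), 2), b_quad):
        y += b * x[i] * x[j]
    return y

# {3,2} simplex-lattice: 6 points for the 6 coefficients of a 3-component model
lattice = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]

def fit_lattice(ys):
    """Closed-form coefficients from {3,2} lattice responses:
    b_i is the pure-blend response; b_ij = 4*y_ij - 2*(y_i + y_j)."""
    y1, y2, y3, y12, y13, y23 = ys
    return [y1, y2, y3], [4 * y12 - 2 * (y1 + y2),
                          4 * y13 - 2 * (y1 + y3),
                          4 * y23 - 2 * (y2 + y3)]
```

Because the lattice has exactly as many points as the model has coefficients, the fit is saturated and the coefficients are recovered exactly; the systematic designs of the paper extend this by crossing such mixture points with time-trend polynomials.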

15.
(Cited by 2: 0 self-citations, 2 by others)
Horton NJ  Laird NM 《Biometrics》2001,57(1):34-42
This article presents a new method for maximum likelihood estimation of logistic regression models with incomplete covariate data where auxiliary information is available. This auxiliary information is extraneous to the regression model of interest but predictive of the covariate with missing data. Ibrahim (1990, Journal of the American Statistical Association 85, 765-769) provides a general method for estimating generalized linear regression models with missing covariates using the EM algorithm that is easily implemented when there is no auxiliary data. Vach (1997, Statistics in Medicine 16, 57-72) describes how the method can be extended when the outcome and auxiliary data are conditionally independent given the covariates in the model. The proposed method allows the incorporation of auxiliary data without making the conditional independence assumption. We suggest tests of conditional independence and compare the performance of several estimators in an example concerning mental health service utilization in children. Using an artificial dataset, we compare the performance of several estimators when auxiliary data are available.

16.
A fast and efficient preparative HPLC-PDA method was developed for the separation and isolation of four rare isomeric kaempferol diglycosides from leaves of Prunus spinosa L. The separation procedure of the enriched diglycoside fraction of the 70% (v/v) aqueous methanolic leaf extract was first optimised on analytical XBridge C18 column (100 mm × 4.6 mm i.d., 5 μm) and central composite design combined with response surface methodology was utilized to establish the optimal separation conditions. The developed method was directly transferred to preparative XBridge Prep C18 column (100 mm × 19 mm i.d., 5 μm) and the final separation was accomplished by isocratic elution with 0.5% acetic acid-methanol-tetrahydrofuran (75.2:16.6:8.2, v/v/v) as the mobile phase, at a flow rate of 13.6 mL/min, in less than 12 min for a single run. Under these conditions, four flavonoid diglycosides: kaempferol 3-O-α-l-arabinofuranoside-7-O-α-l-rhamnopyranoside, kaempferol 3,7-di-O-α-l-rhamnopyranoside (kaempferitrin), and reported for the first time for P. spinosa kaempferol 3-O-β-d-xylopyranoside-7-O-α-l-rhamnopyranoside (lepidoside) and kaempferol 3-O-α-l-arabinopyranoside-7-O-α-l-rhamnopyranoside, were isolated in high separation yield (84.8–94.5%) and purity (92.45–99.79%). Their structures were confirmed by extensive 1D and 2D NMR studies. Additionally, the UHPLC-PDA-ESI–MS3 qualitative profiling led to the identification of twenty-one phenolic compounds and confirmed that the isolates were the major components of the leaf material.

17.
18.
19.
20.
Characterization of purification processes by identifying significant input parameters and establishing predictive models is vital to developing robust processes. Current experimental design approaches restrict analysis to one process step at a time, which can severely limit the ability to identify interactions between process steps. This can be overcome by the use of partition designs which can model multiple, sequential process steps simultaneously. This paper presents the application of partition designs to a monoclonal antibody purification process. Three sequential purification steps were modeled using both traditional experimental designs and partition designs and the results compared as a proof of concept study. The partition and traditional design approaches identified the same input parameters within each process step that significantly affected the product quality output examined. The partition design also identified significant interactions between input parameters across process steps that could not be uncovered by the traditional approach. Biotechnol. Bioeng. 2010;107: 814–824. © 2010 Wiley Periodicals, Inc.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号