Similar articles
20 similar articles found (search time: 515 ms)
1.
Sequential designs for phase I clinical trials which incorporate maximum likelihood estimates (MLE) as data accrue are inherently problematic because of limited data for estimation early on. We address this problem for small phase I clinical trials with ordinal responses. In particular, we explore the problem of the nonexistence of the MLE of the logistic parameters under a proportional odds model with one predictor. We incorporate the probability of an undetermined MLE as a restriction, as well as ethical considerations, into a proposed sequential optimal approach, which consists of a start-up design, a follow-on design and a sequential dose-finding design. Comparisons with nonparametric sequential designs are also performed based on simulation studies with parameters drawn from a real data set.
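Under a proportional odds model with a single dose predictor, the category probabilities that drive both the likelihood and the MLE-existence question can be sketched as follows (the sign convention and all parameter values here are illustrative assumptions, not taken from the paper):

```python
import math

def prop_odds_probs(x, alphas, beta):
    """Category probabilities for an ordinal response under a proportional
    odds model with one predictor x, assuming the cumulative-logit form
    P(Y <= j | x) = logistic(alphas[j] - beta * x) with increasing alphas."""
    cum = [1.0 / (1.0 + math.exp(-(a - beta * x))) for a in alphas]
    cum.append(1.0)  # P(Y <= last category) = 1
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
```

With only a handful of early observations, all responses can fall in the same category, in which case the likelihood for (alphas, beta) has no finite maximizer; the abstract's "probability of an undetermined MLE" refers to exactly this event.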

2.
Most biotechnology unit operations are complex in nature, with numerous process variables, feed material attributes, and raw material attributes that can have a significant impact on the performance of the process. A design of experiments (DOE)-based approach offers a solution to this conundrum and allows for efficient estimation of the main effects and the interactions with a minimal number of experiments. Numerous publications illustrate the application of DOE to the development of different bioprocessing unit operations. However, a systematic approach for evaluating the different DOE designs and for choosing the optimal design for a given application has not yet been published. Through this work we have compared the I-optimal and D-optimal designs to the commonly used central composite and Box–Behnken designs for bioprocess applications. A systematic methodology is proposed for construction of the model and for precise prediction of the responses for three case studies involving some of the commonly used unit operations in downstream processing. Use of the Akaike information criterion for model selection has been examined and found to be suitable for the applications under consideration. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 30:86–99, 2014
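The Akaike-information-criterion step mentioned above can be illustrated with the least-squares form of AIC; the run counts and residual sums of squares below are made-up numbers, not the paper's case-study data:

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for a least-squares fit with Gaussian
    errors: AIC = n * ln(RSS / n) + 2k (additive constants dropped)."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical candidate models fitted to the same n = 20 DOE runs:
candidates = {"main effects": (20, 4.1, 4), "with interactions": (20, 2.9, 7)}
best = min(candidates, key=lambda m: aic(*candidates[m]))
```

The lower-AIC model wins only if its extra parameters buy enough reduction in residual error, which is the trade-off the criterion formalizes.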

3.
《MABS-AUSTIN》2013,5(4):1094-1102
The objectives of this retrospective analysis were (1) to characterize the population pharmacokinetics (popPK) of four different monoclonal antibodies (mAbs) in a combined analysis of individual data collected during first-in-human (FIH) studies and (2) to provide a scientific rationale for prospective design of FIH studies with mAbs. The data set was composed of 171 subjects contributing a total of 2716 mAb serum concentrations, following intravenous (IV) and subcutaneous (SC) doses. mAb PK was described by an open 2-compartment model with first-order elimination from the central compartment and a depot compartment with first-order absorption. Parameter values obtained from the popPK model were further used to generate optimal sampling times for a single dose study. A robust fit to the combined data from four mAbs was obtained using the 2-compartment model. Population parameter estimates for systemic clearance and central volume of distribution were 0.20 L/day and 3.6 L, with intersubject variability of 31% and 34%, respectively. The random residual error was 14%. Differences (>2-fold) in PK parameters were not apparent across mAbs. Rich designs (22 samples/subject), minimal designs for popPK (5 samples/subject), and optimal designs for non-compartmental analysis (NCA) and popPK (10 samples/subject) were examined by stochastic simulation and estimation. Single-dose PK studies for linear mAbs executed using the optimal designs are expected to yield high-quality model estimates and accurate NCA estimates. This model-based meta-analysis has determined typical popPK values for four mAbs with linear elimination and enabled prospective optimization of FIH study designs, potentially improving the efficiency of FIH studies for this class of therapeutics.
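A minimal sketch of the reported structural model for the IV route — two compartments with first-order elimination from the central compartment — using the published typical values CL = 0.20 L/day and Vc = 3.6 L; the inter-compartmental clearance Q and peripheral volume V2 are invented here, since the abstract does not report them:

```python
def simulate_2cmt_iv(dose, cl, v1, q, v2, t_end, dt=0.01):
    """Euler simulation of a 2-compartment model after an IV bolus, with
    first-order elimination (clearance cl) from the central compartment."""
    a1, a2 = dose, 0.0  # drug amounts in central / peripheral compartments
    conc = []
    t = 0.0
    while t <= t_end:
        c1, c2 = a1 / v1, a2 / v2
        conc.append(c1)
        a1 += (-cl * c1 - q * c1 + q * c2) * dt  # elimination + distribution
        a2 += (q * c1 - q * c2) * dt             # distribution only
        t += dt
    return conc

# 100 mg IV bolus; CL and V1 from the abstract, Q and V2 hypothetical:
conc = simulate_2cmt_iv(dose=100.0, cl=0.20, v1=3.6, q=0.5, v2=2.5, t_end=30.0)
```

Simulated profiles like this one are what the stochastic simulation-and-estimation step samples at the candidate (rich, minimal, optimal) design time points.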


5.
The problem of finding confidence regions for multiple predictor variables corresponding to given expected values of a response variable has not been adequately resolved. Motivated by an example from a study on hyperbaric exposure using a logistic regression model, we develop a conceptual framework for the estimation of the multi-dimensional effective dose for binary outcomes. The k-dimensional effective dose can be determined by conditioning on k-1 components and solving for the last component as a conditional univariate effective dose. We consider various approaches for calculating confidence regions for the multi-dimensional effective dose and compare them via a simulation study for a range of possible designs. We analyze data related to decompression sickness to illustrate our procedure. Our results provide a practical approach to finding confidence regions for predictor variables for a given response value.
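For a logistic model, the conditioning idea — fix k-1 predictors and solve a univariate effective dose for the last one — reduces to inverting the linear predictor; the coefficients below are hypothetical:

```python
import math

def effective_dose_last(p, beta0, betas, x_fixed):
    """Solve for the last predictor value giving P(Y=1) = p in a logistic
    model, conditioning on the first k-1 predictors held at x_fixed."""
    logit = math.log(p / (1.0 - p))
    partial = beta0 + sum(b * x for b, x in zip(betas[:-1], x_fixed))
    return (logit - partial) / betas[-1]
```

Sweeping the conditioned components over a grid and solving for the last one traces out the full k-dimensional effective-dose surface.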

6.
Aims: To develop time-dependent dose–response models for highly pathogenic avian influenza A (HPAI) virus of the H5N1 subtype. Methods and Results: A total of four candidate time-dependent dose–response models were fitted to four survival data sets for animals (mice or ferrets) exposed to graded doses of HPAI H5N1 virus using maximum-likelihood estimation. A beta-Poisson dose–response model with the N50 parameter modified by an exponential-inverse-power time dependency, or an exponential dose–response model with the k parameter modified by an exponential-inverse time dependency, provided a statistically adequate fit to the observed survival data. Conclusions: We have successfully developed time-dependent dose–response models that describe the mortality of animals exposed to an HPAI H5N1 virus. The developed models describe the mortality over time and represent the observed experimental responses accurately. Significance and Impact of the Study: This is the first study describing time-dependent dose–response models for HPAI H5N1 virus. The developed models will be a useful tool for estimating the mortality of HPAI H5N1 virus, which may depend on time postexposure, in preparation for a future influenza pandemic caused by this lethal virus.
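One of the fitted forms — an exponential dose-response model whose k parameter carries an exponential-inverse time dependency — might be sketched as below; the abstract does not give the exact parameterization, so this functional form and its constants are assumptions:

```python
import math

def exp_dose_response(dose, t_days, k0, j):
    """Exponential dose-response model with a hypothesized time-dependent
    parameter k(t) = k0 * exp(-j / t):  P(death by time t) = 1 - exp(-k(t) * dose)."""
    k = k0 * math.exp(-j / t_days)
    return 1.0 - math.exp(-k * dose)
```

The key behavior any such form must reproduce is that mortality rises both with dose and with time postexposure, approaching a dose-determined asymptote.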

7.
Zhou X  Joseph L  Wolfson DB  Bélisle P 《Biometrics》2003,59(4):1082-1088
Suppose that the true model underlying a set of data is one of a finite set of candidate models, and that parameter estimation for this model is of primary interest. With this goal, optimal design must depend on a loss function across all possible models. A common method that accounts for model uncertainty is to average the loss over all models; this is the basis of what is known as Läuter's criterion. We generalize Läuter's criterion and show that it can be placed in a Bayesian decision theoretic framework, by extending the definition of Bayesian A-optimality. We use this generalized A-optimality to find optimal design points in an environmental safety setting. In estimating the smallest detectable trace limit in a water contamination problem, we obtain optimal designs that are quite different from those suggested by standard A-optimality.

8.
A lipase from Candida cylindracea immobilized by adsorption on microporous polypropylene fibers was used to selectively hydrolyze the saturated and monounsaturated fatty acid residues of menhaden oil at 40°C and pH 7.0. At a space time of 3.5 h, the shell-and-tube reactor containing these hollow fibers gives a fractional release of each of the saturated and monounsaturated fatty acid residues (i.e., C14, C16, C16:1, C18:1) of ca. 88% of the corresponding possible asymptotic value. The corresponding coproduct glycerides retained over 90% of the initial residues of both eicosapentaenoic (EPA; C20:5) and docosahexaenoic (DHA; C22:6) acids. The half-life of the immobilized lipase was 170 h when the reactor was operated at the indicated (optimum) conditions. Rate expressions associated with a generic ping-pong bi-bi mechanism were used to fit the experimental data for the lipase-catalyzed reaction. Both uni- and multiresponse nonlinear regression methods were employed to determine the kinetic parameters associated with these rate expressions. The best statistical fit of the uniresponse data was obtained for a rate expression which is formally equivalent to a general Michaelis-Menten mechanism. After reparameterization, this rate expression reduced to a pseudo-first-order model. For the multiresponse analysis, a model that employed a normal distribution of the ratio Vmax/Km with respect to the chain length of the fatty acid residues provided the best statistical fit of the experimental data.
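The reported pseudo-first-order behavior implies a simple closed form for the fractional release, which also lets one back out the rate constant implied by the ~88% release at a 3.5 h space time (an illustrative calculation, not the authors' fitted parameters):

```python
import math

def fractional_release(t_h, k):
    """Pseudo-first-order release: fraction of the asymptotic value
    released after t_h hours, f(t) = 1 - exp(-k * t)."""
    return 1.0 - math.exp(-k * t_h)

# Rate constant implied by ~88% fractional release at a 3.5 h space time:
k_implied = -math.log(1.0 - 0.88) / 3.5  # in 1/h
```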

9.
The utility of clinical trial designs with adaptive patient enrichment is investigated in an adequate and well-controlled trial setting. The overall treatment effect is the weighted average of the treatment effects in the mutually exclusive subsets of the originally intended entire study population. The adaptive enrichment approaches permit assessment of a treatment effect that may be applicable to specific nested patient (sub)sets, due to heterogeneous patient characteristics and/or differential response to treatment (e.g., a responsive patient subset versus a subset lacking benefit), in all patient (sub)sets studied. The adaptive enrichment approaches considered include three adaptive design scenarios: (i) total sample size fixed and with futility stopping, (ii) sample size adaptation and futility stopping, and (iii) sample size adaptation without futility stopping. We show that, regardless of whether the treatment effect eventually assessed is applicable to the originally studied patient population or only to the nested patient subsets, it is possible to devise an adaptive enrichment approach that statistically outperforms a one-size-fits-all fixed design approach and the fixed design with a pre-specified multiple test procedure. We emphasize the need for additional studies to replicate the finding of a treatment effect in an enriched patient subset. The replication studies are likely to need fewer patients because of an identified treatment effect size that is larger than the diluted overall effect size. The adaptive designs, when applicable, are in line with efficiency considerations in a drug development program.

10.
The two-stage case–control design has been widely used in epidemiology studies for its cost-effectiveness and improvement of study efficiency (White, 1982, American Journal of Epidemiology 115, 119–128; Breslow and Cain, 1988, Biometrika 75, 11–20). The evolution of modern biomedical studies has called for cost-effective designs with a continuous outcome and exposure variables. In this article, we propose a new two-stage outcome-dependent sampling (ODS) scheme with a continuous outcome variable, where both the first-stage data and the second-stage data are from ODS schemes. We develop a semiparametric empirical likelihood estimation for inference about the regression parameters in the proposed design. Simulation studies were conducted to investigate the small-sample behavior of the proposed estimator. We demonstrate that, for a given statistical power, the proposed design will require a substantially smaller sample size than the alternative designs. The proposed method is illustrated with an environmental health study conducted at the National Institutes of Health.

11.
Missing outcomes or irregularly timed multivariate longitudinal data frequently occur in clinical trials or biomedical studies. The multivariate t linear mixed model (MtLMM) has been shown to be a robust approach to modeling multioutcome continuous repeated measures in the presence of outliers or heavy-tailed noises. This paper presents a framework for fitting the MtLMM with an arbitrary missing data pattern embodied within multiple outcome variables recorded at irregular occasions. To address the serial correlation among the within-subject errors, a damped exponential correlation structure is considered in the model. Under the missing at random mechanism, an efficient alternating expectation-conditional maximization (AECM) algorithm is used to carry out estimation of parameters and imputation of missing values. The techniques for the estimation of random effects and the prediction of future responses are also investigated. Applications to an HIV-AIDS study and a pregnancy study involving analysis of multivariate longitudinal data with missing outcomes as well as a simulation study have highlighted the superiority of MtLMMs on the provision of more adequate estimation, imputation and prediction performances.
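The damped exponential correlation structure mentioned for the within-subject errors interpolates between compound symmetry and a continuous-time AR(1); parameter values below are illustrative:

```python
def damped_exp_corr(times, phi, theta):
    """Damped exponential correlation for within-subject errors:
    corr(e_i, e_j) = phi ** (|t_i - t_j| ** theta).  theta = 1 gives a
    continuous-time AR(1); theta = 0 gives compound symmetry."""
    n = len(times)
    return [[phi ** (abs(times[i] - times[j]) ** theta) if i != j else 1.0
             for j in range(n)] for i in range(n)]
```

Because the measurement occasions enter only through their pairwise gaps, the structure handles irregularly timed visits without modification.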

12.
Molecular markers produced by next-generation sequencing (NGS) technologies are revolutionizing genetic research. However, the costs of analysing large numbers of individual genomes remain prohibitive for most population genetics studies. Here, we present results based on mathematical derivations showing that, under many realistic experimental designs, NGS of DNA pools from diploid individuals makes it possible to estimate allele frequencies at single nucleotide polymorphisms (SNPs) with at least the same accuracy as individual-based analyses, for considerably lower library construction and sequencing efforts. These findings remain true when taking into account the possibility of substantially unequal contributions of each individual to the final pool of sequence reads. We propose the intuitive notion of effective pool size to account for unequal pooling and derive a Bayesian hierarchical model to estimate this parameter directly from the data. We provide a user-friendly application assessing the accuracy of allele frequency estimation from both pool- and individual-based NGS population data under various sampling, sequencing depth and experimental error designs. We illustrate our findings with theoretical examples and real data sets corresponding to SNP loci obtained using restriction site-associated DNA (RAD) sequencing in pool- and individual-based experiments carried out on the same population of the pine processionary moth (Thaumetopoea pityocampa). NGS of DNA pools might not be optimal for all types of studies but provides a cost-effective approach for estimating allele frequencies for very large numbers of SNPs. It thus allows comparison of genome-wide patterns of genetic variation for large numbers of individuals in multiple populations.
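The pooled-versus-individual comparison rests on how the sampling variance of a pooled allele-frequency estimate splits between sequencing depth and the (effective) number of pooled chromosomes. A rough sketch of that decomposition — the variance formula here is a common binomial-sampling approximation, treated as an assumption rather than the authors' derivation:

```python
def pooled_freq_and_var(alt_reads, total_reads, n_eff):
    """Allele-frequency estimate from pooled NGS read counts, with an
    approximate sampling variance combining finite read depth and an
    effective pool size n_eff (haploid count; smaller under unequal
    individual contributions to the read pool)."""
    p = alt_reads / total_reads
    var = p * (1.0 - p) * (1.0 / total_reads + 1.0 / n_eff)
    return p, var
```

Unequal pooling shrinks n_eff below the nominal chromosome count, inflating the variance — which is exactly why the effective pool size is worth estimating from the data.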

13.
A mixture of multivariate contaminated normal distributions is developed for model-based clustering. In addition to the parameters of the classical normal mixture, our contaminated mixture has, for each cluster, a parameter controlling the proportion of mild outliers and one specifying the degree of contamination. Crucially, these parameters do not have to be specified a priori, adding a flexibility to our approach. Parsimony is introduced via eigen-decomposition of the component covariance matrices, and sufficient conditions for the identifiability of all the members of the resulting family are provided. An expectation-conditional maximization algorithm is outlined for parameter estimation and various implementation issues are discussed. Using a large-scale simulation study, the behavior of the proposed approach is investigated and comparison with well-established finite mixtures is provided. The performance of this novel family of models is also illustrated on artificial and real data.
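In one dimension the contaminated normal component reads as below; the weight alpha of the "good" observations and the inflation factor eta are exactly the two cluster-level parameters the abstract says are estimated rather than fixed a priori (values here are illustrative):

```python
import math

def contaminated_normal_pdf(x, mu, sigma2, alpha, eta):
    """Density of a one-dimensional contaminated normal: a mixture of
    N(mu, sigma2) with weight alpha and the inflated N(mu, eta * sigma2),
    eta > 1, with weight 1 - alpha (the mild outliers)."""
    def npdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
    return alpha * npdf(x, mu, sigma2) + (1.0 - alpha) * npdf(x, mu, eta * sigma2)
```

The inflated-variance component gives each cluster heavier tails than a plain normal, which is how the model absorbs mild outliers without distorting the cluster means.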

14.
It is assumed that a known, correct, linear regression model (model I) is given. Let the problem be based on a Bayesian estimation of the regression parameter, so that any available a priori information regarding this parameter can be used. Under squared loss, this Bayesian estimation is an optimal strategy for the overall problem, which is divided into an estimation and a design problem. For practical reasons, the effort involved in performing the experiment will be taken into account as costs. In other words, the experimental design must result in the greatest possible accuracy for a given total cost (restriction of the sample size n). The linear cost function k(x) = 1 + c(x - a)/(b - a) is used to construct cost-optimal experimental designs for simple linear regression by means of V = H = [a, b], in a way similar to that used for classical optimality criteria. The complicated structures of these designs and the difficulty of determining them by a direct approach have made it advisable to describe an iterative procedure for the construction of cost-optimal designs.
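The stated cost function makes the price of a single run grow linearly across the design interval, so a design's total cost depends on where its mass sits (weights and constants below are made up):

```python
def unit_cost(x, a, b, c):
    """Linear cost of one observation at design point x in [a, b]:
    k(x) = 1 + c * (x - a) / (b - a)."""
    return 1.0 + c * (x - a) / (b - a)

def design_cost(points, weights, n, a, b, c):
    """Total cost of an approximate design that spreads n runs over the
    design points according to the given weights."""
    return sum(n * w * unit_cost(x, a, b, c) for x, w in zip(points, weights))
```

A design concentrated near the cheap endpoint a costs less than one concentrated near b, which is why cost-optimality pulls mass away from expensive points even when estimation accuracy alone would place it there.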

15.
This article studies evolutionary game dynamics in Wright's infinite island model. I study a general n×n matrix game and derive a basic equation that describes the change in frequency of strategies. A close observation of this equation reveals that three distinct effects are at work: direct benefit to a focal individual, kin-selected indirect benefit to the focal individual via its relatives, and the cost caused by increased kin competition in the focal individual's natal deme. Crucial parameters are the coefficient of relatedness between two individuals and its analogue for three individuals. I provide a number of examples and show when the traditional inclusive fitness measure is recovered and when it is not. The results demonstrate how evolutionary game theory fits into the framework of kin selection.

16.
A broad approach to the design of Phase I clinical trials for the efficient estimation of the maximum tolerated dose is presented. The method is rooted in formal optimal design theory and involves the construction of constrained Bayesian c- and D-optimal designs. The imposed constraint incorporates the optimal design points and their weights and ensures that the probability that an administered dose exceeds the maximum acceptable dose is low. Results relating to these constrained designs for log doses on the real line are described and the associated equivalence theorem is given. The ideas are extended to more practical situations, specifically to those involving discrete doses. In particular, a Bayesian sequential optimal design scheme comprising a pilot study on a small number of patients followed by the allocation of patients to doses one at a time is developed and its properties explored by simulation.
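The constraint that drives these designs — keep the chance of administering a dose above the maximum acceptable dose low — can be written directly in terms of the design points and their weights (numbers below are illustrative):

```python
def overdose_probability(doses, weights, max_acceptable):
    """Probability that an administered dose exceeds the maximum acceptable
    dose under an approximate design given by points and allocation weights."""
    return sum(w for d, w in zip(doses, weights) if d > max_acceptable)
```

A constrained design keeps this sum below a chosen bound while otherwise optimizing the c- or D-criterion over the same points and weights.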

17.
We describe a non-parametric optimal design as a theoretical gold standard for dose-finding studies. Its purpose is analogous to the Cramér-Rao bound for unbiased estimators, i.e. it provides a bound beyond which improvements are not generally possible. The bound applies to the class of non-parametric designs, where the data are not assumed to be generated by any known parametric model. Whenever parametric assumptions really hold, it may be possible to do better than the optimal non-parametric design. The goal is to be able to compare any potential dose-finding scheme with the optimal non-parametric benchmark. This paper makes precise what is meant by optimal in this context and also why the procedure is described as non-parametric.
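The benchmark's key device is the complete latent tolerance profile: a single uniform draw per subject determines that subject's toxicity outcome at every dose simultaneously, so each dose's toxicity rate is estimated from all subjects at once. A sketch under a hypothetical true dose-toxicity curve:

```python
import random

def benchmark_mtd(true_tox, target, n_subjects, seed=1):
    """Non-parametric benchmark for dose finding: each subject's latent
    tolerance U ~ Uniform(0,1) yields a complete toxicity profile (toxic at
    dose d iff U < true_tox[d]); the dose whose estimated toxicity rate is
    closest to the target is selected."""
    rng = random.Random(seed)
    us = [rng.random() for _ in range(n_subjects)]
    est = [sum(u < p for u in us) / n_subjects for p in true_tox]
    return min(range(len(true_tox)), key=lambda d: abs(est[d] - target))
```

Because every subject contributes an outcome at every dose, no real sequential design can systematically beat this oracle — hence its role as the bound the abstract describes.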

18.
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study encloses multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and the parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially, whereby the parameter values are updated in between (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and encloses four parameters. The three OED/PE strategies are considered and the impact of the design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable.
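The CTMI itself (Rosso et al., 1993) has the four parameters Tmin, Topt, Tmax and mu_opt; below is a direct transcription of the published formula, with cardinal values that are illustrative rather than the paper's estimates:

```python
def ctmi_growth_rate(T, t_min, t_opt, t_max, mu_opt):
    """Cardinal Temperature Model with Inflection (Rosso et al., 1993):
    mu(T) = mu_opt * tau(T) for t_min < T < t_max, and 0 outside."""
    if T <= t_min or T >= t_max:
        return 0.0
    num = (T - t_max) * (T - t_min) ** 2
    den = (t_opt - t_min) * ((t_opt - t_min) * (T - t_opt)
                             - (t_opt - t_max) * (t_opt + t_min - 2.0 * T))
    return mu_opt * num / den
```

The formula is built so that mu(Tmin) = mu(Tmax) = 0 and mu(Topt) = mu_opt, which is what makes the four parameters directly interpretable — and what OED/PE exploits when choosing informative temperature profiles.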

19.
Qiu J  Hwang JT 《Biometrics》2007,63(3):767-776
Simultaneous inference for a large number, N, of parameters is a challenge. In some situations, such as microarray experiments, researchers are only interested in making inference for the K parameters corresponding to the K most extreme estimates. Hence it seems important to construct simultaneous confidence intervals for these K parameters. The naïve simultaneous confidence intervals for the K means (applied directly without taking the selection into account) have low coverage probabilities. We take an empirical Bayes approach (or an approach based on the random effects model) to construct simultaneous confidence intervals with good coverage probabilities. For N = 10,000 and K = 100, typical for microarray data, our confidence intervals can be 77% shorter than the naïve K-dimensional simultaneous intervals.
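The empirical-Bayes repair shrinks each selected estimate toward the grand mean and narrows the interval accordingly. The one-parameter sketch below conveys the shrinkage mechanism only; it omits the selection-adjusted width the paper actually develops, and all numbers are illustrative:

```python
def eb_shrunk_interval(x, grand_mean, tau2, sigma2, z=1.96):
    """Empirical-Bayes (random-effects) interval for a single mean:
    shrink the raw estimate x toward the grand mean by B = sigma2 /
    (sigma2 + tau2) and use the posterior sd sqrt((1 - B) * sigma2)."""
    b = sigma2 / (sigma2 + tau2)
    center = b * grand_mean + (1.0 - b) * x
    half = z * ((1.0 - b) * sigma2) ** 0.5
    return center - half, center + half
```

Shrinkage pulls extreme estimates — precisely the ones selected — back toward the bulk, which is why these intervals can be far shorter than the naïve ones while still covering.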

20.
The aim of this research is to develop a model describing oligosaccharide synthesis and the simultaneous hydrolysis of lactose. Model A (engineering approach) and model B (biochemical approach) were used to describe the data obtained in batch experiments with β-galactosidase from Bacillus circulans at various initial lactose concentrations (from 0.19 to 0.59 mol·kg−1). A procedure was developed to fit the model parameters and to select the most suitable model. The procedure can also be used for other kinetically controlled reactions. Each experiment was considered as an independent estimation of the model parameters, and consequently, model parameters were fitted to each experiment separately. Estimation of the parameters per experiment preserved the time dependence of the measurements and yielded independent sets of parameters. The next step was to study by ordinary regression methods whether the parameters were constant under the altered conditions examined. Throughout all experiments, the parameters of model B did not show a trend with the initial lactose concentration when inhibition was included. Therefore model B, a galactosyl-enzyme complex-based model, was chosen to describe the oligosaccharide synthesis, and one parameter set was determined for the various initial lactose concentrations. © 1999 John Wiley & Sons, Inc. Biotechnol Bioeng 64: 558–567, 1999.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号