Similar Articles
20 similar articles found (search time: 15 ms)
1.
Gutjahr G, Brannath W, Bauer P. Biometrics 2011, 67(3):1039-1046
In the presence of nuisance parameters, the conditional error rate principle is difficult to apply because the conditional error function of the preplanned test depends on the nuisance parameters. To use the principle in this setting, we propose to search, among all tests that guarantee overall error control, for the test that maximizes a weighted combination of the conditional error rates over the possible values of the nuisance parameters. We show that the optimization problem defining such a test can be solved efficiently by existing algorithms.

2.
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures that survey effort is targeted at subareas of high interest. In two-stage sampling of rare and clustered populations, higher-density primary sample units are usually of more interest than lower-density primary units. Two-stage sequential sampling has been suggested as a method for allocating second-stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. The adaptive part of the allocation process makes the design more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample.
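A minimal Python sketch of the allocation idea (the stage-1 effort, threshold rule, extra-effort level, and the naive expansion estimator are illustrative assumptions, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(42)

def adaptive_two_stage(counts, n1=5, threshold=1.0, extra=10):
    """Adaptively allocate second-stage effort among primary units.

    counts    : counts[i, j] = individuals in secondary unit j of primary
                unit i (the full population, known only to the simulation)
    n1        : secondary units surveyed per primary unit in stage 1
    threshold : stage-1 mean count above which a primary unit is deemed
                high-abundance and receives extra effort
    extra     : additional secondary units surveyed in stage 2
    """
    n_primary, n_secondary = counts.shape
    estimates = []
    for i in range(n_primary):
        # Stage 1: simple random sample of secondary units.
        stage1 = rng.choice(n_secondary, size=n1, replace=False)
        sampled = set(stage1)
        # Stage 2: direct extra effort only at promising primary units.
        if counts[i, stage1].mean() > threshold:
            remaining = np.setdiff1d(np.arange(n_secondary), stage1)
            sampled.update(rng.choice(remaining,
                                      size=min(extra, remaining.size),
                                      replace=False))
        idx = np.fromiter(sampled, dtype=int)
        # Naive expansion estimate of the primary-unit total; a real
        # adaptive design would apply a design-based correction.
        estimates.append(counts[i, idx].mean() * n_secondary)
    return np.array(estimates)

# Rare, clustered population: most units near-empty, a few dense patches.
pop = rng.poisson(lam=np.where(rng.random((20, 50)) < 0.1, 5.0, 0.05))
print(adaptive_two_stage(pop).round(1))
```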

3.
Two-stage, drop-the-losers designs for adaptive treatment selection have been considered by many authors. The distributions of conditional sufficient statistics and the Rao-Blackwell technique have been used to obtain an unbiased estimate and to construct an exact confidence interval for the parameter of interest. In this paper, we characterize the selection process from a binomial drop-the-losers design using a truncated binomial distribution. We propose a new estimator and show that it is consistent as the sample size grows large in either the first or the second stage. Supported by simulation analyses, we recommend the new estimator over the naive estimator and the Rao-Blackwell-type estimator because of its robustness in the finite-sample setting. We frame the concept as a simple, easily implemented procedure for phase 2 oncology trial designs that can be confirmatory in nature, and we use an example to illustrate its application.
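A small Monte Carlo sketch of why corrected estimators are needed here: with two equal arms, the naive pooled estimator from a binomial drop-the-losers design is biased upward by selection (the arm count, stage sizes, and response rates are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def drop_the_losers(p=(0.30, 0.30), n1=30, n2=60, n_sim=100_000):
    """Selection bias in a binomial drop-the-losers design: the arm with
    more stage-1 responses is carried forward and then re-estimated."""
    k = len(p)
    x1 = rng.binomial(n1, p, size=(n_sim, k))     # stage-1 responses per arm
    winner = x1.argmax(axis=1)                    # ties go to the lowest index
    x1_win = x1[np.arange(n_sim), winner]
    x2 = rng.binomial(n2, np.asarray(p)[winner])  # stage 2, selected arm only
    naive = (x1_win + x2) / (n1 + n2)             # pools the selected stage-1 data
    stage2_only = x2 / n2                         # unbiased but wasteful
    return naive.mean(), stage2_only.mean()

naive, stage2_only = drop_the_losers()
print(f"naive pooled estimate:  {naive:.4f}")        # overshoots the true 0.30
print(f"stage-2-only estimate:  {stage2_only:.4f}")  # close to 0.30
```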

4.
Brannath W, Bauer P. Biometrics 2004, 60(3):715-723
Ethical considerations and the competitive environment of clinical trials usually require that any given trial have sufficient power to detect a treatment advance. If at an interim analysis the available data are used to decide whether the trial is promising enough to be continued, investigators and sponsors often wish to have a high conditional power, that is, the probability of rejecting the null hypothesis given the interim data and the alternative of interest. Under this requirement, a design with interim sample size recalculation, which keeps the overall and conditional power at a prespecified value and preserves the overall type I error rate, is a reasonable alternative to a classical group sequential design, in which the conditional power is often too small. In this article, two-stage designs with control of overall and conditional power are constructed that minimize the expected sample size, either for a simple point alternative or for a random mixture of alternatives given by a prior density for the efficacy parameter. The optimality result applies to trials with and without an interim hypothesis test; in addition, one can account for constraints such as a minimal sample size for the second stage. The optimal designs are illustrated with an example and compared to the frequently considered method of using the conditional type I error level of a group sequential design.
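As a rough illustration of the conditional power computation behind such sample size recalculation, here is a sketch for a one-sided one-sample z-test. It applies the naive fixed-sample critical value; the designs discussed in the paper instead adjust the final test (e.g. via the conditional type I error level of a group sequential design) to preserve the overall level:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def conditional_power(z1, n1, n2, theta, alpha=0.025):
    """Conditional power of a one-sided one-sample z-test.

    z1    : interim z-statistic from the first n1 observations
    theta : standardized effect (delta/sigma) assumed for stage 2
    Uses the fixed-sample critical value z_{1-alpha}; a genuine adaptive
    design would replace it by the conditional error level.
    """
    z_crit = norm.ppf(1 - alpha)
    b = (z_crit * np.sqrt(n1 + n2) - np.sqrt(n1) * z1) / np.sqrt(n2)
    return 1 - norm.cdf(b - theta * np.sqrt(n2))

def stage2_size(z1, n1, theta, target=0.80, alpha=0.025, n_max=10_000):
    """Smallest n2 with conditional power >= target (None if unattainable)."""
    f = lambda n2: conditional_power(z1, n1, n2, theta, alpha) - target
    if f(n_max) < 0:
        return None
    return int(np.ceil(brentq(f, 1e-6, n_max)))

# A promising but inconclusive interim result:
print(stage2_size(z1=1.0, n1=50, theta=0.25))
```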

5.
Using a heuristic separation-of-time-scales argument, we describe the behavior of the conditional ancestral selection graph with very strong balancing selection between a pair of alleles. In the limit as the strength of selection tends to infinity, we find that the ancestral process converges to a neutral structured coalescent, with two subpopulations representing the two alleles and mutation playing the role of migration. This agrees with a previous result of Kaplan et al., obtained using a different approach. We present the results of computer simulations to support our heuristic mathematical results. We also present a more rigorous demonstration that the neutral conditional ancestral process converges to the Kingman coalescent in the limit as the mutation rate tends to infinity.

6.
Various methods of evaluating phenotypic stability have been proposed; however, no single method can adequately describe cultivar performance. The objectives of this study were to integrate a number of methods of evaluating stability and to use this approach for cultivar selection. These objectives were considered in the context of the broad-based oilseed rape cultivar (Brassica napus ssp. oleifera) evaluation system currently used in western Canada. Regression analysis was used to assess cultivar response to environments. Cluster analysis was used to assemble cultivars into groups with similar regression coefficients (b_i) and mean yield. Three parametric stability parameters, the years-within-locations mean square (MS_i Y/L), Shukla's stability variance (σ_i^2), and Francis and Kannenberg's coefficient of variability (CV_i), were compared to determine which method would be most suitable for selecting oilseed rape cultivars from within clustered groups. Yield data from three cultivars and six breeding lines that had been tested for 2 years at 26 locations in the Western Canola Cooperative Test A were used for all calculations. The cluster analysis was successful in identifying commercially acceptable breeding lines. The parameter MS_i Y/L was considered more appropriate than either CV_i or σ_i^2, because it measures only the unpredictable portion of the genotype × environment interaction and is independent of the other cultivars in the test. The use of cluster analysis to group entries with similar b_i values and mean yields, followed by selection for stability within groups, is advocated. [Contribution No. 846 of the Plant Science Department, University of Manitoba]
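A short sketch of how two of the statistics named above, the regression coefficient b_i and the coefficient of variability CV_i, might be computed from a genotype-by-environment yield table (the simulated data and the regression of yield on the environmental mean are illustrative assumptions):

```python
import numpy as np

def stability_parameters(yields):
    """Stability statistics for a genotype x environment yield matrix.

    yields : array of shape (n_genotypes, n_environments)
    Returns per-genotype mean yield, the slope b_i from regressing a
    genotype's yields on the environmental index (environment means),
    and the coefficient of variability CV_i in percent.
    """
    y = np.asarray(yields, dtype=float)
    env_c = y.mean(axis=0) - y.mean()          # centered environmental index
    means = y.mean(axis=1)
    b = (y - means[:, None]) @ env_c / (env_c @ env_c)
    cv = 100 * y.std(axis=1, ddof=1) / means   # Francis-Kannenberg CV_i
    return means, b, cv

rng = np.random.default_rng(0)
# 9 entries x 26 locations, with a shared environmental gradient.
trial = rng.normal(loc=2.5, scale=0.3, size=(9, 26)) + np.linspace(-0.4, 0.4, 26)
print(np.column_stack(stability_parameters(trial)).round(2))
```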

7.
8.
Drug development is traditionally divided into three phases with different aims and objectives. Recently, so-called adaptive seamless designs, which allow the objectives of different development phases to be combined into a single trial, have gained much interest. Adaptive trials combining the treatment selection typical of Phase II with the confirmation of efficacy typical of Phase III are referred to as adaptive seamless Phase II/III designs and are considered in this paper. We compared four methods for adaptive treatment selection: the classical Dunnett test, an adaptive version of the Dunnett test based on the conditional error approach, the combination test approach, and an approach within the classical group-sequential framework. The latter two approaches have only recently been published. In a simulation study we found that no single method dominates the others in terms of power, apart from the adaptive Dunnett test, which by construction dominates the classical Dunnett test. Scenarios under which one approach outperforms the others are also described.
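A minimal sketch of the combination test approach for treatment selection (assumptions: inverse-normal combination with equal weights, and a Bonferroni adjustment of the stage-1 evidence as a conservative stand-in for the Dunnett-based closed test):

```python
import numpy as np
from scipy.stats import norm

def combination_test(z1, z2_selected, n_arms, w1sq=0.5, alpha=0.025):
    """Inverse-normal combination test after selecting the best arm.

    z1          : stage-1 z-statistics, one per treatment arm vs control
    z2_selected : independent stage-2 z-statistic for the selected arm
    w1sq        : squared stage-1 weight (prespecified before the trial)
    """
    w1, w2 = np.sqrt(w1sq), np.sqrt(1 - w1sq)
    p1 = min(1.0, n_arms * norm.sf(max(z1)))   # Bonferroni-adjusted selection
    p2 = norm.sf(z2_selected)
    z_comb = w1 * norm.isf(p1) + w2 * norm.isf(p2)
    return z_comb > norm.isf(alpha), z_comb

# Three arms at the interim analysis; the best one enters stage 2.
reject, z = combination_test(z1=[1.1, 2.0, 0.4], z2_selected=2.3, n_arms=3)
print(reject, round(z, 3))
```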

9.
The evolution of human mtDNA (mitochondrial DNA) has been characterized by the emergence of distinct haplogroups, which are associated with the major global ethnic groups and defined by the presence of specific mtDNA polymorphic variants. A recent analysis of complete mtDNA genome sequences suggested that certain mtDNA haplogroups may have been positively selected as humans populated colder climates, owing to a decreased mitochondrial coupling efficiency that leads to increased heat generation instead of ATP synthesis by oxidative phosphorylation. If true, implying different evolutionary processes in different haplogroups, this could void the usefulness of mtDNA as a genetic tool for dating major events in evolutionary history. In this issue of the Biochemical Journal, Taku Amo and Martin Brand present experimental biochemical data to test this hypothesis. Measurements of the bioenergetic capacity of cybrid cells harbouring specific Arctic- or tropical-climate mtDNA haplogroups on a control nuclear background reveal no significant differences in coupling efficiency between the two groups, indicating that mtDNA remains a viable evolutionary tool for assessing the timing of major events in the history of humans and other species.

10.
This paper proposes a two-stage phase I-II clinical trial design to optimize dose-schedule regimes of an experimental agent within ordered disease subgroups in terms of the toxicity-efficacy trade-off. The design is motivated by settings where prior biological information indicates it is certain that efficacy will improve with ordinal subgroup level. We formulate a flexible Bayesian hierarchical model to account for associations among subgroups and regimes, and to characterize ordered subgroup effects. Sequentially adaptive decision-making is complicated by the problem, arising from the motivating application, that efficacy is scored on day 90 and toxicity is evaluated within 30 days from the start of therapy, while the patient accrual rate is fast relative to these outcome evaluation intervals. To deal with this in a practical manner, we take a likelihood-based approach that treats unobserved toxicity and efficacy outcomes as missing values, and use elicited utilities that quantify the efficacy-toxicity trade-off as a decision criterion. Adaptive randomization is used to assign patients to regimes while accounting for subgroups, with randomization probabilities depending on the posterior predictive distributions of utilities. A simulation study is presented to evaluate the design's performance under a variety of scenarios, and to assess its sensitivity to the amount of missing data, the prior, and model misspecification.

11.
Feng W, Wahed AS. Biometrika 2008, 95(3):695-707
In two-stage adaptive treatment strategies, patients receive an induction treatment followed by a maintenance therapy, given that the patient responded to the induction treatment they received. To test for a difference in the effects of different induction and maintenance treatment combinations, a modified supremum weighted log-rank test is proposed. The test is applied to a dataset from a two-stage randomized trial and the results are compared to those obtained using a standard weighted log-rank test. A sample-size formula is proposed based on the limiting distribution of the supremum weighted log-rank statistic. The sample-size formula reduces to Eng and Kosorok's sample-size formula for a two-sample supremum log-rank test when there is no second randomization. Monte Carlo studies show that the proposed test provides sample sizes that are close to those obtained by the standard weighted log-rank test under a proportional hazards alternative. However, the proposed test is more powerful than the standard weighted log-rank test under non-proportional hazards alternatives.

12.

Background: Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing parameter values that minimize the CV error estimate. We evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data.
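A minimal scikit-learn sketch (illustrative, not the article's exact protocol) contrasting the optimistic practice in question, reporting the CV error of the tuned classifier itself, with a nested cross-validation estimate:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

# Few samples, many features: the regime where optimization bias is worst.
X, y = make_classification(n_samples=100, n_features=500, n_informative=5,
                           random_state=0)

inner = KFold(5, shuffle=True, random_state=1)
outer = KFold(5, shuffle=True, random_state=2)
search = GridSearchCV(SVC(kernel="linear"),
                      {"C": [0.01, 0.1, 1, 10, 100]}, cv=inner)

# Optimistic: the CV score used to pick C is reported as the error estimate.
search.fit(X, y)
print(f"optimized-CV accuracy: {search.best_score_:.3f}")

# Honest: tuning is repeated inside each outer fold, assessment is held out.
nested = cross_val_score(search, X, y, cv=outer)
print(f"nested-CV accuracy:    {nested.mean():.3f}")
```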

13.
Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions; unfortunately, they require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and the demand for computational resources and time. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both mesh-generation time and the time and resources needed for computation. The effects of solid and fluid domain truncation were explored and shown to have minimal effect on the accuracy of the stress fields predicted with the two-stage approach.

14.
Lin W, Wu FX, Shi J, Ding J, Zhang W. Proteomics 2011, 11(19):3773-3778
In our recent work on denoising, a linear combination of five features was used to adjust the peak intensities in tandem mass spectra. Although the method showed promise, the coefficients (weights) of the linear combination were fixed and determined empirically. In this paper, we propose an adaptive approach for estimating these weights. The proposed approach (i) calculates the score for each peak in a data set with the previously determined empirical weights, (ii) selects a training data set based on the scores of the peaks, (iii) applies linear discriminant analysis to the training data set and takes the solution of the linear discriminant analysis as the new weights, (iv) calculates the scores again with the new weights, and (v) repeats (ii)-(iv) until the weights show no significant change. After obtaining the final weights, the proposed approach follows the previous method. The approach was applied to two tandem mass spectrometry data sets, ISB (low resolution) and TOV-Q (high resolution), to evaluate its performance. The results show that about 66% of peaks (likely noise peaks) can be removed, and that the number of peptides identified by MASCOT increases by 14% and 23.4% for the ISB and TOV-Q data sets, respectively, compared to the previous work.
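A sketch of the reweighting loop in steps (i)-(v); the provisional labeling of the highest- and lowest-scoring peaks to build the LDA training set is an assumption for illustration, not the published selection rule:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def estimate_weights(features, w0, n_iter=20, tol=1e-4, keep=0.2):
    """Iteratively re-estimate feature weights for peak scoring.

    features : (n_peaks, n_features) matrix of per-peak features
    w0       : initial, empirically determined weight vector
    """
    w = np.asarray(w0, dtype=float)
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        scores = features @ w                        # (i) score every peak
        lo, hi = np.quantile(scores, [keep, 1 - keep])
        train = (scores <= lo) | (scores >= hi)      # (ii) confident peaks only
        labels = (scores[train] >= hi).astype(int)   # 1 = likely signal
        lda = LinearDiscriminantAnalysis().fit(features[train], labels)
        w_new = lda.coef_.ravel()                    # (iii) LDA direction
        w_new /= np.linalg.norm(w_new)
        if w_new @ w < 0:                            # fix LDA sign ambiguity
            w_new = -w_new
        if np.linalg.norm(w_new - w) < tol:          # (v) no significant change
            return w_new
        w = w_new                                    # (iv) rescore next pass
    return w

rng = np.random.default_rng(3)
peaks = rng.normal(size=(5000, 5))                   # stand-in feature matrix
print(estimate_weights(peaks, w0=np.ones(5)).round(3))
```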

15.
The widespread deployment of advanced computer technology in business and industry has created demand for high standards of quality of service (QoS). Many Internet applications, e.g. online trading, e-commerce, and real-time databases, execute in an unpredictable general-purpose environment but require performance guarantees; failure to meet performance specifications may result in lost business or liability violations. As systems become more distributed and complex, QoS design has become a challenge. The on-line identification and auto-tuning capabilities of adaptive control systems have made adaptive control theory an attractive approach to QoS design. However, adaptive control systems have an inherent constraint: a conflict between asymptotically good control and asymptotically good on-line identification. This paper first identifies and analyzes the limitations of adaptive control for network QoS through extensive simulation studies. Second, as an approach to mitigating these limitations, we propose an adaptive dual control framework. By incorporating the existing uncertainty of on-line prediction into the control strategy and accelerating the parameter estimation process, the adaptive dual control framework optimizes the trade-off between the control goal and the uncertainty, and demonstrates robust and cautious behavior. The experimental study shows that the adaptive dual control framework mitigates the limitations of the conventional adaptive control framework; under medium uncertainty, it reduces the deviation from the desired hit-rate ratio from 40% to 13%.

16.
17.
Intensified and continuous processes require fast and robust methods and technologies to monitor product titer for faster analytical turnaround, process monitoring, and process control. Current titer measurements are mostly offline, chromatography-based methods, and it may take hours or even days to get results back from the analytical labs; offline methods therefore cannot meet the requirement of real-time titer measurement for continuous production and capture processes. FTIR spectroscopy with chemometrics-based multivariate modeling is a promising tool for real-time titer monitoring in clarified bulk (CB) harvests and perfusate lines. However, empirical models are known to be vulnerable to unseen variability; specifically, an FTIR chemometric titer model trained on a given biological molecule and process conditions often fails to provide accurate predictions of titer for another molecule under different process conditions. In this study, we developed an adaptive modeling strategy: the model was initially built using a calibration set of available perfusate and CB samples and then updated by augmenting the calibration set with spiking samples of new molecules, making the model robust against perfusate or CB harvest of a new molecule. This strategy substantially improved model performance and significantly reduced the modeling effort for new molecules.
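A minimal sketch of the augment-and-refit idea using a PLS regression model (the data shapes, component count, and all variable names are hypothetical stand-ins):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
# Stand-ins: rows are FTIR spectra, y is titer; real data would come from
# perfusate and clarified-bulk samples.
X_cal, y_cal = rng.normal(size=(120, 300)), rng.uniform(0.5, 5.0, 120)
X_spike, y_spike = rng.normal(size=(15, 300)), rng.uniform(0.5, 5.0, 15)
X_new = rng.normal(size=(10, 300))

# Initial model on the existing calibration set.
model = PLSRegression(n_components=8).fit(X_cal, y_cal)

# Adaptive update: augment the calibration set with spiking samples of the
# new molecule and refit, so its variability enters the model.
X_aug = np.vstack([X_cal, X_spike])
y_aug = np.concatenate([y_cal, y_spike])
model = PLSRegression(n_components=8).fit(X_aug, y_aug)

print(model.predict(X_new).ravel().round(2))
```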

18.
Some numerical results are presented for generalized ridge regression in which the additive constants are based on the data. The adaptive estimator so obtained is compared with the least-squares estimator on the basis of mean square error (MSE). It is shown that the MSE of each component of the vector of ridge estimators may be as low as 47.1%, or as high as 125.2%, of the variance of the corresponding component of the least-squares vector.
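One common way to base the additive constants on the data is the Hoerl-Kennard rule k_i = s^2 / alpha_i^2 in the canonical (SVD) form of the model; the sketch below assumes that rule for illustration, and the paper's exact adaptive choice may differ:

```python
import numpy as np

def generalized_ridge(X, y):
    """Generalized ridge regression with data-based additive constants.

    Works in the canonical form X = U diag(d) Vt; each principal
    component gets its own constant k_i = s^2 / alpha_i^2, where
    alpha_i is the least-squares canonical coordinate.
    """
    n, p = X.shape
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    uty = U.T @ y
    alpha_ls = uty / d                     # least-squares canonical coords
    resid = y - U @ uty
    s2 = resid @ resid / (n - p)           # residual variance estimate
    k = s2 / alpha_ls**2                   # component-wise ridge constants
    alpha_gr = d * uty / (d**2 + k)        # shrunken canonical coords
    return Vt.T @ alpha_gr                 # back to the original coordinates

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 4)) @ np.diag([1.0, 1.0, 0.2, 0.05])  # ill-conditioned
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.5, size=50)
print(generalized_ridge(X, y).round(3))
print(np.linalg.lstsq(X, y, rcond=None)[0].round(3))           # OLS comparison
```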

19.
Compartmentalization of unlinked, competing templates is widely accepted as a necessary step towards the evolution of complex organisms. However, preservation of information by templates confined to isolated vesicles of finite size faces much harder obstacles than preservation by free templates: random drift allied with mutation pressure wipes out any template that does not replicate perfectly, no matter how small the error probability might be. In addition, drift alone hinders the coexistence of distinct templates in the same compartment. Here, we investigate the conditions for group selection to prevail over drift and mutation and hence to guarantee the maintenance and coexistence of distinct templates in a vesicle. Group selection is implemented through a vesicle survival probability that depends on the template composition. By considering the limiting case of an infinite number of vesicles, each carrying a finite number of templates, we derive a set of recursion equations for the frequencies of vesicles with different template compositions. Numerical iteration of these recursions allows exact characterization of the steady state of the vesicle population (a quasispecies of vesicles), revealing the values of the mutation and group-selection intensities for which template coexistence is possible. Within the main assumption of the model (a fixed, finite or infinite, number of vesicles), we find no fundamental impediment to the coexistence of an arbitrary number of template types with the same replication rate inside a vesicle, except of course for the vesicle capacity. Group selection in the form of vesicle selection is a must for compartmentalized primordial genetic systems, even in the absence of intra-genomic competition between different templates.
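A toy iteration in the spirit of the recursions described here; the survival weight, symmetric mutation scheme, and binomial resampling of offspring compositions are simplifying assumptions, not the paper's exact equations:

```python
import numpy as np
from scipy.stats import binom

def iterate_vesicles(N=10, mu=0.01, sigma=2.0, n_gen=2000):
    """Quasispecies of vesicles: f[k] = frequency of vesicles carrying
    k type-1 templates (and N-k type-2) in an infinite vesicle population.

    Each generation, a vesicle with k type-1 templates survives with a
    weight favoring balanced compositions (group selection of strength
    sigma), and an offspring vesicle's composition is binomially
    resampled from the parent's post-mutation type-1 fraction.
    """
    k = np.arange(N + 1)
    w = np.exp(-4 * sigma * (k / N - 0.5) ** 2)    # group-selection weight
    p1 = k / N * (1 - mu) + (1 - k / N) * mu       # mutation flips types
    # T[j, k'] = P(offspring has k' type-1 | parent has j type-1)
    T = binom.pmf(k[None, :], N, p1[:, None])
    f = np.full(N + 1, 1.0 / (N + 1))
    for _ in range(n_gen):
        f = (f * w) @ T
        f /= f.sum()
    return f

freqs = iterate_vesicles()
print(freqs.round(3))  # mass away from k=0 and k=N indicates coexistence
```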

20.
This investigation was undertaken to assess the sensitivity and specificity of the genotyping-error detection function of the computer program SIMWALK2. We examined chromosome 22, which had 7 microsatellite markers, from a single simulated replicate (330 pedigrees with a pattern of missing genotype data similar to the Framingham families). We created genotype errors at five overall frequencies (0.0, 0.025, 0.050, 0.075, and 0.100) and applied SIMWALK2 to each of these five data sets, assuming in turn that the total error rate (specified in the program) was at each of these same five levels. In this data set, up to an assumed error rate of 10%, only 50% of the Mendelian-consistent mistypings were found under any level of true errors. Since as many as 70% of the errors detected were false positives, blanking suspect genotypes (at any error probability) will reduce statistical power through the concomitant blanking of correctly typed alleles. This work supports the conclusion that allowing for genotyping errors within likelihood calculations during statistical analysis may be preferable to choosing an arbitrary cut-off.
