Similar Articles (20 results)
1.
T A Louis, Biometrics, 1977, 33(4):627-634
The problem of comparing two medical treatments with respect to survival is considered. Treatment outcome is assumed to follow an exponential distribution. The ratio of expected survivals associated with the two treatments is the clinical parameter of interest. A nuisance parameter is present, but it is removed by an invariance reduction, and a sequential probability ratio test is applied to the invariant likelihood ratio. A class of data-dependent treatment assignment rules is identified over which the probability of correct treatment selection at the termination of the trial is approximately constant. A cost function, the weighted sum of the total number of patients in the trial and the number assigned to the inferior treatment, is introduced, and a treatment allocation rule conjectured to minimize the expected cost is constructed. Both analytic and simulation results show that it is an improvement over rules previously proposed. The methodology contained herein can be used to construct near-optimal rules in other testing contexts.
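The abstract's invariant SPRT reduces away a nuisance parameter before applying Wald's boundaries. As a simpler sketch of the SPRT machinery it builds on (not the paper's invariant test), here is a sequential test of one hypothetical exponential rate against another; the rates and error levels are illustrative assumptions:

```python
import math
import random

def sprt_exponential(rate0, rate1, alpha=0.05, beta=0.05,
                     true_rate=1.0, seed=0, max_n=10_000):
    """Wald SPRT of H0: rate = rate0 vs H1: rate = rate1 on
    sequentially observed exponential data (illustrative only)."""
    rng = random.Random(seed)
    lower = math.log(beta / (1 - alpha))   # continue while llr stays above this
    upper = math.log((1 - beta) / alpha)   # ... and below this
    llr, n = 0.0, 0
    while lower < llr < upper and n < max_n:
        x = rng.expovariate(true_rate)
        # per-observation log-likelihood ratio for exponential densities
        llr += math.log(rate1 / rate0) - (rate1 - rate0) * x
        n += 1
    return ("H1" if llr >= upper else "H0"), n

decision, n_used = sprt_exponential(rate0=1.0, rate1=2.0, true_rate=2.0, seed=3)
```

The stopping sample size is random; the Wald bounds approximately guarantee the stated error probabilities, which is what makes the sequential test cheaper on average than a fixed-sample test.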

2.
A generalized goal using subset selection is discussed for the location parameter case. The goal is to select a non-empty subset from a set of k (k ≥ 2) treatments that contains at least one ε-best treatment with confidence level P*. For a set of treatments, an ε-best treatment is defined as a treatment whose location parameter lies at a distance less than or equal to ε (ε ≥ 0) from the best treatment, where best means the largest value of the location parameter. The efficiency of subset selection of an ε-best treatment relative to subset selection of the best treatment is investigated and is computed for some values of k and the confidence level, for the Normal case as well as for the Logistic case.
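A minimal sketch of a subset-selection rule with an ε slack, assuming a known common variance and equal sample sizes. The constant h below is a Bonferroni-style stand-in for the exact selection constant (which solves a multivariate-normal probability equation), so this is an illustration of the mechanism, not the paper's procedure:

```python
from statistics import NormalDist

def subset_select(means, sigma, n, eps=0.0, p_star=0.95):
    """Gupta-style subset rule for an eps-best treatment: keep arm i when
    its sample mean is within d of the largest sample mean.  A larger eps
    shrinks d and hence the selected subset."""
    k = len(means)
    # Bonferroni-type normal quantile as a stand-in for the exact constant
    h = NormalDist().inv_cdf(1 - (1 - p_star) / (k - 1))
    d = max(0.0, h * sigma * (2.0 / n) ** 0.5 - eps)
    best = max(means)
    return [i for i, m in enumerate(means) if m >= best - d]

wide = subset_select([0.0, 0.8, 1.0], sigma=1.0, n=10, eps=0.0)    # eps = 0: best-treatment goal
narrow = subset_select([0.0, 0.8, 1.0], sigma=1.0, n=10, eps=0.8)  # eps-best goal, smaller subset
```

This makes the efficiency comparison in the abstract concrete: relaxing "best" to "ε-best" buys a smaller expected subset at the same confidence level.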

3.
This paper introduces a class of data-dependent allocation rules for use in sequential clinical trials designed to choose the better of two competing treatments, or to decide that they are of equal efficacy. These readily understood and easily implemented rules are shown to substantially reduce the number of tests with the poorer treatment for a broad category of experimental situations. Allocation rules of this type are applied both to trials with an instantaneous binomial response and to delayed response trials where interest centers on exponentially distributed survival time. In each case, a comparison of this design with alternative designs given in the literature shows that the proposed design is superior with respect to ease of application and is comparable to the alternatives regarding inferior treatment number and average sample number. In addition, the proposed rules mitigate many of the difficulties generally associated with adaptive assignment rules, such as selection and systematic bias.
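The flavor of a data-dependent allocation rule for the instantaneous binomial response case can be sketched as follows. This greedy rule with a small exploration probability is a hypothetical illustration of the general idea, not the class of rules defined in the paper:

```python
import random

def adaptive_trial(p_a, p_b, n=200, explore=0.1, seed=0):
    """Assign each new patient to the arm with the higher observed success
    rate, with a small exploration probability so the rule cannot lock
    onto one arm after an unlucky start (illustrative only)."""
    rng = random.Random(seed)
    succ = {"A": 0, "B": 0}
    tries = {"A": 0, "B": 0}
    for _ in range(n):
        if rng.random() < explore or tries["A"] == 0 or tries["B"] == 0:
            arm = rng.choice(["A", "B"])            # explore / initialize
        else:
            rate_a = succ["A"] / tries["A"]
            rate_b = succ["B"] / tries["B"]
            if rate_a > rate_b:
                arm = "A"
            elif rate_b > rate_a:
                arm = "B"
            else:
                arm = rng.choice(["A", "B"])
        tries[arm] += 1
        succ[arm] += rng.random() < (p_a if arm == "A" else p_b)
    return tries

counts = adaptive_trial(p_a=0.7, p_b=0.3, seed=1)
```

Running this with unequal success probabilities shows the defining property the abstract describes: far fewer patients end up on the inferior arm than under 50/50 randomization.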

4.
BACKGROUND: For gene expression or gene association studies with a large number of hypotheses, the number of measurements per marker in a conventional single-stage design is often low due to limited resources. Two-stage designs have been proposed in which promising hypotheses are identified in a first stage and further investigated in the second stage with larger sample sizes. For two types of two-stage designs proposed in the literature, designs where a fixed number of top-ranked hypotheses are selected and designs where the selection in the interim analysis is based on an FDR threshold, we derive multiple testing procedures controlling the False Discovery Rate (FDR) and demonstrate FDR control by simulation. In contrast to earlier approaches that use only the second-stage data in the hypothesis tests (pilot approach), the proposed testing procedures are based on the pooled data from both stages (integrated approach). RESULTS: For both selection rules, the multiple testing procedures control the FDR in the considered simulation scenarios. This holds for the case of independent observations across hypotheses as well as for certain correlation structures. Additionally, we show that in scenarios with small effect sizes the testing procedures based on the pooled data from both stages can give a considerable improvement in power compared to tests based on the second-stage data only. CONCLUSION: The proposed hypothesis tests provide a tool for FDR control in the considered two-stage designs. Comparing the integrated approaches for both selection rules with the corresponding pilot approaches showed an advantage of the integrated approach in many simulation scenarios.
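Two standard ingredients behind the integrated approach can be sketched as follows, assuming an inverse-normal combination of stage-wise z-statistics and the classical Benjamini-Hochberg step-up; the paper's procedures adapted to the two selection rules are not reproduced here:

```python
import math

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: return indices of rejected hypotheses."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, cutoff = len(pvals), 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            cutoff = rank          # largest rank passing its threshold
    return sorted(order[:cutoff])

def pooled_z(z1, n1, z2, n2):
    """Sample-size-weighted inverse-normal combination of the stage-1 and
    stage-2 z-statistics: the 'use data from both stages' idea."""
    w1 = math.sqrt(n1 / (n1 + n2))
    w2 = math.sqrt(n2 / (n1 + n2))
    return w1 * z1 + w2 * z2
```

A pilot approach would feed only the stage-2 z into the multiple test; the integrated approach feeds the pooled statistic, which is why it gains power when per-stage effects are small.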

5.
We consider the problem of comparing a set of p1 test treatments with a control treatment. This is to be accomplished in two stages as follows: In the first stage, N1 observations are allocated among the p1 treatments and the control, and the subset selection procedure of Gupta and Sobel (1958) is employed to eliminate “inferior” treatments. In the second stage, N2 observations are allocated among the (randomly) selected subset of p2 (≤ p1) treatments and the control, and joint confidence interval estimates of the treatment versus control differences are calculated using Dunnett's (1955) procedure. Here both N1 and N2 are assumed to be fixed in advance, and the so-called square root rule is used to allocate observations among the treatments and the control in each stage. Dunnett's procedure is applied using two different types of estimates of the treatment versus control mean differences: The unpooled estimates are based on only the data obtained in the second stage, while the pooled estimates are based on the data obtained in both stages. The procedure based on unpooled estimates uses the critical point from a p2-variate Student t-distribution, while that based on pooled estimates uses the critical point from a p1-variate Student t-distribution. The two procedures and a composite of the two are compared via Monte Carlo simulation. It is shown that the expected value of p2 determines which procedure yields shorter confidence intervals on the average. Extensions of the procedures to the case of unequal sample sizes are given. Applicability of the proposed two-stage procedures to a drug screening problem is discussed.

6.
Dynamic treatment regimes (DTRs) consist of a sequence of decision rules, one per stage of intervention, that aim to recommend effective treatments for individual patients according to patient information history. DTRs can be estimated from models which include interactions between treatment and a (typically small) number of covariates which are often chosen a priori. However, with increasingly large and complex data being collected, it can be difficult to know which prognostic factors might be relevant in the treatment rule. Therefore, a more data-driven approach to select these covariates might improve the estimated decision rules and simplify models to make them easier to interpret. We propose a variable selection method for DTR estimation using penalized dynamic weighted least squares. Our method has the strong heredity property, that is, an interaction term can be included in the model only if the corresponding main terms have also been selected. We show theoretically that our method has both the double robustness property and the oracle property, and the newly proposed method compares favorably with other variable selection approaches in numerical studies. We further illustrate the proposed method on data from the Sequenced Treatment Alternatives to Relieve Depression study.

7.
Using a distribution-free approach, a modification of the usual procedure for selecting the better of two treatments is presented. Here the possibility of no selection when the treatments appear to be ‘equivalent’ is allowed. The sample size and the constant needed to implement the proposed procedure are determined by controlling the probabilities of a correct selection and a wrong selection when the two treatments are not equivalent.

8.
MOTIVATION: Protein expression profiling for differences indicative of early cancer holds promise for improving diagnostics. Due to their high dimensionality, statistical analysis of proteomic data from mass spectrometers is challenging in many aspects such as dimension reduction, feature subset selection as well as construction of classification rules. Search of an optimal feature subset, commonly known as the feature subset selection (FSS) problem, is an important step towards disease classification/diagnostics with biomarkers. METHODS: We develop a parsimonious threshold-independent feature selection (PTIFS) method based on the concept of area under the curve (AUC) of the receiver operating characteristic (ROC). To reduce computational complexity to a manageable level, we use a sigmoid approximation to the empirical AUC as the criterion function. Starting from an anchor feature, the PTIFS method selects a feature subset through an iterative updating algorithm. Highly correlated features that have similar discriminating power are precluded from being selected simultaneously. The classification rule is then determined from the resulting feature subset. RESULTS: The performance of the proposed approach is investigated by extensive simulation studies, and by applying the method to two mass spectrometry data sets of prostate cancer and of liver cancer. We compare the new approach with the threshold gradient descent regularization (TGDR) method. The results show that our method can achieve comparable performance to that of the TGDR method in terms of disease classification, but with fewer features selected. AVAILABILITY: Supplementary Material and the PTIFS implementations are available at http://staff.ustc.edu.cn/~ynyang/PTIFS. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
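The key device, replacing the empirical AUC's 0/1 indicator with a sigmoid so the criterion becomes differentiable, can be sketched as follows. The scale parameter beta and the scores are hypothetical choices, and the iterative feature-updating algorithm itself is not reproduced:

```python
import math

def empirical_auc(cases, controls):
    """Empirical AUC: fraction of case/control pairs ranked correctly
    (ties counted as half)."""
    pairs = [(x > y) + 0.5 * (x == y) for x in cases for y in controls]
    return sum(pairs) / len(pairs)

def sigmoid_auc(cases, controls, beta=5.0):
    """Smooth surrogate: a sigmoid of the score difference replaces the
    indicator, so the criterion is differentiable in the scores."""
    sig = lambda t: 1.0 / (1.0 + math.exp(-beta * t))
    pairs = [sig(x - y) for x in cases for y in controls]
    return sum(pairs) / len(pairs)
```

As beta grows, the sigmoid sharpens toward the indicator and the surrogate approaches the empirical AUC; a moderate beta keeps the criterion smooth enough to optimize.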

9.
Many group-sequential test procedures have been proposed to meet the ethical need for interim analyses. All of these papers, however, focus their discussion on the situation where there is only one standard control and one experimental treatment. In this paper, we consider a trial with one standard control but with more than one experimental treatment. We have developed a group-sequential test procedure to accommodate any finite number of experimental treatments. To facilitate the practical application of the proposed test procedure, on the basis of Monte Carlo simulation we have derived the critical values for α-levels of 0.01, 0.05 and 0.10, for the number of experimental treatments ranging from 2 to 4 and the number of group sequential analyses ranging from 1 to 10. Compared with a single non-sequential analysis that has a reasonable power (say, 0.80), we have demonstrated that the application of the proposed test procedure may substantially reduce the required sample size without seriously sacrificing the original power.
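A sketch of how such critical values can be obtained by simulation, assuming the usual independent-increment structure of group-sequential z-statistics and a constant boundary across looks; the arm count, look count and simulation size are arbitrary here, and the paper's tabulated values are not reproduced:

```python
import random

def mc_critical_value(n_arms=3, n_looks=4, alpha=0.05, n_sim=4000, seed=7):
    """Monte Carlo critical value for the maximum standardized
    treatment-vs-control statistic over arms and interim looks under the
    global null (illustrative simulation, not the paper's tables)."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_sim):
        m = float("-inf")
        cum = [0.0] * n_arms           # cumulative sums -> independent increments
        for look in range(1, n_looks + 1):
            for a in range(n_arms):
                cum[a] += rng.gauss(0.0, 1.0)
                z = cum[a] / look ** 0.5   # standardized statistic at this look
                m = max(m, z)
        maxima.append(m)
    maxima.sort()
    return maxima[int((1 - alpha) * n_sim)]   # upper alpha quantile of the max
```

The returned boundary exceeds the single-test 1.645 because it must control the error rate simultaneously over every arm and every interim look.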

10.
Relevant statistical modeling and analysis of dental data can improve diagnostic and treatment procedures. The purpose of this study is to demonstrate the use of various data mining algorithms to characterize patients with dentofacial deformities. A total of 72 patients with skeletal malocclusions who had completed orthodontic and orthognathic surgical treatments were examined. Each patient was characterized by 22 measurements related to dentofacial deformities. Clustering analysis and visualization grouped the patients into three different patterns of dentofacial deformities. A feature selection approach based on a false discovery rate was used to identify a subset of the 22 measurements important in categorizing these three clusters. Finally, classification was performed to evaluate the quality of the measurements selected by the feature selection approach. The results showed that feature selection improved classification accuracy while simultaneously determining which measurements were relevant.

11.
C S Davis, L J Wei, Biometrics, 1988, 44(4):1005-1018
In comparing the effectiveness of two treatments, suppose that nondecreasing repeated measurements of the same characteristic are scheduled to be taken over a common set of time points for each study subject. A class of univariate one-sided global asymptotically distribution-free tests is proposed to test the equality of the two treatments. The test procedures allow different patterns of missing observations in the two groups to be compared, although the missing data mechanism is required to be independent of the observations in each treatment group. Test-based point and interval estimators of the global treatment difference are given. Multiple inference procedures are also provided to examine the time trend of treatment differences over the entire study. The proposed methods are illustrated by an example from a bladder cancer study.

12.
A Bayesian design is proposed for randomized phase II clinical trials that screen multiple experimental treatments compared to an active control based on ordinal categorical toxicity and response. The underlying model and design account for patient heterogeneity characterized by ordered prognostic subgroups. All decision criteria are subgroup specific, including interim rules for dropping unsafe or ineffective treatments, and criteria for selecting optimal treatments at the end of the trial. The design requires an elicited utility function of the two outcomes that varies with the subgroups. Final treatment selections are based on posterior mean utilities. The methodology is illustrated by a trial of targeted agents for metastatic renal cancer, which motivated the design methodology. In the context of this application, the design is evaluated by computer simulation, including comparison to three designs that conduct separate trials within subgroups, or conduct one trial while ignoring subgroups, or base treatment selection on estimated response rates while ignoring toxicity.

13.
The FDA process analytical technology (PAT) initiative will materialize in a significant increase in the number of installations of spectroscopic instrumentation. However, to attain the greatest benefit from the data generated, there is a need for calibration procedures that extract the maximum information content. For example, in fermentation processes, the interpretation of the resulting spectra is challenging as a consequence of the large number of wavelengths recorded, the underlying correlation structure that is evident between the wavelengths, and the impact of the measurement environment. Approaches to the development of calibration models have been based on the application of partial least squares (PLS) either to the full spectral signature or to a subset of wavelengths. This paper presents a new approach to calibration modeling based on a wavelength selection procedure, spectral window selection (SWS), in which windows of wavelengths are automatically selected and subsequently used as the basis of the calibration model. Because the windows selected are not unique when the algorithm is executed repeatedly, multiple models are constructed and then combined using stacking, thereby increasing the robustness of the final calibration model. The methodology is applied to data generated during the monitoring of broth concentrations in an industrial fermentation process from on-line near-infrared (NIR) and mid-infrared (MIR) spectrometers. It is shown that the proposed calibration modeling procedure outperforms traditional calibration procedures, as well as enabling the identification of the critical regions of the spectra with regard to the fermentation process.

14.
The evolution of “informatics” technologies has the potential to generate massive databases, but the extent to which personalized medicine may be effectuated depends on the extent to which these rich databases may be utilized to advance understanding of the disease molecular profiles and ultimately integrated for treatment selection, necessitating robust methodology for dimension reduction. Yet, statistical methods proposed to address challenges arising with the high‐dimensionality of omics‐type data predominately rely on linear models and emphasize associations deriving from prognostic biomarkers. Existing methods are often limited for discovering predictive biomarkers that interact with treatment and fail to elucidate the predictive power of their resultant selection rules. In this article, we present a Bayesian predictive method for personalized treatment selection that is devised to integrate both the treatment predictive and disease prognostic characteristics of a particular patient's disease. The method appropriately characterizes the structural constraints inherent to prognostic and predictive biomarkers, and hence properly utilizes these complementary sources of information for treatment selection. The methodology is illustrated through a case study of lower grade glioma. Theoretical considerations are explored to demonstrate the manner in which treatment selection is impacted by prognostic features. Additionally, simulations based on an actual leukemia study are provided to ascertain the method's performance with respect to selection rules derived from competing methods.

15.
Rare variants have increasingly been cited as major contributors in the disease etiology of several complex disorders. Recently, several approaches have been proposed for analyzing the association of rare variants with disease. These approaches include collapsing rare variants, summing rare variant test statistics within a particular locus to improve power, and selecting a subset of rare variants for association testing, e.g., the step-up approach. We found that (a) if the variants being pooled are in linkage disequilibrium, the standard step-up method of selecting the best subset of variants results in loss of power compared to a model that pools all rare variants and (b) if the variants are in linkage equilibrium, performing a subset selection using step-based selection methods results in a gain of power of association compared to a model that pools all rare variants. Therefore, we propose an approach to selecting the best subset of variants to include in the model that is based on the linkage disequilibrium pattern among the rare variants. The proposed linkage disequilibrium–based variant selection model is flexible and borrows strength from the model that pools all rare variants when the rare variants are in linkage disequilibrium and from step-based selection methods when the variants are in linkage equilibrium. We performed simulations under three different realistic scenarios based on: (1) the HapMap3 dataset for the DRD2 gene and the CHRNA3/A5/B4 gene cluster, (2) the block structure of linkage disequilibrium, and (3) linkage equilibrium. We proposed a permutation-based approach to control the type 1 error rate. The power comparisons after controlling the type 1 error show that the proposed linkage disequilibrium–based subset selection approach is an attractive alternative method for subset selection of rare variants.
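The two building blocks the proposed model interpolates between, pooling all rare variants and step-up subset selection, can be sketched as follows, assuming 0/1 genotype vectors and per-variant z-scores as input; the LD-based switching rule itself is not reproduced:

```python
def burden_score(genotypes):
    """Collapsing ('pool all') model: per-subject indicator of carrying
    any rare allele at the locus."""
    return [1 if any(g) else 0 for g in genotypes]

def step_up_subset(zscores):
    """Step-up selection: add variants in decreasing |z| order as long as
    the combined statistic sum(z)/sqrt(k) keeps improving."""
    order = sorted(range(len(zscores)), key=lambda i: -abs(zscores[i]))
    best_stat, best_k = float("-inf"), 0
    for k in range(1, len(order) + 1):
        stat = sum(zscores[i] for i in order[:k]) / k ** 0.5
        if stat > best_stat:
            best_stat, best_k = stat, k
    return sorted(order[:best_k])
```

The abstract's finding is that which of these wins depends on the LD pattern, which motivates a selection rule that chooses between them based on the observed correlation among variants.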

16.
Dryden IL, Walker G, Biometrics, 1999, 55(3):820-825
In many disciplines, it is of great importance to match objects. Procrustes analysis is a popular method for comparing labeled point configurations based on a least squares criterion. We consider alternative procedures that are highly resistant to outlier points, and we describe an application in electrophoretic gel matching. We consider procedures based on S estimators, least median of squares, and least quartile difference estimators. Practical implementation issues are discussed, including random subset selection and intelligent subset selection (where subsets with small size or near collinear subsets are ignored). The relative performances of the resistant and Procrustes methods are examined in a simulation study.

17.
Hellmich M, Biometrics, 2001, 57(3):892-898
In order to benefit from the substantial overhead expenses of a large group sequential clinical trial, the simultaneous investigation of several competing treatments becomes more popular. If at some interim analysis any treatment arm reveals itself to be inferior to any other treatment under investigation, this inferior arm may be or may even need to be dropped for ethical and/or economic reasons. Recently proposed methods for monitoring and analysis of group sequential clinical trials with multiple treatment arms are compared and discussed. The main focus of the article is on the application and extension of (adaptive) closed testing procedures in the group sequential setting that strongly control the familywise error rate. A numerical example is given for illustration.

18.
The enzyme glucoamylase [1,4-α-D-glucan glucohydrolase, EC 3.2.1.3] is very important for the food industry. It is used for producing glucose, ethanol and beer, as well as in technological processes that require the decomposition of starch. Eight mutants of the species Aspergillus niger are evaluated and tested with respect to their production of glucoamylase and proved to be suitable. The task is to find the mutant showing the highest enzyme activity with a given precision. Conventionally, this kind of multiple decision problem is handled by the analysis of variance (Model I), which tests the homogeneity of the population means, but in this case the results do not supply the desired information. Provided that the enzyme activities of the mutants are different, selection procedures can be used to choose the mutant with the “best” or at least a “good” level of activity. In this paper, a short methodical summary of the two classes of selection procedures is given, i.e. the indifference zone (and d-correct) procedures and the subset procedures. The planning of experiments is shown by the example of the selection of a mutant with high enzyme activity. Depending on assumptions about the variances, different selection rules are applied. Starting with the subset procedure of Gupta, the number of mutants is reduced to seven. The subsequent application of the d-correct procedures of Bechhofer, Dunnett and Sobel allows us to calculate the necessary sample size of n = 49. Then the mutant whose sample has the largest mean is selected as a “good” one, with a given precision of d = 4 [u/l] and a probability of correct selection of (1 − β) = 0.9.
Footnote: This application is the result of a cooperation between the Dept. of Food of the Technical University, Berlin, and the Dept. of Biotechnology of the Higher Institute of Food and Flavour Industry, Plovdiv, sponsored by the DAAD and the TU Berlin.

19.
The problem of comparing k (≥ 2) Bernoulli success rates with a control is considered. A one-stage decision procedure is proposed for either (1) choosing the best among several experimental treatments and the control treatment when the best is significantly superior, or (2) selecting a random-size subset that contains the best experimental treatment if it is better than the control, when the difference between the best and the remaining treatments is not significant. We integrate two traditional formulations, namely the indifference zone (IZ) approach and the subset selection (SS) approach, by separating the parameter space into two disjoint sets, the preference zone (PZ) and the indifference zone (IZ). In the PZ we insist on selecting the best experimental treatment for a correct selection (CS1), but in the IZ we define any selected subset to be correct (CS2) if it contains the best experimental treatment and that treatment is also better than the control. We propose a procedure R to guarantee lower bounds P1* for P(CS1 | PZ) and P2* for P(CS2 | IZ) simultaneously. A brief table of the common sample size and the procedure parameters is presented to illustrate the procedure R.

20.
We consider hypothesis testing in a clinical trial with an interim treatment selection. Recently, unconditional and conditional procedures for selecting one treatment as the winner have been proposed when the mean responses are approximately normal. In this paper, we generalize both procedures to multi-winner cases. The distributions of the test statistics are obtained and step-down approaches are proposed. We prove that both unconditional and conditional procedures strongly control the family-wise error rate. We give a brief discussion on power comparisons.
