Similar articles
20 similar articles found (search time: 15 ms)
1.
In survival clinical trials the response, i.e. (time to) death, is delayed. As a result, at any given time there are patients who have not yet responded. Given a stopping rule based on the number of deaths, the distribution of the total number of patients that will enter the trial is examined. In addition, a simple approximation to the expected number of patients is presented.
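The expected-enrollment question is easy to explore by simulation. A minimal sketch, assuming Poisson accrual and exponentially distributed survival (neither distribution is specified in the abstract, and the rates below are illustrative): enrollment continues until the d-th death, and the number of patients entered by that time is recorded.

```python
import random

def enrolled_at_stop(d, accrual_rate, mean_survival, rng):
    """One simulated trial: patients enter by a Poisson(accrual_rate)
    process and die an Exp(mean_survival) time after entry.  Enrollment
    stops at the time of the d-th death; the number of patients entered
    by then is returned."""
    t = 0.0
    arrivals = 0
    deaths = []                       # death times of enrolled patients
    while True:
        next_arrival = t + rng.expovariate(accrual_rate)
        deaths.sort()
        if len(deaths) >= d and deaths[d - 1] <= next_arrival:
            return arrivals           # d-th death occurs before this arrival
        t = next_arrival
        arrivals += 1
        deaths.append(t + rng.expovariate(1.0 / mean_survival))

rng = random.Random(1)
sims = [enrolled_at_stop(30, accrual_rate=2.0, mean_survival=12.0, rng=rng)
        for _ in range(2000)]
mean_n = sum(sims) / len(sims)
```

The mean of `sims` approximates the expected number of patients for the chosen accrual and survival rates; at least d patients must always enter before d deaths can occur.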

2.
3.
Drug development grows ever more complex, while the arrival of the big-data era has greatly accelerated the progress of clinical trials. This issue's "Advances in Clinical Trials" discusses data-oriented highlights of innovative trials in dermatology, explores big-data methods adopted in practice, and analyzes emerging methodology for risk-based monitoring. It also summarizes clinical study reports on new therapies for psoriasis and atopic dermatitis, showing the impact of these diseases and the candidate drugs that have already succeeded in Phase II and Phase III studies.

4.
5.
For designs with longitudinal observations of ordered categorical data, a nonparametric model is considered where treatment effects and interactions are defined by means of the marginal distributions. These treatment effects are estimated consistently by ranking methods. The hypotheses in this nonparametric setup are formulated by means of the distribution functions. The asymptotic distribution of the estimators of the nonparametric effects is given under the hypotheses. For small samples, a rather accurate approximation is suggested. A clinical trial with ordered categorical data is used to motivate the ideas and to explain the procedures, which are extensions of the Wilcoxon-Mann-Whitney test to factorial designs with longitudinal observations. The application of the procedures requires only some trivial regularity assumptions.
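The rank-based effect measure underlying such Wilcoxon-Mann-Whitney-type procedures is the relative treatment effect p = P(X < Y) + 0.5·P(X = Y), estimated from pooled midranks. A minimal two-sample sketch (the paper's factorial, longitudinal extension is considerably more involved):

```python
def relative_effect(x, y):
    """Estimate p = P(X < Y) + 0.5 * P(X = Y) from pooled midranks:
    p_hat = (mean midrank of y - (n_y + 1) / 2) / n_x.  Midranks handle
    ties, which is what makes the estimator usable for ordered categories."""
    pooled = sorted(list(x) + list(y))

    def midrank(v):
        # average of the 1-based positions the tied value occupies
        lo = pooled.index(v) + 1
        return lo + (pooled.count(v) - 1) / 2

    mean_rank_y = sum(midrank(v) for v in y) / len(y)
    return (mean_rank_y - (len(y) + 1) / 2) / len(x)
```

Values near 0.5 indicate no tendency for one sample to take larger values; 1.0 means every y exceeds every x.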

6.
将临床研究数据用于临床日常规范及健康相关决策的制定对于改善全球医疗保健至关重要。汤森路透Cortellis 临床试验情报对临床试验数据的应用价值及各国临床实验室质量管理规范的实施情况进行了介绍,提供描绘临床图景关键元素和当前趋势的专家分析,从而指导临床开发决策。  相似文献   

7.
Point estimation in group sequential and adaptive trials is an important issue in analysing a clinical trial. Most literature in this area is concerned only with estimation after completion of a trial. Since adaptive designs allow reassessment of the sample size during the trial, reliable point estimation of the true effect while the trial is still ongoing is also needed. We present a bias-adjusted estimator that allows a more exact sample size determination based on the conditional power principle than the naive sample mean does.
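The conditional power principle mentioned above can be illustrated with the standard B-value (Brownian-motion) formulation for a one-sided z-test; the drift `theta` is the z-statistic expected at full information under the assumed effect. This is a generic sketch, not the paper's bias-adjusted estimator:

```python
import math
from statistics import NormalDist

def conditional_power(z_interim, info_frac, theta, alpha=0.025):
    """P(final z > z_{1-alpha} | interim z), treating B(t) = z * sqrt(t)
    as Brownian motion with drift theta over information time t in [0, 1)."""
    nd = NormalDist()
    b = z_interim * math.sqrt(info_frac)    # observed B-value
    z_crit = nd.inv_cdf(1 - alpha)
    rem = 1 - info_frac                     # remaining information
    return 1 - nd.cdf((z_crit - b - theta * rem) / math.sqrt(rem))
```

A disappointing interim estimate lowers the conditional power, which is what triggers the sample size reassessment the abstract discusses; plugging in a biased interim estimate for `theta` is exactly the problem the proposed estimator addresses.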

8.
This paper introduces a class of data-dependent allocation rules for use in sequential clinical trials designed to choose the better of two competing treatments, or to decide that they are of equal efficacy. These readily understood and easily implemented rules are shown to substantially reduce the number of tests with the poorer treatment for a broad category of experimental situations. Allocation rules of this type are applied both to trials with an instantaneous binomial response and to delayed response trials where interest centers on exponentially distributed survival time. In each case, a comparison of this design with alternative designs given in the literature shows that the proposed design is superior with respect to ease of application and is comparable to the alternatives regarding inferior treatment number and average sample number. In addition, the proposed rules mitigate many of the difficulties generally associated with adaptive assignment rules, such as selection and systematic bias.
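The best-known rule of this general type is the randomized play-the-winner urn. The sketch below is that classic rule, not the class proposed in the paper, but it shows how data-dependent allocation steers patients away from the poorer arm in the binomial-response setting:

```python
import random

def play_the_winner(p_a, p_b, n_patients, rng, alpha=1, beta=1):
    """Randomized play-the-winner urn: start with `alpha` balls per arm;
    after each response add `beta` balls for the arm that 'won' (the
    treated arm on a success, the opposite arm on a failure)."""
    urn = {"A": alpha, "B": alpha}
    counts = {"A": 0, "B": 0}
    for _ in range(n_patients):
        # draw the next assignment in proportion to the urn composition
        arm = "A" if rng.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
        counts[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        urn[arm if success else ("B" if arm == "A" else "A")] += beta
    return counts

rng = random.Random(7)
counts = play_the_winner(p_a=0.8, p_b=0.2, n_patients=500, rng=rng)
```

With success probabilities 0.8 vs 0.2, the limiting allocation to arm A is q_B/(q_A + q_B) = 0.8, so most of the 500 patients end up on the better arm.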

9.
In an ongoing clinical trial and while the randomized treatment codes remain blinded, it is often desirable to estimate the treatment difference and standard deviation for a normally-distributed response variable. This is particularly useful for estimating the sample size for future trials or for adjusting the sample size for the ongoing trial. We describe the limitations of an available EM algorithm-based procedure to reestimate the standard deviation without unblinding the codes for the two treatments. We introduce a new procedure and propose a clinical trial design for estimating both the treatment difference and standard deviation without unblinding. The performance of the proposed procedure is evaluated in a simulation study.
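One way to see why blinded estimation is hard: with 1:1 randomization the pooled responses follow a two-component normal mixture with equal weights, and the EM algorithm for that mixture (the kind of procedure whose limitations the paper discusses) can be sketched as follows. The effect size and sample size in the demo are illustrative, not from the paper:

```python
import math
import random

def _pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def blinded_em(data, n_iter=200):
    """EM for a 0.5/0.5 two-component normal mixture with a common
    variance, applied to pooled (treatment-blinded) responses."""
    n = len(data)
    mean = sum(data) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    mu1, mu2, var = mean - sd / 2, mean + sd / 2, sd * sd
    for _ in range(n_iter):
        # E-step: posterior weight of component 1 (equal mixing weights)
        w = [_pdf(x, mu1, var) / (_pdf(x, mu1, var) + _pdf(x, mu2, var))
             for x in data]
        # M-step: update the two means and the common variance
        s1 = sum(w)
        mu1 = sum(wi * x for wi, x in zip(w, data)) / s1
        mu2 = sum((1 - wi) * x for wi, x in zip(w, data)) / (n - s1)
        var = sum(wi * (x - mu1) ** 2 + (1 - wi) * (x - mu2) ** 2
                  for wi, x in zip(w, data)) / n
    return mu1, mu2, math.sqrt(var)

rng = random.Random(3)
blinded = ([rng.gauss(0.0, 1.0) for _ in range(300)]
           + [rng.gauss(2.0, 1.0) for _ in range(300)])
mu1, mu2, sd_hat = blinded_em(blinded)
```

With a small true difference the two components are barely separated and the EM estimates become unstable, which is one of the limitations the abstract refers to.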

10.
Endocrine Practice. 2012;18(2):227-237
Objective: To explore, by post hoc analyses of pooled data, the efficacy and safety of exenatide twice daily (BID) in patients stratified by baseline glucose-lowering therapies. Methods: Patients with type 2 diabetes from long-term randomized controlled trials who were treated with exenatide BID were classified into concomitant medication groups on the basis of background treatment (diet and exercise only, metformin only, sulfonylurea only, thiazolidinedione only, metformin + sulfonylurea, metformin + thiazolidinedione, or insulin with or without other oral antihyperglycemic medications). Seventeen studies were included in the analyses (N = 2,096). Results: In these analyses of patients treated with exenatide BID for 12 to 30 weeks, there were significant decreases from baseline in hemoglobin A1c (A1C) and fasting glucose levels in all groups and significant decreases from baseline in body weight in all groups except the thiazolidinedione-only group. The decrease in A1C appeared to be greater in the insulin group than in the other groups, likely because the insulin dose was titrated whereas doses of concomitant antihyperglycemic medications were generally not titrated. Overall, changes in blood pressure and lipids were small. Across all groups, the most common adverse effects were gastrointestinal events. Hypoglycemia was more common in the sulfonylurea-only, metformin + sulfonylurea, and insulin groups than in the other concomitant medication groups. Conclusion: The use of exenatide BID across a wide range of background therapies was associated with reductions in A1C, fasting glucose, and body weight. Gastrointestinal adverse events were common. (Endocr Pract. 2012;18:227-237)

11.
High-frequency measurements are increasingly available and used to model ecosystem processes. This growing capability provides the opportunity to resolve key drivers of ecosystem processes at a variety of scales. We use a unique series of high-frequency measures of potential predictors to analyze daily variation in rates of gross primary production (GPP), respiration (R), and net ecosystem production (NEP = GPP − R) for two north temperate lakes. Wind speed, temperature, light, precipitation, mixed layer depth, water column stability, chlorophyll a, chromophoric dissolved organic matter (CDOM), and zooplankton biomass were measured at daily or higher-frequency intervals over two summer seasons. We hypothesized that light, chlorophyll a, and zooplankton biomass would be strongly related to variability in GPP. We also hypothesized that chlorophyll a, CDOM, and temperature would be most strongly related to variability in R, whereas NEP would be related to variation in chlorophyll a and CDOM. Consistent with our hypotheses, chlorophyll a was among the most important drivers of GPP, R, and NEP in these systems. However, multiple regression models did not necessarily include the other variables we hypothesized as most important. Despite the large number of potential predictor variables, substantial variance remained unexplained and models were inconsistent between years and between lakes. Drivers of GPP, R, and NEP were difficult to resolve at daily time scales where strong seasonal dynamics were absent. More complex models with greater integration of physical processes are needed to better identify the underlying drivers of short-term variability of ecosystem processes in lakes and other systems.

12.
We consider sample size determination for ordered categorical data when the alternative assumption is the proportional odds model. In this paper the sample size formula proposed by Whitehead (Statistics in Medicine, 12, 2257–2271, 1993) is compared with the methods based on exact and asymptotic linear rank tests with Wilcoxon and trend scores. We show that Whitehead's formula, which is based on a normal approximation, works well when the sample size is moderate to large but recommend the exact method with Wilcoxon scores for small sample sizes. The consequences of misspecification in models are also investigated.
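Whitehead's (1993) normal-approximation formula is simple enough to state in full: with anticipated control-arm category probabilities, a proportional-odds odds ratio OR, and mean category probabilities pbar_k across the two arms, the total sample size for two balanced groups is N = 12(z_{1-alpha/2} + z_power)^2 / [(log OR)^2 (1 − Σ pbar_k^3)]. A sketch (the example probabilities are illustrative):

```python
import math
from statistics import NormalDist

def whitehead_ordinal_n(p_control, odds_ratio, alpha=0.05, power=0.9):
    """Total sample size (two balanced arms) for an ordered categorical
    outcome under proportional odds, via Whitehead's (1993) formula.
    `p_control` lists the anticipated control-arm category probabilities;
    odds(Y <= k) in the treatment arm is `odds_ratio` times the control odds."""
    nd = NormalDist()
    # treatment-arm category probabilities implied by proportional odds
    cum_c, c = [], 0.0
    for p in p_control:
        c += p
        cum_c.append(min(c, 1.0))
    cum_t = [odds_ratio * q / (1 + (odds_ratio - 1) * q) for q in cum_c]
    p_t = [b - a for a, b in zip([0.0] + cum_t[:-1], cum_t)]
    pbar = [(pc + pt) / 2 for pc, pt in zip(p_control, p_t)]
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    theta = math.log(odds_ratio)
    return math.ceil(12 * z ** 2
                     / (theta ** 2 * (1 - sum(p ** 3 for p in pbar))))

n_total = whitehead_ordinal_n([0.2, 0.5, 0.2, 0.1], odds_ratio=2.0)
```

With only two categories the formula reduces to a binary-outcome calculation, which is a convenient sanity check.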

13.
Z Jiang, L Wang, C Li, J Xia, H Jia. PLoS One. 2012;7(9):e44013
Group sequential design has been widely applied in clinical trials in the past few decades. Sample size estimation is a vital concern of sponsors and investigators, and in survival group sequential trials especially it is a thorny question because of the ambiguous distributional form, censored data, and differing definitions of information time. A practical and easy-to-use simulation-based method for multi-stage two-arm survival group sequential designs is proposed in this article, and its SAS program is available. Besides the exponential distribution, which is usually assumed for survival data, the Weibull distribution is considered here. Incorporating the probability of discontinuation in the simulation leads to a more accurate estimate. The assessment indexes calculated in the simulation help determine the number and timing of the interim analyses. The use of the method in survival group sequential trials is illustrated, and the effects of varying the shape parameter on the sample size under the Weibull distribution are explored through an example. Based on the simulation results, a method is proposed to estimate the shape parameter of the Weibull distribution from the median survival time of the test drug and the hazard ratio, which are prespecified by the investigators and other participants. 10+ simulations are recommended to achieve a robust estimate of the sample size. Furthermore, the method remains applicable in adaptive design if the sample size determination strategy is adopted at the design stage or minor modifications are made to the program.
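Under the Weibull assumption the design quantities the paper works with are easy to connect: the median fixes the scale for a given shape, and with a common shape k a constant hazard ratio corresponds to a scale ratio of HR^(1/k). A small sketch of those relations (the shape, median, and hazard ratio below are illustrative, and this is not the paper's estimation method):

```python
import math
import random

def weibull_scale_from_median(median, shape):
    """Weibull scale lambda such that median = lambda * log(2)**(1/shape)."""
    return median / math.log(2) ** (1 / shape)

def control_scale(treat_scale, shape, hazard_ratio):
    """With a common shape k the Weibull hazard ratio is constant:
    h_t / h_c = (scale_c / scale_t)**k, so scale_c = scale_t * HR**(1/k)."""
    return treat_scale * hazard_ratio ** (1 / shape)

scale_t = weibull_scale_from_median(12.0, 1.5)   # treatment median of 12 months
scale_c = control_scale(scale_t, 1.5, 0.7)       # HR 0.7 favouring treatment
# check the median relation by simulation (random.weibullvariate(scale, shape))
rng = random.Random(5)
sample_median = sorted(rng.weibullvariate(scale_t, 1.5)
                       for _ in range(20001))[10000]
```

Arm-specific event times generated this way feed directly into the kind of power simulation the abstract describes.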

14.
Summary: In a typical randomized clinical trial, a continuous variable of interest (e.g., bone density) is measured at baseline and at fixed postbaseline time points. The resulting longitudinal data, often incomplete due to dropouts and other reasons, are commonly analyzed using parametric likelihood-based methods that assume multivariate normality of the response vector. If the normality assumption is deemed untenable, then semiparametric methods such as (weighted) generalized estimating equations are considered. We propose an alternate approach in which the missing data problem is tackled using multiple imputation, and each imputed dataset is analyzed using robust regression (M-estimation; Huber, 1973, Annals of Statistics 1, 799–821) to protect against potential non-normality/outliers in the original or imputed dataset. The robust analysis results from each imputed dataset are combined for overall estimation and inference using either the simple Rubin (1987, Multiple Imputation for Nonresponse in Surveys, New York: Wiley) method, or the more complex but potentially more accurate Robins and Wang (2000, Biometrika 87, 113–124) method. We use simulations to show that our proposed approach performs at least as well as the standard methods under normality, but is notably better under both elliptically symmetric and asymmetric non-normal distributions. A clinical trial example is used for illustration.
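The "simple Rubin (1987) method" for combining the per-imputation robust estimates is short enough to show in full; the estimates and variances below are illustrative numbers, not trial data:

```python
def rubin_combine(estimates, variances):
    """Rubin's (1987) rules: pool m per-imputation point estimates and
    their squared standard errors into one estimate and a total variance
    T = W + (1 + 1/m) * B (within- plus inflated between-imputation)."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m                                   # within
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)    # between
    return qbar, w + (1 + 1 / m) * b

est, tvar = rubin_combine([1.0, 1.2, 0.8], [0.04, 0.05, 0.045])
```

The square root of the total variance is the pooled standard error; the Robins and Wang (2000) method replaces these rules with a more involved sandwich-type calculation.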

15.
16.
This paper reviews theoretical bases, experimental investigations, and the practice of using allozyme variability of marine invertebrate populations for environmental monitoring. The causes of unsuccessful attempts and the difficulties that researchers face are discussed. A number of recommendations are proposed.

17.
Several methods exist for testing bioequivalence in bioavailability trials. In this article we propose a method for testing the equivalence of two drugs in clinical trials when the response variable is assumed to have a normal distribution. The null and alternative hypotheses are formulated in a nonconventional manner. Computational aspects of the power and sample size are discussed.
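The abstract formulates the hypotheses in a nonconventional way; the conventional counterpart is the two one-sided tests (TOST) procedure, sketched here with a z-statistic (i.e., variance treated as known) for brevity:

```python
from statistics import NormalDist

def tost_z(diff, se, margin, ):
    """Two one-sided z-tests for equivalence: reject H0 'not equivalent'
    when both H01: diff <= -margin and H02: diff >= margin are rejected.
    Returns the larger of the two one-sided p-values."""
    nd = NormalDist()
    p_lower = 1 - nd.cdf((diff + margin) / se)   # tests diff <= -margin
    p_upper = nd.cdf((diff - margin) / se)       # tests diff >= margin
    return max(p_lower, p_upper)
```

Equivalence is concluded at level alpha when the returned p-value falls below alpha; with an estimated variance one would use t-distributions instead of the normal.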

18.
19.
Incomplete data are a serious problem in the multivariate analysis of clinical trials. Usually a complete-case analysis is performed: all incomplete observation vectors are excluded from the analysis. Provided that observations are missing randomly, an easy-to-handle available-case analysis is introduced, allowing the analysis of all data without insertion or deletion of observations. This method is applied to parametric and nonparametric test procedures of the O'Brien type, which are more powerful than the conventional Hotelling's T2 for detecting alternatives where the (treatment) effect has the same direction for all observed variables. In addition, the applicability of these so-called directional tests, especially in the case of small samples, and their pros and cons are discussed.
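An available-case O'Brien-type rank-sum score can be sketched as follows: each variable is midranked over the subjects observed for it, and each subject is scored by the mean of its available midranks (a simplified illustration, not the paper's exact procedure). Missing values are encoded as `None`:

```python
def obrien_rank_scores(group1, group2):
    """O'Brien-type rank-sum scores with an available-case flavour: for
    each variable, midrank the observed values across both groups
    (None is skipped); each subject's score is the mean of the midranks
    of its observed variables."""
    data = group1 + group2
    n_vars = len(data[0])
    scores = [[] for _ in data]
    for j in range(n_vars):
        obs = sorted(x[j] for x in data if x[j] is not None)
        for i, x in enumerate(data):
            if x[j] is None:
                continue
            lo = obs.index(x[j]) + 1             # 1-based midrank
            scores[i].append(lo + (obs.count(x[j]) - 1) / 2)
    means = [sum(s) / len(s) for s in scores]
    return means[:len(group1)], means[len(group1):]

s1, s2 = obrien_rank_scores([(1, 1), (2, 2)], [(3, 3), (4, None)])
```

The two lists of subject scores would then be compared with, e.g., a two-sample t-test, which is the directional test the abstract describes.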

20.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号