Found 20 similar documents; search took 15 ms
1.
Alessandro Baldi Antognini, Marco Novelli, Maroussa Zagoraiou, Alessandro Vagheggini 《Biometrical journal. Biometrische Zeitschrift》2020,62(7):1730-1746
The aim of the present paper is to provide optimal allocations for comparative clinical trials with survival outcomes. The suggested targets are derived by adopting a compound optimization strategy based on a subjective weighting of the relative importance of inferential demands and ethical concerns. The ensuing compound optimal targets are continuous functions of the treatment effects, so we provide the conditions under which they can be approached by standard response-adaptive randomization procedures, also guaranteeing the applicability of classical asymptotic inference. The operating characteristics of the suggested methodology are verified both theoretically and by simulation, including robustness to model misspecification. Compared with other available proposals, our strategy always assigns more patients to the better treatment without compromising inference, taking estimation efficiency and power into account as well. We illustrate our procedure by redesigning two real oncological trials.
2.
3.
Linear Bayes estimators of the potency curve in bioassay (Cited by: 1; self-citations: 0; other citations: 1)
4.
Yovaninna Alarcón-Soto, Klaus Langohr, Csaba Fehér, Felipe García, Guadalupe Gómez 《Biometrical journal. Biometrische Zeitschrift》2019,61(2):299-318
We present a method to fit a mixed effects Cox model with interval-censored data. Our proposal is based on a multiple imputation approach that uses the truncated Weibull distribution to replace the interval-censored data by imputed survival times and then uses established mixed effects Cox methods for right-censored data. Interval-censored data were encountered in a database corresponding to a recompilation of retrospective data from eight analytical treatment interruption (ATI) studies in 158 human immunodeficiency virus (HIV) positive combination antiretroviral treatment (cART) suppressed individuals. The main variable of interest is the time to viral rebound, which is defined as the increase of serum viral load (VL) to detectable levels in a patient with previously undetectable VL, as a consequence of the interruption of cART. Another aspect of interest of the analysis is to consider the fact that the data come from different studies based on different grounds and that we have several assessments on the same patient. In order to handle this extra variability, we frame the problem into a mixed effects Cox model that considers a random intercept per subject as well as correlated random intercept and slope for pre-cART VL per study. Our procedure has been implemented in R using two packages, truncdist and coxme, and can be applied to any data set that presents both interval-censored survival times and a grouped data structure that could be treated as a random effect in a regression model. The properties of the parameter estimators obtained with our proposed method are addressed through a simulation study.
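The core of the imputation step above is a draw from a Weibull distribution truncated to each censoring interval. The authors work in R with the truncdist and coxme packages; the following is a minimal Python sketch of only the truncated-Weibull draw, assuming the Weibull shape and scale have already been estimated (the function name and the parameter values are illustrative, not from the paper).

```python
import numpy as np

def impute_interval_censored(left, right, shape, scale, rng):
    """Draw an imputed event time from a Weibull(shape, scale)
    truncated to the censoring interval [left, right], via inverse-CDF
    sampling with F(t) = 1 - exp(-(t/scale)**shape)."""
    F = lambda t: 1.0 - np.exp(-(t / scale) ** shape)
    u = rng.uniform(F(left), F(right))   # uniform on [F(left), F(right)]
    # invert F: t = scale * (-log(1 - u))**(1/shape)
    return scale * (-np.log(1.0 - u)) ** (1.0 / shape)

rng = np.random.default_rng(0)
draws = [impute_interval_censored(2.0, 5.0, 1.5, 4.0, rng) for _ in range(5)]
```

Each draw necessarily falls inside the observed interval, so the imputed data set can be handed to any right-censored mixed effects Cox routine.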
5.
In many fields and applications, count data can be subject to delayed reporting. This is where the total count, such as the number of disease cases contracted in a given week, may not be immediately available, instead arriving in parts over time. For short-term decision making, the statistical challenge lies in predicting the total count based on any observed partial counts, along with a robust quantification of uncertainty. We discuss previous approaches to modeling delayed reporting and present a multivariate hierarchical framework where the count generating process and delay mechanism are modeled simultaneously in a flexible way. This framework can also be easily adapted to allow for the presence of underreporting in the final observed count. To illustrate our approach and to compare it with existing frameworks, we present a case study of reported dengue fever cases in Rio de Janeiro. Based on both within-sample and out-of-sample posterior predictive model checking and arguments of interpretability, adaptability, and computational efficiency, we discuss the relative merits of different approaches.
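The prediction problem above can be illustrated with the simplest possible correction. The paper's hierarchical model estimates the delay mechanism jointly with the counts and yields full predictive uncertainty; the sketch below, with a purely hypothetical delay profile, is only the back-of-envelope multiplicative version of the same idea.

```python
import numpy as np

# Hypothetical delay profile (not from the paper): expected cumulative
# fraction of a week's cases that has been reported after 0, 1, 2,
# and 3-or-more weeks of delay.
cum_reported = np.array([0.4, 0.7, 0.9, 1.0])

def nowcast_total(partial_count, weeks_elapsed):
    """Scale the partial count by the inverse of the expected reported
    fraction at the current delay to predict the eventual total."""
    d = min(weeks_elapsed, len(cum_reported) - 1)
    return partial_count / cum_reported[d]

estimate = nowcast_total(28, 0)   # 28 cases seen so far; expect about 70
```

This point estimate carries no uncertainty and ignores overdispersion and underreporting, which is precisely what motivates the multivariate hierarchical treatment in the abstract.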
6.
7.
8.
Ha M. Dang, Todd Alonzo, Meredith Franklin, Wendy J. Mack, Mark D. Krailo, Sandrah P. Eckel 《Biometrical journal. Biometrische Zeitschrift》2020,62(8):1960-1972
For a Phase III randomized trial that compares survival outcomes between an experimental treatment versus a standard therapy, interim monitoring analysis is used to potentially terminate the study early based on efficacy. To preserve the nominal Type I error rate, alpha spending methods and information fractions are used to compute appropriate rejection boundaries in studies with planned interim analyses. For a one-sided trial design applied to a scenario in which the experimental therapy is superior to the standard therapy, interim monitoring should provide the opportunity to stop the trial prior to full follow-up and conclude that the experimental therapy is superior. This paper proposes a method called total control only (TCO) for estimating the information fraction based on the number of events within the standard treatment regimen. Based on theoretical derivations and simulation studies, for a maximum duration superiority design, the TCO method is not influenced by departure from the designed hazard ratio, is sensitive to detecting treatment differences, and preserves the Type I error rate compared to information fraction estimation methods that are based on total observed events. The TCO method is simple to apply, provides unbiased estimates of the information fraction, and does not rely on statistical assumptions that are impossible to verify at the design stage. For these reasons, the TCO method is a good approach when designing a maximum duration superiority trial with planned interim monitoring analyses.
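The TCO idea reduces to a simple ratio: events observed so far in the standard arm over the standard-arm events expected at full follow-up. The sketch below pairs that fraction with a Lan-DeMets O'Brien-Fleming-type spending function, which is a standard choice for turning an information fraction into spent alpha; the spending function itself is conventional machinery, not part of the paper's proposal, and the event counts are made up.

```python
from statistics import NormalDist
from math import sqrt

N = NormalDist()

def information_fraction_tco(control_obs, control_planned):
    """TCO information fraction: standard-arm events observed so far
    divided by standard-arm events expected at full follow-up. Unlike
    a total-events fraction, it is unaffected by how far the
    experimental arm departs from the design hazard ratio."""
    return control_obs / control_planned

def obf_spent_alpha(t, alpha=0.025):
    """Lan-DeMets O'Brien-Fleming-type alpha-spending function
    evaluated at information fraction t (one-sided level alpha)."""
    return 2.0 * (1.0 - N.cdf(N.inv_cdf(1.0 - alpha / 2.0) / sqrt(t)))

t = information_fraction_tco(60, 150)   # 0.4 of the information observed
spent = obf_spent_alpha(t)              # very little alpha spent this early
```

At t = 1 the spending function returns the full one-sided alpha, so the final analysis is conducted at (almost) the nominal level, which is the property that makes O'Brien-Fleming-type boundaries attractive with early looks.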
9.
Pauly GB, Hillis DM, Cannatella DC 《Evolution; international journal of organic evolution》2004,58(11):2517-2535
Previous hypotheses of phylogenetic relationships among Nearctic toads (Bufonidae) and their congeners suggest contradictory biogeographic histories. These hypotheses argue that the Nearctic Bufo are: (1) a polyphyletic assemblage resulting from multiple colonizations from Africa; (2) a paraphyletic assemblage resulting from a single colonization event from South America with subsequent dispersal into Eurasia; or (3) a monophyletic group derived from the Neotropics. We obtained approximately 2.5 kb of mitochondrial DNA sequence data for the 12S, 16S, and intervening valine tRNA gene from 82 individuals representing 56 species and used parametric bootstrapping to test hypotheses of the biogeographic history of the Nearctic Bufo. We find that the Nearctic species of Bufo are monophyletic and nested within a large clade of New World Bufo to the exclusion of Eurasian and African taxa. This suggests that Nearctic Bufo result from a single colonization from the Neotropics. More generally, we demonstrate the utility of parametric bootstrapping for testing alternative biogeographic hypotheses. Through parametric bootstrapping, we refute several previously published biogeographic hypotheses regarding Bufo. These previous studies may have been influenced by homoplasy in osteological characters. Given the Neotropical origin for Nearctic Bufo, we examine current distributional patterns to assess whether the Nearctic-Neotropical boundary is a broad transition zone or a narrow boundary. We also survey fossil and paleogeographic evidence to examine potential Tertiary and Cretaceous dispersal routes, including the Paleocene Isthmian Link, the Antillean and Aves Ridges, and the current Central American Land Bridge, that may have allowed colonization of the Nearctic.
10.
Xinyuan Chen, Michael O. Harhay, Fan Li 《Biometrical journal. Biometrische Zeitschrift》2023,65(6):2200002
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of proposed methods through simulations and reanalyze two multicenter randomized trials.
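The RMST estimand referred to above is just the area under the survival curve up to the restriction time. The paper's contribution is the clustered regression and its variance estimators, which are not reproduced here; the sketch below only illustrates the estimand itself, computed from a Kaplan-Meier curve on toy data.

```python
import numpy as np

def km_rmst(times, events, tau):
    """Restricted mean survival time up to tau: the area under the
    Kaplan-Meier survival curve on [0, tau]. events is 1 for an
    observed event, 0 for right censoring."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, rmst, prev_t, at_risk = 1.0, 0.0, 0.0, len(times)
    for t, d in zip(times, events):
        if t > tau:
            break
        rmst += surv * (t - prev_t)      # rectangle up to this time
        if d:
            surv *= 1.0 - 1.0 / at_risk  # KM step at an event
        at_risk -= 1
        prev_t = t
    rmst += surv * (tau - prev_t)        # final rectangle up to tau
    return rmst
```

Because the RMST is an area rather than a ratio of hazards, it keeps a direct interpretation (expected survival time over the window) even when hazards cross, which is the motivation given in the abstract.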
11.
Olivier Bouaziz 《Biometrical journal. Biometrische Zeitschrift》2023,65(4):2200071
In the context of right-censored and interval-censored data, we develop asymptotic formulas to compute pseudo-observations for the survival function and the restricted mean survival time (RMST). These formulas are based on the original estimators and do not involve computation of the jackknife estimators. For right-censored data, Von Mises expansions of the Kaplan–Meier estimator are used to derive the pseudo-observations. For interval-censored data, a general class of parametric models for the survival function is studied. An asymptotic representation of the pseudo-observations is derived involving the Hessian matrix and the score vector. Theoretical results that justify the use of pseudo-observations in regression are also derived. The formula is illustrated on the piecewise-constant-hazard model for the RMST. The proposed approximations are extremely accurate, even for small sample sizes, as illustrated by Monte Carlo simulations and real data. We also study the gain in terms of computation time, as compared to the original jackknife method, which can be substantial for a large dataset.
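For context, the classical jackknife pseudo-observations that the paper's asymptotic formulas replace are defined as P_i = n·Ŝ(t0) − (n−1)·Ŝ₋ᵢ(t0), requiring n refits of the Kaplan-Meier estimator. The sketch below computes them naively, which is exactly the O(n²) cost the paper avoids; the helper names are illustrative.

```python
import numpy as np

def km_surv(times, events, t0):
    """Kaplan-Meier estimate of S(t0) for right-censored data."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    surv, at_risk = 1.0, len(times)
    for t, d in zip(times, events):
        if t > t0:
            break
        if d:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

def jackknife_pseudo_obs(times, events, t0):
    """Classical jackknife pseudo-observations for S(t0):
    P_i = n * S_hat(t0) - (n - 1) * S_hat_{-i}(t0), one KM refit
    per left-out subject."""
    n = len(times)
    full = km_surv(times, events, t0)
    return np.array([n * full - (n - 1) *
                     km_surv(np.delete(times, i), np.delete(events, i), t0)
                     for i in range(n)])
```

With no censoring the pseudo-observations reduce to the indicators 1{T_i > t0}, which is a convenient sanity check; under censoring they spread real values around the KM estimate so that standard regression machinery can be applied.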
12.
For sample size calculation in clinical trials with survival endpoints, the logrank test, which is the optimal method under the proportional hazards (PH) assumption, is predominantly used. In reality, the PH assumption may not hold. For example, in immuno-oncology trials, delayed treatment effects are often expected. A sample size that does not consider the potential violation of the PH assumption may lead to an underpowered study. In recent years, combination tests such as the maximum weighted logrank test have received great attention because of their robust performance across various hazards scenarios. In this paper, we propose a flexible simulation-free procedure to calculate the sample size using combination tests. The procedure extends Lakatos' Markov model and allows for complex situations encountered in a clinical trial, such as staggered entry and dropouts. We evaluate the procedure using two maximum weighted logrank tests, one projection-type test, and three other commonly used tests under various hazards scenarios. The simulation studies show that the proposed method can achieve the target power for all compared tests in most scenarios. The combination tests exhibit robust performance under both correct specification and misspecification scenarios and are highly recommended when the hazard-changing patterns are unknown beforehand. Finally, we demonstrate our method using two clinical trial examples and provide suggestions about sample size calculations under nonproportional hazards.
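The maximum weighted logrank test mentioned above takes the largest of several weighted logrank statistics, typically with Fleming-Harrington FH(ρ, γ) weights. The sketch below computes the test statistic only; the paper's actual contribution, a simulation-free sample size procedure built on Lakatos' Markov model, and the multivariate-normal null calibration of the maximum are not reproduced. The weight family shown is one common choice, not necessarily the paper's.

```python
import numpy as np

def fh_logrank_z(times, events, group, rho=0.0, gamma=0.0):
    """Fleming-Harrington FH(rho, gamma) weighted logrank Z statistic.
    The weight at each event time is S(t-)^rho * (1 - S(t-))^gamma, so
    FH(0,0) is the ordinary logrank and FH(0,1) up-weights late
    differences. Sign convention here: positive Z means group 1 has
    fewer events than expected (better survival)."""
    times, events, group = map(np.asarray, (times, events, group))
    surv = 1.0                       # left-continuous pooled KM, S(t-)
    U, V = 0.0, 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((times == t) & (events == 1)).sum()
        d1 = ((times == t) & (events == 1) & (group == 1)).sum()
        w = surv ** rho * (1.0 - surv) ** gamma
        U += w * (d1 - d * n1 / n)   # observed minus expected in group 1
        if n > 1:
            V += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        surv *= 1.0 - d / n
    return -U / np.sqrt(V)

def max_combo_z(times, events, group, weights=((0, 0), (0, 1), (1, 0))):
    """Maximum weighted logrank ("max-combo") statistic: the largest
    |Z| over a small family of FH weights."""
    return max(abs(fh_logrank_z(times, events, group, r, g))
               for r, g in weights)
```

Because the component statistics are correlated, the maximum cannot be referred to a plain normal table; its null distribution is multivariate normal, which is what makes analytic sample size work like the paper's nontrivial.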
13.
Rosner GL 《Biometrics》2005,61(1):239-245
This article presents an aid for monitoring clinical trials with failure-time endpoints based on Bayesian nonparametric analyses of the data. If one assumes a Dirichlet process prior for the survival distribution, the posterior distribution is a mixture of Dirichlet processes in the presence of censoring. Using Gibbs sampling, one can generate random samples from the posterior distribution. With samples from the posterior distributions of treatment-specific survival curves, one can evaluate the current evidence in favor of stopping or continuing the trial based on summary statistics of these survival curves. Because the method is nonparametric, it can easily be used, for example, in situations where hazards cross or are suspected to cross, and where relevant clinical decisions might depend on estimating when the integral between the curves is expected to become positive in favor of the new but toxic therapy. An example based on an actual trial illustrates the method.
14.
We propose a Dirichlet process mixture model (DPMM) for the P-value distribution in a multiple testing problem. The DPMM allows us to obtain posterior estimates of quantities such as the proportion of true null hypotheses and the probability of rejection of a single hypothesis. We describe a Markov chain Monte Carlo algorithm for computing the posterior and the posterior estimates. We propose an estimator of the positive false discovery rate based on these posterior estimates and investigate its performance via simulation. We also apply our methodology to analyze a leukemia data set.
15.
Carstens BC, Brunsfeld SJ, Demboski JR, Good JM, Sullivan J 《Evolution; international journal of organic evolution》2005,59(8):1639-1652
We examine the evolution of mesic forest ecosystems in the Pacific Northwest of North America using a statistical phylogeography approach in four animal and two plant lineages. Three a priori hypotheses, which explain the disjunction in the mesic forest ecosystem with either recent dispersal or ancient vicariance, are tested with phylogenetic and coalescent methods. We find strong support in three amphibian lineages (Ascaphus spp., Dicamptodon spp., and Plethodon vandykei and P. idahoensis) for deep divergence between coastal and inland populations, as predicted by the ancient vicariance hypothesis. Unlike the amphibians, the disjunction in other Pacific Northwest lineages is likely due to recent dispersal along a northern route. Topological and population divergence tests support the northern dispersal hypothesis in the water vole (Microtus richardsoni), and northern dispersal has some support in both the dusky willow (Salix melanopsis) and whitebark pine (Pinus albicaulis). These analyses demonstrate that genetic data sampled from across an ecosystem can provide insight into the evolution of ecological communities and suggest that the advantages of a statistical phylogeographic approach are most pronounced in comparisons across multiple taxa in a particular ecosystem. Genetic patterns in organisms as diverse as willows and salamanders can be used to test general regional hypotheses, providing a consistent metric for comparison among members of an ecosystem with disparate life-history traits.
16.
J. Antonio Baeza, Raymond T. Bauer, Junji Okuno, Martin Thiel 《Zoological Journal of the Linnean Society》2014,172(2):426-450
The Rhynchocinetidae ('hinge-beak' shrimps) is a family of marine caridean decapods with considerable variation in sexual dimorphism, male weaponry, mating tactics, and sexual systems. Thus, this group is an excellent model with which to analyse the evolution of these important characteristics, which are of interest not only in shrimps specifically but also in animal taxa in general. Yet, there exists no phylogenetic hypothesis, either molecular or morphological, for this taxon against which to test either the evolution of behavioural traits within the Rhynchocinetidae or its genealogical relationships with other caridean taxa. In this study, we tested (1) hypotheses on the phylogenetic relationships of rhynchocinetid shrimps, and (2) the efficacy of different (one-, two-, and three-phase) methods to generate a reliable phylogeny. Total genomic DNA was extracted from tissue samples taken from 17 species of Rhynchocinetidae and five other species currently or previously assigned to the same superfamily (Nematocarcinoidea); six species from other superfamilies were used as outgroups. Sequences from two nuclear genes (H3 and Enolase) and one mitochondrial gene (12S) were used to construct phylogenies. One-phase phylogenetic analyses (SATé-II) and classical two- and three-phase phylogenetic analyses were employed, using both maximum likelihood and Bayesian inference methods. Both a two-gene data set (H3 and Enolase) and a three-gene data set (H3, Enolase, 12S) were utilized to explore the relationships amongst the targeted species. These analyses showed that the superfamily Nematocarcinoidea, as currently accepted, is polyphyletic. Furthermore, the two major clades recognized by the SATé-II analysis are clearly concordant with the genera Rhynchocinetes and Cinetorhynchus, which are currently recognized in the morphological-based classification (implicit phylogeny) as composing the family Rhynchocinetidae. The SATé-II method is considered superior to the other phylogenetic analyses employed, which failed to recognize these two major clades. Studies using more genes and a more complete species data set are needed to test yet unresolved inter- and intrafamilial systematic and evolutionary questions about this remarkable clade of caridean shrimps. © 2014 The Linnean Society of London
17.
Bayesian curve fitting using multivariate normal mixtures (Cited by: 1; self-citations: 0; other citations: 1)
18.
Summary. In many clinical trials patients are intermittently assessed for the transition to an intermediate state, such as occurrence of a disease-related nonfatal event, and for death. Estimation of the distribution of nonfatal-event-free survival time, that is, the time to the first occurrence of the nonfatal event or death, is the primary focus of the data analysis. The difficulty with this estimation is that the intermittent assessment of patients results in two forms of incompleteness: the times of occurrence of nonfatal events are interval censored, and, when a nonfatal event does not occur by the time of the last assessment, a patient's nonfatal event status is not known from the time of the last assessment until the end of follow-up for death. We consider both forms of incompleteness within the framework of an "illness–death" model. We develop nonparametric maximum likelihood (ML) estimation in an "illness–death" model from interval-censored observations with missing status of the intermediate transition. We show that the ML estimators are self-consistent and propose an algorithm for obtaining them. This work thus provides new methodology for the analysis of incomplete data that arise from clinical trials. We apply this methodology to data from a recently reported cancer clinical trial (Bonner et al., 2006, New England Journal of Medicine 354, 567–578) and compare our estimation results with those obtained using a Food and Drug Administration recommended convention.
19.
Negera Wakgari Deresa, Ingrid Van Keilegom 《Biometrical journal. Biometrische Zeitschrift》2020,62(1):136-156
When modeling survival data, it is common to assume that the (log-transformed) survival time (T) is conditionally independent of the (log-transformed) censoring time (C) given a set of covariates. There are numerous situations in which this assumption is not realistic, and a number of correction procedures have been developed for different models. However, in most cases, either some prior knowledge about the association between T and C is required, or some auxiliary information or data are assumed to be available. When this is not the case, the application of many existing methods turns out to be limited. The goal of this paper is to overcome this problem by developing a flexible parametric model, namely a type of transformed linear model. We show that the association between T and C is identifiable in this model. The performance of the proposed method is investigated both asymptotically and through finite sample simulations. We also develop a formal goodness-of-fit test approach to assess the quality of the fitted model. Finally, the approach is applied to data coming from a study on liver transplants.
20.
JERROD A. BUTCHER, JULIE E. GROCE, CHRISTOPHER M. LITUMA, M. CONSTANZA COCIMANO, YARA SÁNCHEZ-JOHNSON, ANDREW J. CAMPOMIZZI, THERESA L. POPE, KELLY S. REYNA, ANNA C. S. KNIPPS 《The Journal of wildlife management》2007,71(7):2142-2144
ABSTRACT The controversy over the use of null hypothesis statistical testing (NHST) has persisted for decades, yet NHST remains the most widely used statistical approach in wildlife sciences and ecology. A disconnect exists between those opposing NHST and many wildlife scientists and ecologists who conduct and publish research. This disconnect causes confusion and frustration on the part of students. We, as students, offer our perspective on how this issue may be addressed. Our objective is to encourage academic institutions and advisors of undergraduate and graduate students to introduce students to various statistical approaches so we can make well-informed decisions on the appropriate use of statistical tools in wildlife and ecological research projects. We propose an academic course that introduces students to various statistical approaches (e.g., Bayesian, frequentist, Fisherian, information theory) to build a foundation for critical thinking in applying statistics. We encourage academic advisors to become familiar with the statistical approaches available to wildlife scientists and ecologists and thus decrease bias towards one approach. Null hypothesis statistical testing is likely to persist as the most common statistical analysis tool in wildlife science until academic institutions and student advisors change their approach and emphasize a wider range of statistical methods.