Similar Documents
20 similar documents found (search time: 31 ms)
1.
In risk assessment, it is often desired to make inferences on the low dose levels at which a specific benchmark risk is attained. Applications of simultaneous hyperbolic confidence bands for low-dose risk estimation with quantal data under different dose-response models (multistage, Abbott-adjusted Weibull, and Abbott-adjusted log-logistic models) have appeared in the literature. The use of simultaneous three-segment bands under the multistage model has also been proposed recently. In this article, we present explicit formulas for constructing asymptotic one-sided simultaneous hyperbolic and three-segment bands for the simple log-logistic regression model. We use the simultaneous construction to estimate upper hyperbolic and three-segment confidence bands on extra risk and to obtain lower limits on the benchmark dose by inverting the upper bands on risk under the Abbott-adjusted log-logistic model. Monte Carlo simulations evaluate the characteristics of the simultaneous limits. An example is given to illustrate the use of the proposed methods and to compare the two types of simultaneous limits at very low dose levels.
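To make the inversion step concrete, here is a minimal Python sketch using invented quantal data: fit an Abbott-adjusted log-logistic model by maximum likelihood, compute an upper confidence limit on extra risk, and invert it for a lower limit on the benchmark dose (BMDL). For brevity the bound is a pointwise parametric-bootstrap limit; the paper's simultaneous hyperbolic and three-segment bands would replace the pointwise quantile with a simultaneous critical value.

```python
import numpy as np
from scipy.optimize import minimize, brentq

rng = np.random.default_rng(1)
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0])   # hypothetical dose groups
n = np.array([50, 50, 50, 50, 50])             # animals per group
y = np.array([2, 4, 7, 15, 30])                # responders per group

def extra_risk(d, b0, b1):
    # Under the Abbott-adjusted model R(d) = g + (1-g)F(d), the extra risk
    # (R(d) - R(0)) / (1 - R(0)) equals the log-logistic term F(d) itself.
    d = np.asarray(d, dtype=float)
    out = np.zeros_like(d)
    pos = d > 0
    out[pos] = 1.0 / (1.0 + np.exp(-(b0 + b1 * np.log(d[pos]))))
    return out

def negloglik(theta, yy):
    g = 1.0 / (1.0 + np.exp(-theta[0]))        # background risk on logit scale
    p = np.clip(g + (1 - g) * extra_risk(doses, theta[1], theta[2]),
                1e-10, 1 - 1e-10)
    return -np.sum(yy * np.log(p) + (n - yy) * np.log(1 - p))

def fit(yy):
    return minimize(negloglik, x0=[-3.0, 0.0, 1.0], args=(yy,),
                    method="Nelder-Mead").x

theta = fit(y)
g_hat = 1.0 / (1.0 + np.exp(-theta[0]))
p_hat = g_hat + (1 - g_hat) * extra_risk(doses, theta[1], theta[2])

# Parametric-bootstrap upper limit on extra risk (pointwise, for illustration).
boots = np.array([fit(rng.binomial(n, p_hat)) for _ in range(200)])
def upper_re(d):
    return np.quantile([extra_risk([d], b[1], b[2])[0] for b in boots], 0.95)

BMR = 0.10                                     # benchmark: 10% extra risk
bmd = brentq(lambda d: extra_risk([d], theta[1], theta[2])[0] - BMR,
             1e-6, doses.max())
bmdl = brentq(lambda d: upper_re(d) - BMR, 1e-6, doses.max())
print(f"BMD = {bmd:.4f}, BMDL (pointwise 95% bootstrap) = {bmdl:.4f}")
```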

2.
Inference after two-stage single-arm designs with binary endpoint is challenging due to the nonunique ordering of the sampling space in multistage designs. We illustrate the problem of specifying test-compatible confidence intervals for designs with nonconstant second-stage sample size and present two approaches that guarantee confidence intervals consistent with the test decision. Firstly, we extend the well-known Clopper–Pearson approach of inverting a family of two-sided hypothesis tests from the group-sequential case to designs with fully adaptive sample size. Test compatibility is achieved by using a sample space ordering that is derived from a test-compatible estimator. The resulting confidence intervals tend to be conservative but assure the nominal coverage probability. In order to assess the possibility of further improving these confidence intervals, we pursue a direct optimization approach minimizing the mean width of the confidence intervals. While the latter approach produces more stable coverage probabilities, it is also slightly anti-conservative and yields only negligible improvements in mean width. We conclude that the Clopper–Pearson-type confidence intervals based on a test-compatible estimator are the best choice if the nominal coverage probability is not to be undershot and compatibility of test decision and confidence interval is to be preserved.
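As background for the construction, here is a minimal sketch of the single-stage Clopper–Pearson interval, which inverts two one-sided exact binomial tests via beta quantiles; the paper's two-stage extension replaces the simple ordering by counts with an ordering derived from a test-compatible estimator. The example counts are arbitrary.

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact (conservative) two-sided CI for a binomial proportion."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

print(clopper_pearson(12, 40))  # e.g. 12 responders out of 40
```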

3.
We propose new resampling-based approaches to construct asymptotically valid time-simultaneous confidence bands for cumulative hazard functions in multistate Cox models. In particular, we exemplify the methodology in detail for the simple Cox model with time-dependent covariates, where the data may be subject to independent right-censoring or left-truncation. We use simulations to investigate their finite sample behavior. Finally, the methods are utilized to analyze two empirical examples with survival and competing risks data.
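The flavor of the resampling construction can be conveyed with a one-sample sketch: a multiplier (wild) bootstrap band for the Nelson–Aalen cumulative hazard under right-censoring, with simulated data. The paper applies the same multiplier idea to cumulative hazards in multistate Cox models; this toy version only illustrates the calibration step.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 200
T = rng.exponential(1.0, m)                 # simulated event times
C = rng.exponential(1.5, m)                 # simulated censoring times
time = np.minimum(T, C)
status = (T <= C).astype(float)

order = np.argsort(time)
time, status = time[order], status[order]
at_risk = np.arange(m, 0, -1)               # number at risk Y(t_i)
inc = status / at_risk                      # Nelson-Aalen increments
na = np.cumsum(inc)                         # cumulative hazard estimate

# Multiplier bootstrap: perturb each increment with an iid N(0,1) weight and
# record the supremum of the perturbed process to calibrate the band width.
B = 1000
sup_stats = np.array([np.abs(np.cumsum(rng.standard_normal(m) * inc)).max()
                      for _ in range(B)])
q = np.quantile(sup_stats, 0.95)

lower, upper = np.maximum(na - q, 0.0), na + q   # simultaneous 95% band
print(f"half-width {q:.3f}; cumulative hazard at last time {na[-1]:.3f}")
```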

4.
The problem of finding exact simultaneous confidence bounds for differences in regression models for k groups via the union-intersection method is considered. The error terms are taken to be iid normal random variables. Under an assumption slightly more general than having identical design matrices for each of the k groups, it is shown that an existing probability point for the multivariate studentized range can be used to find the necessary probability point for pairwise comparisons of regression models. The resulting methods can be used with simple or multiple regression. Under a weaker assumption on the k design matrices that allows more observations to be taken from the control group than from the k-1 treatment groups, a method is developed for computing exact probability points for comparing the simple linear regression models of the k-1 groups to that of the control. Within a class of designs, the optimal design for comparisons with a control takes the square root of (k-1) times as many observations from the control as from each treatment group. The simultaneous confidence bounds for all pairwise differences and for comparisons with a control are much narrower than Spurrier's intervals for all contrasts of k regression lines.
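A quick worked example of that square-root allocation rule, with an invented total budget:

```python
import math

k, total = 5, 240              # hypothetical: 4 treatments + 1 control, 240 obs
r = math.sqrt(k - 1)           # control-to-treatment allocation ratio = 2 here
n_t = total / (r + (k - 1))    # per-treatment group size: 240 / 6 = 40
n_c = r * n_t                  # control group size: 80
print(f"ratio = {r:.2f}, n_t = {n_t:.0f}, n_c = {n_c:.0f}")
```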

5.
6.
Multiple-dose factorial designs may provide confirmatory evidence that (fixed) combination drugs are superior to either component drug alone. Moreover, a useful and safe range of dose combinations may be identified. In our study, we focus on (A) adjustments of the overall significance level made necessary by multiple testing, (B) improvement of conventional statistical methods with respect to power, distributional assumptions and dimensionality, and (C) construction of corresponding simultaneous confidence intervals. We propose novel resampling algorithms, which in a simple way take the correlation of multiple test statistics into account, thus improving power. Moreover, these algorithms can easily be extended to combinations of more than two component drugs and binary outcome data. Published data summaries from a blood pressure reduction trial are analysed and presented as a worked example. An implementation of the proposed methods is available online as an R package.
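A minimal sketch of the single-step max-T resampling idea underlying such algorithms: adjust correlated test statistics by the bootstrap distribution of their maximum rather than by Bonferroni. Groups, effect sizes, and sample sizes below are invented, and this generic recipe is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
groups = {g: rng.normal(mu, 1.0, 30) for g, mu in
          [("A", 0.0), ("B", 0.4), ("A+B", 0.9)]}   # invented dose combinations
control = rng.normal(0.0, 1.0, 30)

def tstat(x, y):
    nx, ny = len(x), len(y)
    s2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(s2 * (1 / nx + 1 / ny))

t_obs = {g: tstat(x, control) for g, x in groups.items()}

# Bootstrap the null distribution of max |t| from mean-centred samples; the
# shared control resample preserves the correlation between the tests.
B = 2000
centred = {g: x - x.mean() for g, x in groups.items()}
c0 = control - control.mean()
max_null = np.empty(B)
for b in range(B):
    cb = rng.choice(c0, len(c0), replace=True)
    max_null[b] = max(abs(tstat(rng.choice(xc, len(xc), replace=True), cb))
                      for xc in centred.values())

adj_p = {g: (max_null >= abs(t)).mean() for g, t in t_obs.items()}
print(adj_p)     # single-step max-T adjusted p-values
```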

7.
Certain protein-design calculations involve using an experimentally determined high-resolution structure as a template to identify new sequences that can adopt the same fold. This approach has led to the successful design of many novel, well-folded, native-like proteins. Although any atomic-resolution structure can serve as a template in such calculations, most successful designs have used high-resolution crystal structures. Because there are many proteins for which crystal structures are not available, it is of interest whether nuclear magnetic resonance (NMR) templates are also appropriate. We have analyzed differences between using X-ray and NMR templates in side-chain repacking and design calculations. We assembled a database of 29 proteins for which both a high-resolution X-ray structure and an ensemble of NMR structures are available. Using these pairs, we compared the rotamericity, χ1-angle recovery, and native-sequence recovery of X-ray and NMR templates. We carried out design using RosettaDesign on both types of templates, and compared the energies and packing qualities of the resulting structures. Overall, the X-ray structures were better templates for use with Rosetta. However, for ~20% of proteins, a member of the reported NMR ensemble gave rise to designs with similar properties. Re-evaluating RosettaDesign structures with other energy functions indicated much smaller differences between the two types of templates. Ultimately, experiments are required to confirm the utility of particular X-ray and NMR templates. But our data suggest that the lack of a high-resolution X-ray structure should not preclude attempts at computational design if an NMR ensemble is available.

8.
Multistate models can be successfully used for describing complex event history data, for example, stages in the disease progression of a patient. The so-called “illness-death” model plays a central role in the theory and practice of these models. Many time-to-event datasets from medical studies with multiple end points can be reduced to this generic structure. In these models, one important goal is the modeling of transition rates, but biomedical researchers are also interested in reporting interpretable results in a simple and summarized manner. These include estimates of predictive probabilities, such as transition probabilities, occupation probabilities, cumulative incidence functions, and sojourn time distributions. We review some of the available methods for estimating such quantities in the progressive illness-death model, conditionally on covariates or unconditionally. For some of these quantities, estimators based on subsampling are employed. Subsampling, also referred to as landmarking, leads to small sample sizes and usually to heavily censored data, yielding estimators with higher variability. To overcome this issue, estimators based on a preliminary estimation (presmoothing) of the probability of censoring may be used. Among these, the presmoothed estimators for the cumulative incidences are new. We also introduce feasible estimation methods for the cumulative incidence function conditionally on covariates. The proposed methods are illustrated using real data. A comparative simulation study of several estimation approaches is performed, and existing software in the form of R packages is discussed.
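With complete (uncensored) data, the state-occupation probabilities of the progressive illness-death model reduce to empirical fractions, which is the target the reviewed estimators recover under censoring. A simulated sketch (all transition rates invented):

```python
import numpy as np

rng = np.random.default_rng(5)
m = 1000
t_illness = rng.exponential(2.0, m)              # healthy -> ill
t_death_h = rng.exponential(5.0, m)              # healthy -> dead directly
t_death_i = t_illness + rng.exponential(1.5, m)  # ill -> dead

def occupation(t):
    """Empirical P(healthy), P(ill), P(dead) at time t (progressive model)."""
    ill_first = t_illness < t_death_h            # who gets ill before dying
    healthy = (~ill_first & (t < t_death_h)) | (ill_first & (t < t_illness))
    ill = ill_first & (t_illness <= t) & (t < t_death_i)
    return healthy.mean(), ill.mean(), 1 - healthy.mean() - ill.mean()

for t in (0.5, 1.0, 2.0, 4.0):
    h, i, d = occupation(t)
    print(f"t={t:3.1f}: healthy={h:.2f}, ill={i:.2f}, dead={d:.2f}")
```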

9.
Statistical models support medical research by facilitating individualized outcome prognostication conditional on independent variables or by estimating effects of risk factors adjusted for covariates. The theory of statistical models is well established if the set of independent variables to consider is fixed and small. Hence, we can assume that effect estimates are unbiased and the usual methods for confidence interval estimation are valid. In routine work, however, it is not known a priori which covariates should be included in a model, and we are often confronted with 10 to 30 candidate variables, often too many to include in a single statistical model. We provide an overview of the available variable selection methods that are based on significance or information criteria, penalized likelihood, the change-in-estimate criterion, background knowledge, or combinations thereof. These methods were usually developed in the context of the linear regression model and then transferred to more general models, such as generalized linear models or models for censored survival data. Variable selection, in particular if used in explanatory modeling where effect estimates are of central interest, can compromise the stability of a final model, the unbiasedness of regression coefficients, and the validity of p-values or confidence intervals. Therefore, we give pragmatic recommendations for the practicing statistician on the application of variable selection methods in general (low-dimensional) modeling problems and on performing stability investigations and inference. We also propose some quantities, based on resampling the entire variable selection process, to be routinely reported by software packages offering automated variable selection algorithms.
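As a concrete instance of selection by an information criterion, here is a minimal backward-elimination loop using AIC in a Gaussian linear model, on simulated data where only x1 and x2 carry signal. As the paper stresses, naive inference after such a procedure is not nominally valid.

```python
import numpy as np

rng = np.random.default_rng(11)
m, names = 200, ["x1", "x2", "x3", "x4", "x5"]
X = rng.standard_normal((m, len(names)))
y = 1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(m)

def aic(cols):
    # Gaussian AIC up to a constant: n*log(RSS/n) + 2*(#params).
    Z = np.column_stack([np.ones(m)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return m * np.log(rss / m) + 2 * (len(cols) + 1)

current = list(range(len(names)))
while current:
    candidates = [(aic([c for c in current if c != drop]), drop)
                  for drop in current]
    best_aic, drop = min(candidates)
    if best_aic >= aic(current):
        break                          # no single removal improves AIC
    current.remove(drop)

print("selected:", [names[c] for c in current])   # typically ['x1', 'x2']
```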

10.
Scientists often need to test hypotheses and construct corresponding confidence intervals. In designing a study to test a particular null hypothesis, traditional methods lead to a sample size large enough to provide sufficient statistical power. In contrast, traditional methods based on constructing a confidence interval lead to a sample size likely to control the width of the interval. With either approach, a sample size so large as to waste resources or introduce ethical concerns is undesirable. This work was motivated by the concern that existing sample size methods often make it difficult for scientists to achieve their actual goals. We focus on situations that involve a fixed, unknown scalar parameter representing the true state of nature. The width of the confidence interval is defined as the difference between the (random) upper and lower bounds. An event “width” is said to occur if the observed confidence interval width is less than a fixed constant chosen a priori. An event “validity” is said to occur if the parameter of interest is contained between the observed upper and lower confidence interval bounds. An event “rejection” is said to occur if the confidence interval excludes the null value of the parameter. In our opinion, scientists often implicitly seek to have all three occur: width, validity, and rejection. New results illustrate that neglecting rejection or width (and less so validity) often provides a sample size with a low probability of the simultaneous occurrence of all three events. We recommend considering all three events simultaneously when choosing a criterion for determining a sample size. We provide new theoretical results for any scalar (mean) parameter in a general linear model with Gaussian errors and fixed predictors. Convenient computational forms are included, as well as numerical examples to illustrate our methods.
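The three events are easy to examine by Monte Carlo. A sketch for a one-sample t-interval of a Gaussian mean (all settings invented), showing that the joint probability of width, validity, and rejection can fall well below each marginal probability:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
mu, sigma, n_obs, w_max, reps = 0.5, 1.0, 40, 0.6, 20000
width = validity = rejection = all_three = 0

for _ in range(reps):
    x = rng.normal(mu, sigma, n_obs)
    half = stats.t.ppf(0.975, n_obs - 1) * x.std(ddof=1) / np.sqrt(n_obs)
    lo, hi = x.mean() - half, x.mean() + half
    ev = (hi - lo < w_max,          # width: CI narrower than w_max
          lo <= mu <= hi,           # validity: CI covers the true mean
          lo > 0 or hi < 0)         # rejection: CI excludes the null value 0
    width += ev[0]; validity += ev[1]; rejection += ev[2]
    all_three += all(ev)

for name, c in [("width", width), ("validity", validity),
                ("rejection", rejection), ("all three", all_three)]:
    print(f"P({name}) ~ {c / reps:.3f}")
```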

11.
We study the use of simultaneous confidence bands for low-dose risk estimation with quantal response data, and derive methods for estimating simultaneous upper confidence limits on predicted extra risk under a multistage model. By inverting the upper bands on extra risk, we obtain simultaneous lower bounds on the benchmark dose (BMD). Monte Carlo evaluations explore characteristics of the simultaneous limits under this setting, and a suite of actual data sets is used to compare existing methods for placing lower limits on the BMD.

12.
The derivation of simultaneous confidence regions for some multiple-testing procedures (MTPs) of practical interest has remained an unsolved problem. This is the case, for example, for Hochberg's step-up MTP and Hommel's more powerful MTP, which is neither a step-up nor a step-down procedure. It is shown in this article how the direct approach used previously by the author to construct confidence regions for certain closed-testing procedures (CTPs) can be extended to a rather general setup. The general results are then applied to a situation with one-sided inferences and CTPs belonging to a class studied by Wei Liu. This class consists of CTPs based on ordered marginal p-values. It includes Holm's, Hochberg's, and Hommel's MTPs. A property of the confidence regions derived for these three MTPs is that no confidence assertions sharper than rejection assertions can be made unless all null hypotheses are rejected. Briefly, this is related to the fact that these MTPs are quite powerful. The class of CTPs considered also includes, however, MTPs related to Holm's, Hochberg's, and Hommel's MTPs that are less powerful but allow confidence assertions sharper than rejection assertions even if not all null hypotheses are rejected. One may thus choose and prespecify such an MTP, though at the cost of lower rejection power.
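For orientation, here is Holm's step-down procedure, the most familiar member of the class of closed-testing procedures based on ordered marginal p-values discussed above (the p-values are arbitrary examples):

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Return a boolean rejection vector for Holm's step-down MTP."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    reject = np.zeros(len(p), dtype=bool)
    for step, idx in enumerate(order):
        if p[idx] <= alpha / (len(p) - step):   # thresholds a/m, a/(m-1), ...
            reject[idx] = True
        else:
            break            # once one ordered test fails, stop rejecting
    return reject

print(holm([0.003, 0.012, 0.04, 0.30]))  # -> [ True  True False False]
```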

13.
Flipper bands are used to mark penguins because leg bands can injure their legs. However, concerns remain over the possible effects of flipper bands on penguins. We examined the effects of stainless-steel flipper bands on the duration of foraging trips by Magellanic Penguins (Spheniscus magellanicus) at Punta Tombo, Argentina, using an automated detection system. We predicted that, if bands were costly and increased drag, flipper-banded penguins would make longer foraging trips than those with small or no external markings. We tagged 121 penguins with radio-frequency identification (RFID) tags and an additional external mark. We placed either a stainless-steel band on the left flipper (N = 62) or a 2×10-mm small-animal ear tag in the outside web of the left foot (N = 59). We measured foraging-trip durations (N = 376 trips) for 68 adult penguins with chicks from 15 December 2007 to 28 February 2008. Contrary to predictions, trip duration was similar for banded and web-tagged penguins (P = 0.22) and for males and females (P = 0.52), with no interaction between tag type and sex (P = 0.52). No penguins marked in the 2007 breeding season and recaptured between 30 September and 30 November 2008 (N = 113) lost flipper bands or web tags, but three RFID tags failed between March and September 2008. Properly designed and applied flipper bands were a reliable marking method for Magellanic Penguins, had a lower failure rate than RFIDs, and did not affect foraging-trip duration.

14.
Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure.
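The alternating idea generalizes beyond survival models. A minimal sketch with a toy nonlinear least-squares model y = a·exp(−b·x): estimate one parameter block in closed form conditional on the other, optimize the other block by a 1-D search, and iterate to convergence. This is a stand-in illustration, not the paper's hazard-model implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(8)
x = np.linspace(0, 4, 120)
y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.05, x.size)  # true a=2, b=0.7

def sse(a, b):
    return float(np.sum((y - a * np.exp(-b * x)) ** 2))

a, b = 1.0, 1.0
for it in range(100):
    basis = np.exp(-b * x)                     # step 1: given b, optimal a
    a = (y @ basis) / (basis @ basis)          # is a closed-form projection
    b_new = minimize_scalar(lambda bb: sse(a, bb),
                            bounds=(0.01, 5.0),
                            method="bounded").x  # step 2: given a, search b
    if abs(b_new - b) < 1e-9:
        b = b_new
        break
    b = b_new

print(f"converged after {it + 1} sweeps: a = {a:.3f}, b = {b:.3f}")
```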

15.
In randomized trials with imperfect compliance, it is sometimes recommended to supplement the intention-to-treat estimate with an instrumental variable (IV) estimate, which is consistent for the effect of treatment administration in those subjects who would get treated if randomized to treatment and would not get treated if randomized to control. IV estimation, however, has been criticized for its reliance on the simultaneous existence of complementary “fatalistic” compliance states. The objective of the present paper is to identify sufficient conditions for consistent estimation of treatment effects in randomized trials with stochastic compliance. It is shown that in the stochastic framework the classical IV estimator is generally inconsistent for the population-averaged treatment effect. However, even under stochastic compliance, with certain common experimental designs the IV estimator and a simple alternative estimator can be used for consistent estimation of the effect of treatment administration in well-defined and identifiable subsets of the study population.
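For reference, the classical IV (Wald) estimator in this setting is the intention-to-treat effect divided by the between-arm difference in treatment receipt. A simulated sketch with stochastic compliance and a homogeneous treatment effect (all numbers invented; the paper's point is that with heterogeneous effects this estimator generally loses its population-averaged interpretation):

```python
import numpy as np

rng = np.random.default_rng(42)
m = 4000
Z = rng.integers(0, 2, m)                       # randomized assignment
compliance_prob = np.where(Z == 1, 0.75, 0.10)  # stochastic compliance
D = rng.binomial(1, compliance_prob)            # treatment actually received
Y = 1.0 * D + rng.standard_normal(m)            # true effect of receipt = 1.0

itt = Y[Z == 1].mean() - Y[Z == 0].mean()       # intention-to-treat estimate
uptake = D[Z == 1].mean() - D[Z == 0].mean()    # first-stage compliance gap
print(f"ITT = {itt:.3f}, IV = {itt / uptake:.3f}")  # IV is near 1.0 here
```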

16.
The search for a systems-level picture of metabolism as a web of molecular interactions provides a paradigmatic example of how the methods used to characterize a system can bias the interpretation of its functional meaning. Metabolic maps have been analyzed using novel techniques from network theory, revealing some non-trivial, functionally relevant properties. These include a small-world structure and hierarchical modularity. However, as discussed here, some of these properties might actually result from an inappropriate way of defining network interactions. Starting from the so-called bipartite organization of metabolism, where the two meaningful subsets (reactions and metabolites) are considered, most current works use only one of the subsets by means of so-called graph projections. Unfortunately, projected graphs often ignore relevant biological and chemical constraints, thus leading to statistical artifacts. These drawbacks, and alternative approaches that avoid them, need to be properly addressed.
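The projection step at issue is easy to state in code: the one-mode metabolite graph links any two metabolites that share a reaction, discarding stoichiometry and directionality. In the invented toy network below, currency metabolites such as ATP/ADP immediately create shortcut edges of the kind that can distort small-world and modularity statistics:

```python
from itertools import combinations

reactions = {                     # reaction -> metabolites it touches
    "R1": {"glucose", "ATP", "G6P", "ADP"},
    "R2": {"G6P", "F6P"},
    "R3": {"F6P", "ATP", "FBP", "ADP"},
}

# One-mode projection: connect every pair of metabolites co-occurring
# in a reaction. ATP/ADP end up linked to nearly everything.
projection = set()
for mets in reactions.values():
    projection.update(frozenset(e) for e in combinations(sorted(mets), 2))

print(f"{len(projection)} metabolite-metabolite edges")
for e in sorted(map(sorted, projection)):
    print(" -", " -- ".join(e))
```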

17.
In cohort studies the outcome is often time to a particular event, and subjects are followed at regular intervals. Periodic visits may also monitor a secondary irreversible event influencing the event of primary interest, and a significant proportion of subjects develop the secondary event over the period of follow-up. The status of the secondary event serves as a time-varying covariate but is recorded only at the scheduled visits, generating incomplete time-varying covariates. While information on a typical time-varying covariate is missing for the entire follow-up period except at the visit times, the status of the secondary event is unavailable only between the visits at which the status changed, and is thus interval-censored. One may view the interval-censored secondary event status as a missing time-varying covariate, yet the missingness is partial, since partial information is provided throughout the follow-up period. The current practice of carrying the latest observed status forward produces biased estimators, and existing missing-covariate techniques cannot accommodate this special feature of missingness due to interval censoring. To handle interval-censored covariates in the Cox proportional hazards model, we propose an available-data estimator and a doubly robust-type estimator, as well as the maximum likelihood estimator via the EM algorithm, and present their asymptotic properties. We also present practical approaches that remain valid. We demonstrate the proposed methods using our motivating example from the Northern Manhattan Study.

18.
Interval-censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. In some settings, chronic disease processes may resolve, and individuals will cease to be at risk of events at the time of disease resolution. We develop an expectation-maximization algorithm for fitting a dynamic mover-stayer model to interval-censored recurrent event data under a Markov model with a piecewise-constant baseline rate function given a latent process. The model is motivated by settings in which the event times and the resolution time of the disease process are unobserved. The likelihood and algorithm are shown to yield estimators with small empirical bias in simulation studies. Data on the cumulative number of damaged joints are analyzed for patients with psoriatic arthritis, where individuals may experience disease remission.

19.
Glidden DV. Biometrics 2002; 58(2): 361-368.
Multistate event data, in which a single subject is at risk for multiple events, are common in biomedical applications. This article considers nonparametric estimation of the vector of probabilities of state membership at time t. Estimators derived under the Markov assumption have been shown (Datta and Satten, 2001, Statistics and Probability Letters 55, 403-411) to be consistent even for data that are non-Markov. Inference, however, must take into account possibly non-Markov transitions when constructing confidence bands for event curves. We develop robust confidence bands for these curves, evaluate them via simulation, and illustrate the method on two datasets.

20.
Drop-the-losers designs are statistical designs in which the two stages of a trial are separated by a data-based decision. In the first stage, k experimental treatments and a control are administered. During a transition period, the empirically best experimental treatment is selected for continuation into the second stage, along with the control. At the study's end, inference focuses on the comparison of the selected treatment with the control using both stages' data. Traditional methods used to make inferences based on both stages' data can yield tests with higher than advertised levels of significance and confidence intervals with lower than advertised confidence. For normally distributed data, methods are provided to correct these deficiencies, providing confidence intervals with accurate levels of confidence. Drop-the-losers designs are particularly applicable to biopharmaceutical clinical trials, where they allow Phase II and Phase III clinical trials to be conducted under a single protocol with the use of all available data.
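The inferential problem is easy to reproduce by simulation: selecting the empirically best arm and then computing an ordinary t-interval from the pooled two-stage data under-covers the true mean. In the sketch below all true means are equal, so any shortfall from 95% coverage is pure selection bias (settings invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(99)
k, n1, n2, reps = 4, 30, 60, 5000
cover = 0
for _ in range(reps):
    stage1 = rng.standard_normal((k, n1))       # k treatments, true mean 0
    best = stage1.mean(axis=1).argmax()         # "drop the losers"
    combined = np.concatenate([stage1[best],
                               rng.standard_normal(n2)])  # stage-2 data
    half = (stats.t.ppf(0.975, combined.size - 1)
            * combined.std(ddof=1) / np.sqrt(combined.size))
    cover += abs(combined.mean()) <= half       # does the naive CI cover 0?

print(f"naive coverage ~ {cover / reps:.3f} (nominal 0.95)")
```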
