Similar Articles
20 similar articles found (search time: 15 ms)
1.
Successful pharmaceutical drug development requires finding correct doses. The issues that conventional dose‐response analyses consider, namely whether responses are related to doses, which doses have responses differing from a control dose response, the functional form of a dose‐response relationship, and the dose(s) to carry forward, do not need to be addressed simultaneously. Determining if a dose‐response relationship exists, regardless of its functional form, and then identifying a range of doses to study further may be a more efficient strategy. This article describes a novel estimation‐focused Bayesian approach (BMA‐Mod) for carrying out the analyses when the actual dose‐response function is unknown. Realizations from Bayesian analyses of linear, generalized linear, and nonlinear regression models that may include random effects and covariates other than dose are optimally combined to produce distributions of important secondary quantities, including test‐control differences, predictive distributions of possible outcomes from future trials, and ranges of doses corresponding to target outcomes. The objective is similar to that of the hypothesis‐testing‐based MCP‐Mod approach, but BMA‐Mod provides more model and distributional flexibility and does not require testing hypotheses or adjusting for multiple comparisons. A number of examples illustrate the application of the method.
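The model-averaging idea above can be sketched as follows: posterior draws of a quantity of interest (here, a test-control difference) are pooled across candidate dose-response models in proportion to model weights. This is an illustrative sketch only, not the paper's BMA-Mod implementation; the model names, weights, and draw generators are hypothetical.

```python
import random

random.seed(1)

def draws_emax(n):
    # hypothetical posterior draws of the test-control difference
    # under an Emax-type model
    return [random.gauss(1.2, 0.3) for _ in range(n)]

def draws_linear(n):
    # hypothetical posterior draws under a linear model
    return [random.gauss(0.9, 0.4) for _ in range(n)]

# (weight, draw-generator) pairs; weights are assumed posterior
# model probabilities, not values from the paper
models = {"emax": (0.7, draws_emax), "linear": (0.3, draws_linear)}

def bma_draws(models, n):
    """Sample a model index by its weight, then one draw from that model."""
    names = list(models)
    weights = [models[m][0] for m in names]
    out = []
    for _ in range(n):
        m = random.choices(names, weights=weights)[0]
        out.append(models[m][1](1)[0])
    return out

pooled = bma_draws(models, 5000)
print(sum(pooled) / len(pooled))  # pooled mean, close to 0.7*1.2 + 0.3*0.9
```

The pooled draws form a single mixture distribution from which any secondary quantity (predictive intervals, dose ranges hitting a target) can be summarized without committing to one functional form.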

2.
Aims: To develop time‐dependent dose–response models for highly pathogenic avian influenza A (HPAI) of the H5N1 subtype virus. Methods and Results: A total of four candidate time‐dependent dose–response models were fitted to four survival data sets for animals (mice or ferrets) exposed to graded doses of HPAI H5N1 virus using maximum‐likelihood estimation. A beta‐Poisson dose–response model with the N50 parameter modified by an exponential‐inverse‐power time dependency or an exponential dose–response model with the k parameter modified by an exponential‐inverse time dependency provided a statistically adequate fit to the observed survival data. Conclusions: We have developed time‐dependent dose–response models that describe the mortality of animals exposed to an HPAI H5N1 virus. The developed models describe mortality over time and represent the observed experimental responses accurately. Significance and Impact of the Study: This is the first study describing time‐dependent dose–response models for HPAI H5N1 virus. The developed models will be a useful tool for estimating the mortality of HPAI H5N1 virus, which may depend on time postexposure, in preparing for a future influenza pandemic caused by this lethal virus.
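A minimal sketch of the exponential dose-response model with a time-dependent k parameter, as described above. The specific parameterization k(t) = exp(a - b/t) and the parameter values are assumptions for illustration, not the fitted values from the study.

```python
import math

def mortality_prob(dose, t, a=-8.0, b=3.0):
    """P(death by day t | dose) = 1 - exp(-k(t) * dose),
    with an assumed exponential-inverse time dependency for k."""
    k = math.exp(a - b / t)
    return 1.0 - math.exp(-k * dose)

# Mortality rises with both dose and time postexposure:
for t in (2, 7, 14):
    print(t, round(mortality_prob(1e4, t), 3))
```

The key structural feature is that the whole survival curve over time, not just an endpoint mortality fraction, is generated from one dose-dependent model.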

3.
Sequential designs for phase I clinical trials which incorporate maximum likelihood estimates (MLE) as data accrue are inherently problematic because of limited data for estimation early on. We address this problem for small phase I clinical trials with ordinal responses. In particular, we explore the problem of the nonexistence of the MLE of the logistic parameters under a proportional odds model with one predictor. We incorporate the probability of an undetermined MLE as a restriction, as well as ethical considerations, into a proposed sequential optimal approach, which consists of a start‐up design, a follow‐on design and a sequential dose‐finding design. Comparisons with nonparametric sequential designs are also performed based on simulation studies with parameters drawn from a real data set.

4.
We present the one‐inflated zero‐truncated negative binomial (OIZTNB) model, and propose its use as the truncated count distribution in Horvitz–Thompson estimation of an unknown population size. In the presence of unobserved heterogeneity, the zero‐truncated negative binomial (ZTNB) model is a natural choice over the positive Poisson (PP) model; however, when one‐inflation is present the ZTNB model either suffers from a boundary problem, or provides extremely biased population size estimates. Monte Carlo evidence suggests that in the presence of one‐inflation, the Horvitz–Thompson estimator under the ZTNB model can converge in probability to infinity. The OIZTNB model gives markedly different population size estimates compared to some existing truncated count distributions, when applied to several capture–recapture data sets that exhibit both one‐inflation and unobserved heterogeneity.
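The Horvitz–Thompson step can be sketched in a few lines: each observed unit is weighted by 1/(1 - p0), where p0 is the model-based probability of a zero count (never being captured). The negative-binomial parameters below are illustrative, not estimates from the OIZTNB fit.

```python
def nb_p0(mu, r):
    """P(Y = 0) for a negative binomial with mean mu and dispersion r."""
    return (r / (r + mu)) ** r

def horvitz_thompson(n_observed, p0):
    """Estimated total population size, including never-observed units."""
    return n_observed / (1.0 - p0)

p0 = nb_p0(mu=1.5, r=2.0)              # probability of being missed
print(round(horvitz_thompson(180, p0), 1))
```

The boundary problem mentioned above corresponds to p0 being driven toward 1 during estimation, which sends the estimator above to infinity.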

5.
A dose‐response analysis is robustified by estimating the asymptotic covariance of the fitted model parameters with an approximate information sandwich (a sandwich statistic) under heterogeneous variance. The robust method is described using a nonlinear four‐parameter regression model. The usual, robust, bootstrap, and jackknife estimates of the asymptotic variance are examined for the bioassay data. Under a normally distributed response with variances changing over the dose levels, the performance of the usual and robust variances is investigated by Monte Carlo study. It confirms the robustness of the sandwich estimate and shows the inaccuracy of the usual asymptotic variance estimates of fitted model parameters under different forms of nonconstant variance structures.
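The sandwich idea can be illustrated on the simplest case, a straight-line fit, where each observation keeps its own squared residual instead of assuming a common variance; the paper applies the same principle to a nonlinear four-parameter model. The data below are made up.

```python
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.3, 7.7, 10.4, 11.6]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# "Usual" model-based variance of the slope assumes constant variance:
s2 = sum(e * e for e in resid) / (n - 2)
var_usual = s2 / sxx

# Sandwich (HC0-type) variance lets each point keep its own squared residual:
var_sandwich = sum(((xi - xbar) ** 2) * e * e
                   for xi, e in zip(x, resid)) / sxx ** 2
print(var_usual, var_sandwich)
```

Under homoscedasticity the two estimates agree asymptotically; under nonconstant variance only the sandwich form remains consistent, which is the robustness property the Monte Carlo study confirms.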

6.
We consider the bivariate situation of some quantitative, ordinal, binary or censored response variable and some quantitative or ordinal exposure variable (dose) with a hypothetical effect on the response. Data can either be the outcome of a planned dose‐response experiment with only few dose levels or of an observational study where, for example, both exposure and response variable are observed within each individual. We are interested in testing the null hypothesis of no effect of the dose variable vs. a dose‐response function depending on an unknown ‘threshold’ parameter. The variety of dose‐response functions considered ranges from no observed effect level (NOEL) models to umbrella alternatives. Here we discuss generalizations of the method of Lausen & Schumacher (Biometrics, 1992, 48, 73–85) which are based on combinations of two‐sample rank statistics and rank statistics for trend. Our approach may be seen as a generalization of a proposal for change‐point problems. Using the approach of Davies (Biometrika, 1987, 74, 33–43) we derive and approximate the asymptotic null distribution for a large number of thresholds considered. We use an improved Bonferroni inequality as approximation for a small number of thresholds considered. Moreover, we analyse the small sample behaviour by means of a Monte Carlo study. Our paper is illustrated by examples from clinical research and epidemiology.

7.
Longitudinal studies frequently incur outcome-related nonresponse. In this article, we discuss a likelihood-based method for analyzing repeated binary responses when the mechanism leading to missing response data depends on unobserved responses. We describe a pattern-mixture model for the joint distribution of the vector of binary responses and the indicators of nonresponse patterns. Specifically, we propose an extension of the multivariate logistic model to handle nonignorable nonresponse. This method yields estimates of the mean parameters under a variety of assumptions regarding the distribution of the unobserved responses. Because these models make unverifiable identifying assumptions, we recommend conducting sensitivity analyses that provide a range of inferences, each of which is valid under different assumptions for nonresponse. The methodology is illustrated using data from a longitudinal study of obesity in children.

8.
Lei Xu & Jun Shao, Biometrics, 2009, 65(4):1175–1183
In studies with longitudinal or panel data, missing responses often depend on values of responses through a subject‐level unobserved random effect. Besides the likelihood approach based on parametric models, there exists a semiparametric method, the approximate conditional model (ACM) approach, which relies on the availability of a summary statistic and a linear or polynomial approximation to some random effects. However, two important issues must be addressed in applying ACM. The first is how to find a summary statistic and the second is how to estimate the parameters in the original model using estimates of parameters in ACM. Our study addresses these two issues. For the first issue, we derive summary statistics under various situations. For the second issue, we propose to use a grouping method, instead of linear or polynomial approximation to random effects. Because the grouping method is a moment‐based approach, the conditions we assume in deriving summary statistics are weaker than the existing ones in the literature. When the derived summary statistic is continuous, we propose to use a classification tree method to obtain an approximate summary statistic for grouping. Some simulation results are presented to study the finite sample performance of the proposed method. An application is illustrated using data from the Modification of Diet in Renal Disease study.

9.
Interval‐censored recurrent event data arise when the event of interest is not readily observed but the cumulative event count can be recorded at periodic assessment times. In some settings, chronic disease processes may resolve, and individuals will cease to be at risk of events at the time of disease resolution. We develop an expectation‐maximization algorithm for fitting a dynamic mover‐stayer model to interval‐censored recurrent event data under a Markov model with a piecewise‐constant baseline rate function given a latent process. The model is motivated by settings in which the event times and the resolution time of the disease process are unobserved. The likelihood and algorithm are shown to yield estimators with small empirical bias in simulation studies. Data are analyzed on the cumulative number of damaged joints in patients with psoriatic arthritis where individuals experience disease remission.

10.
In this article, we describe a conditional score test for detecting a monotone dose‐response relationship with ordinal response data. We consider three different versions of this test: asymptotic, conditional exact, and mid‐P conditional score test. Exact and asymptotic power formulae based on these tests are studied. Asymptotic sample size formulae based on the asymptotic conditional score test are derived. The proposed formulae are applied to a vaccination study and a developmental toxicity study for illustrative purposes. Actual significance level and exact power properties of these tests are compared in a small empirical study. The mid‐P conditional score test is observed to be the most powerful test with actual significance level close to the pre‐specified nominal level.
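The mid-P idea can be shown on a one-sided exact binomial test: the mid-P value halves the probability of the observed outcome, reducing the conservatism of the exact test. This illustrates only the principle; the paper's mid-P conditional score test conditions on sufficient statistics in an ordinal dose-response model.

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def exact_and_midp(x, n, p0):
    """One-sided exact p-value P(X >= x) and its mid-P counterpart."""
    p_ge = sum(binom_pmf(k, n, p0) for k in range(x, n + 1))
    p_eq = binom_pmf(x, n, p0)
    return p_ge, p_ge - 0.5 * p_eq

exact, midp = exact_and_midp(8, 10, 0.5)
print(exact, midp)   # 0.0546875 and 0.03271484375
```

Because the mid-P value is always smaller than the exact one, its actual significance level sits closer to the nominal level, which is the behavior the empirical comparison above reports.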

11.
Estimating nonlinear dose‐response relationships in the context of pharmaceutical clinical trials is often a challenging problem. The data in these trials are typically variable and sparse, making this a hard inference problem, despite sometimes seemingly large sample sizes. Maximum likelihood estimates often fail to exist in these situations, while for Bayesian methods, prior selection becomes a delicate issue when no carefully elicited prior is available, as the posterior distribution will often be sensitive to the priors chosen. This article provides guidance on the use of functional uniform prior distributions in these situations. The essential idea of functional uniform priors is to employ a distribution that weights the functional shapes of the nonlinear regression function equally. By doing so one obtains a distribution that exhaustively and uniformly covers the underlying potential shapes of the nonlinear function. On the parameter scale these priors will often result in quite nonuniform prior distributions. This paper gives hints on how to implement these priors in practice and illustrates them in realistic trial examples in the context of Phase II dose‐response trials as well as Phase I first‐in‐human studies.

12.
Data for relationships between in vivo doses inferred from levels of hemoglobin (Hb) or DNA adducts and administered (by inhalation or injection) doses of ethylene oxide (EO) in mice, rats and humans are reviewed. At low absorbed doses or dose rates these relationships appear to be linear, whereas at higher dose rates deviations from linearity due to saturation kinetics of detoxification and of DNA repair as well as certain toxic effects have to be allowed for. If these factors are taken into consideration, a rather consistent picture is obtained for animal studies, with variation of less than a factor of 2 between estimates of adduct level increments or in vivo dose increments per unit of administered dose. Although the value for in vivo dose per unit of exposure dose (ppm-hour) in humans is uncertain because of unreliable data for the time-weighted average exposure level, the most likely value for this relationship, supported by data for ethene, agrees with data for the rodents. In the animal species, testis doses are approximately one-half of the blood doses inferred from Hb adducts.

13.
We discuss design and analysis of longitudinal studies after case–control sampling, wherein interest is in the relationship between a longitudinal binary response that is related to the sampling (case–control) variable, and a set of covariates. We propose a semiparametric modeling framework based on a marginal longitudinal binary response model and an ancillary model for subjects' case–control status. In this approach, the analyst must posit the population prevalence of being a case, which is then used to compute an offset term in the ancillary model. Parameter estimates from this model are used to compute offsets for the longitudinal response model. Examining the impact of population prevalence and ancillary model misspecification, we show that time‐invariant covariate parameter estimates, other than the intercept, are reasonably robust, but intercept and time‐varying covariate parameter estimates can be sensitive to such misspecification. We study design and analysis issues impacting study efficiency, namely: choice of sampling variable and the strength of its relationship to the response, sample stratification, choice of working covariance weighting, and degree of flexibility of the ancillary model. The research is motivated by a longitudinal study following case–control sampling of the time course of attention deficit hyperactivity disorder (ADHD) symptoms.
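The role of the posited prevalence can be sketched with the standard offset correction for case-control sampling: including an offset of the log sampling-odds ratio lets a logistic model fit to the sample recover population-scale quantities. This is a generic illustration of the offset mechanism, not the paper's specific ancillary-model construction; all numbers, including the notional source-population size, are made up.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

pi_pop = 0.05                    # posited population prevalence of being a case
n_case, n_control = 100, 100     # 1:1 case-control sample

# Sampling fractions implied by the design and the posited prevalence,
# for an assumed source population of size N:
N = 10_000
f_case = n_case / (pi_pop * N)
f_control = n_control / ((1 - pi_pop) * N)
offset = math.log(f_case / f_control)   # enters the linear predictor

# Sample log-odds minus the offset recovers the population log-odds:
print(round(logit(n_case / (n_case + n_control)) - offset, 6),
      round(logit(pi_pop), 6))
```

This is also why the intercept is the parameter most exposed to a misspecified prevalence: the offset shifts the intercept directly, while slopes of time-invariant covariates are largely untouched.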

14.
Little attention has been paid to the use of multi‐sample batch‐marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. However, Huggins et al. (2010) recently presented a pseudo‐likelihood for a multi‐sample batch‐marking study in which they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson‐type estimator. We have developed and maximized the likelihood for batch‐marking studies. We use data simulated from a Jolly–Seber‐type study and convert this to what would have been obtained from an extended batch‐marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch‐marking model to determine the efficiency of collecting and analyzing batch‐marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo‐likelihood method of Huggins et al. (2010). When faced with designing a batch‐marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size.

15.
We estimate the parameters of a stochastic process model for a macroparasite population within a host using approximate Bayesian computation (ABC). The immunity of the host is an unobserved model variable and only mature macroparasites at sacrifice of the host are counted. With very limited data, process rates are inferred reasonably precisely. Modeling involves a three variable Markov process for which the observed data likelihood is computationally intractable. ABC methods are particularly useful when the likelihood is analytically or computationally intractable. The ABC algorithm we present is based on sequential Monte Carlo, is adaptive in nature, and overcomes some drawbacks of previous approaches to ABC. The algorithm is validated on a test example involving simulated data from an autologistic model before being used to infer parameters of the Markov process model for experimental data. The fitted model explains the observed extra‐binomial variation in terms of a zero‐one immunity variable, which has a short‐lived presence in the host.
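ABC in its simplest rejection form can be sketched in a few lines: keep parameter draws whose simulated data fall close to the observed summary statistic. The paper uses a more efficient adaptive sequential Monte Carlo variant; the binomial toy model below merely stands in for the intractable macroparasite process, and all numbers are illustrative.

```python
import random

random.seed(7)
n_hosts, observed_count = 50, 31    # e.g. 31 of 50 outcomes observed

def simulate(p):
    """Toy stochastic model with a tractable simulator but (pretend)
    intractable likelihood."""
    return sum(random.random() < p for _ in range(n_hosts))

accepted = []
while len(accepted) < 500:
    p = random.random()                          # draw from a uniform prior
    if abs(simulate(p) - observed_count) <= 2:   # tolerance epsilon = 2
        accepted.append(p)

print(round(sum(accepted) / len(accepted), 3))   # ABC posterior mean
```

Sequential Monte Carlo ABC improves on this by shrinking the tolerance over a sequence of weighted particle populations, so far fewer simulations are wasted on hopeless parameter values.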

16.
Kottas A, Branco MD & Gelfand AE, Biometrics, 2002, 58(3):593–600
In cytogenetic dosimetry, samples of cell cultures are exposed to a range of doses of a given agent. In each sample at each dose level, some measure of cell disability is recorded. The objective is to develop models that explain cell response to dose. Such models can be used to predict response at unobserved doses. More important, such models can provide inference for unknown exposure doses given the observed responses. Typically, cell disability is viewed as a Poisson count, but in the present work, a more appropriate response is a categorical classification. In the literature, modeling in this case is very limited. What exists is purely parametric. We propose a fully Bayesian nonparametric approach to this problem. We offer comparison with a parametric model through a simulation study and the analysis of a real dataset modeling blood cultures exposed to radiation where classification is with regard to number of micronuclei per cell.

17.
Benchmark analysis is a widely used tool in biomedical and environmental risk assessment. Therein, estimation of minimum exposure levels, called benchmark doses (BMDs), that induce a prespecified benchmark response (BMR) is well understood for the case of an adverse response to a single stimulus. For cases where two agents are studied in tandem, however, the benchmark approach is far less developed. This paper demonstrates how the benchmark modeling paradigm can be expanded from the single‐agent setting to joint‐action, two‐agent studies. Focus is on continuous response outcomes. Extending the single‐exposure setting, representations of risk are based on a joint‐action dose–response model involving both agents. Based on such a model, the concept of a benchmark profile—a two‐dimensional analog of the single‐dose BMD at which both agents achieve the specified BMR—is defined for use in quantitative risk characterization and assessment.
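The single-agent BMD calculation that the benchmark profile generalizes can be sketched as follows: find the dose at which the fitted mean response changes from the control mean by the prespecified BMR. The Hill-type dose-response function, its parameters, and the absolute-change BMR definition are illustrative assumptions.

```python
def mean_response(d, e0=10.0, emax=6.0, ed50=2.0):
    """Illustrative Hill-type continuous dose-response curve."""
    return e0 + emax * d / (ed50 + d)

def bmd(bmr, lo=0.0, hi=100.0):
    """Bisection solve of mean_response(d) - mean_response(0) = bmr."""
    target = mean_response(0.0) + bmr
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_response(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Analytic check for this curve: d = ed50 * bmr / (emax - bmr)
print(round(bmd(1.0), 6))   # 2.0 * 1.0 / (6.0 - 1.0) = 0.4
```

In the two-agent setting of the paper, the scalar root above becomes a curve in the dose plane: the set of (d1, d2) pairs at which the joint-action model attains the same BMR.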

18.
In this article, we propose a Bayesian approach to dose–response assessment and the assessment of synergy between two combined agents. We consider the case of an in vitro ovarian cancer research study aimed at investigating the antiproliferative activities of four agents, alone and paired, in two human ovarian cancer cell lines. In this article, independent dose–response experiments were repeated three times. Each experiment included replicates at investigated dose levels including control (no drug). We have developed a Bayesian hierarchical nonlinear regression model that accounts for variability between experiments, variability within experiments (i.e., replicates), and variability in the observed responses of the controls. We use Markov chain Monte Carlo to fit the model to the data and carry out posterior inference on quantities of interest (e.g., median inhibitory concentration IC50). In addition, we have developed a method, based on Loewe additivity, that allows one to assess the presence of synergy with honest accounting of uncertainty. Extensive simulation studies show that our proposed approach is more reliable in declaring synergy compared to current standard analyses such as the median‐effect principle/combination index method (Chou and Talalay, 1984, Advances in Enzyme Regulation 22, 27–55), which ignore important sources of variability and uncertainty.
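The comparator method mentioned above, the median-effect/combination-index (CI) approach, can be sketched directly: CI = d1/Dx1 + d2/Dx2, where Dxi is the dose of agent i alone producing the combination's effect level fa. The single-agent median-effect parameters (Dm, m) below are illustrative, not fits from the study.

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation inverted: Dx = Dm * (fa / (1 - fa)) ** (1/m)."""
    return dm * (fa / (1 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, dm1, m1, dm2, m2):
    return (d1 / dose_for_effect(fa, dm1, m1)
            + d2 / dose_for_effect(fa, dm2, m2))

ci = combination_index(d1=0.5, d2=0.5, fa=0.6,
                       dm1=2.0, m1=1.0, dm2=3.0, m2=1.0)
print(round(ci, 3))   # CI < 1 suggests synergy, CI > 1 antagonism
```

The paper's criticism is visible here: CI is computed from point estimates only, so declaring synergy from CI < 1 carries no statement of uncertainty, whereas the Bayesian approach propagates the between-experiment and replicate variability into the synergy assessment.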

19.
Jing Qin & Yu Shen, Biometrics, 2010, 66(2):382–392
Length‐biased time‐to‐event data are commonly encountered in applications ranging from epidemiological cohort studies or cancer prevention trials to studies in labor economics. A longstanding statistical problem is how to assess the association of risk factors with survival in the target population given the observed length‐biased data. In this article, we demonstrate how to estimate these effects under the semiparametric Cox proportional hazards model. The structure of the Cox model is changed under length‐biased sampling in general. Although the existing partial likelihood approach for left‐truncated data can be used to estimate covariate effects, it may not be efficient for analyzing length‐biased data. We propose two estimating equation approaches for estimating the covariate coefficients under the Cox model. We use modern stochastic process and martingale theory to develop the asymptotic properties of the estimators. We evaluate the empirical performance and efficiency of the two methods through extensive simulation studies. We use data from a dementia study to illustrate the proposed methodology, and demonstrate the computational algorithms for point estimates, which can be directly linked to existing functions in S‐PLUS or R.

20.
Multistate capture‐recapture models are a powerful tool to address a variety of biological questions concerning dispersal and/or individual variability in wild animal populations. However, biologically meaningful models are often over‐parameterized and consequently some parameters cannot be estimated separately. Identifying which quantities are separately estimable is crucial for proper model selection based upon likelihood tests or information criteria and for the interpretation of the estimates obtained. We show how to investigate parameter redundancy in multistate capture‐recapture models, based on formal methods initially proposed by Catchpole and his associates for exponential family distributions (Catchpole, Freeman and Morgan, 1996, Journal of the Royal Statistical Society, Series B 58, 763–774). We apply their approach to three models of increasing complexity.
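The core of formal redundancy checks of this kind is a rank condition: if the derivative matrix of the model's mean structure with respect to the parameters has rank below the parameter count, some parameters are not separately estimable. The toy model below, where two cell probabilities both depend only on the product phi*p (the classic survival/capture confounding), is an illustrative stand-in for the multistate models in the paper.

```python
def jacobian(phi, p, h=1e-6):
    """Forward-difference Jacobian of the cell expectations
    with respect to (phi, p)."""
    def cells(phi, p):
        return [phi * p, phi * p * 0.5]   # both depend only on phi*p
    base = cells(phi, p)
    cols = []
    for dphi, dp in [(h, 0.0), (0.0, h)]:
        pert = cells(phi + dphi, p + dp)
        cols.append([(b - a) / h for a, b in zip(base, pert)])
    return cols   # columns: d(cells)/d(phi), d(cells)/d(p)

def rank2(cols):
    """Numerical rank of a 2-column, 2-row matrix via its determinant."""
    (a, c), (b, d) = cols
    det = a * d - b * c
    return 1 if abs(det) < 1e-6 else 2

print(rank2(jacobian(0.8, 0.3)))   # rank 1 < 2 parameters: phi, p confounded
```

In practice these derivative matrices are computed symbolically so that rank deficiency can be established for all parameter values, not just one numeric point; the numeric version here only illustrates the criterion.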


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号