Similar Articles
20 similar articles found
1.
In some clinical trials and in clinical practice, a therapeutic agent is administered repeatedly, and the dose is adjusted for each patient on the basis of repeatedly measured continuous responses so as to keep the response level within a target range. Because a lower dose tends to be selected for patients with a better outcome, simple summaries may wrongly show a better outcome for the lower dose, producing an incorrect dose–response relationship. In this study, we consider the dose–response relationship under these situations. We show that maximum-likelihood estimates are consistent, without modeling the dose-modification mechanism, when selection of the dose as a time-dependent covariate is based only on observed, not unobserved, responses, and when the measurements are generated based on the administered doses. We confirmed this property in simulation studies under several dose-modification mechanisms. We examined an autoregressive linear mixed effects model, which represents profiles approaching each patient's asymptote when identical doses are administered repeatedly. The model takes the previous dose history into account and provides the dose–response relationship of the asymptote as a summary measure. We also examined a linear mixed effects model assuming all responses are measured at steady state. In the simulation studies, the estimates from both models were unbiased under dose modification based on observed responses, but biased under dose modification based on unobserved responses. In conclusion, maximum-likelihood estimates of the dose–response relationship are consistent when dose modification is based only on observed responses.
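
A minimal sketch of this mechanism in Python: dose titration driven only by the observed response distorts a naive cross-sectional summary, while the Gaussian likelihood fit of the autoregressive structure (pooled OLS on the lagged response and current dose, which coincides with ML under normal errors here) recovers the asymptote slope. All parameter values and the 0.5-unit titration rule are hypothetical, and the model is a simplified version of the paper's autoregressive structure, not its exact specification.

    import numpy as np

    rng = np.random.default_rng(1)
    n, T = 500, 12
    rho, b0, b1, sd, target = 0.6, 1.0, 0.5, 0.3, 2.0   # asymptote = b0 + b1*dose

    y = np.empty((n, T)); d = np.zeros((n, T))
    y[:, 0] = b0 + rng.normal(0, sd, n)                  # pre-dose baseline
    for t in range(1, T):
        # dose modification based only on the OBSERVED previous response
        step = np.where(y[:, t - 1] < target, 0.5, -0.5)
        d[:, t] = np.clip(d[:, t - 1] + step, 0.0, 6.0)
        y[:, t] = rho * y[:, t - 1] + (1 - rho) * (b0 + b1 * d[:, t]) \
                  + rng.normal(0, sd, n)

    # naive cross-sectional summary at the last visit: distorted by titration
    naive = np.polyfit(d[:, -1], y[:, -1], 1)[0]

    # Gaussian ML for this AR model = pooled OLS of y_t on (1, y_{t-1}, d_t);
    # the asymptote dose-response slope is recovered as coef_d / (1 - coef_lag)
    Y = y[:, 1:].ravel()
    X = np.column_stack([np.ones(Y.size), y[:, :-1].ravel(), d[:, 1:].ravel()])
    _, a_lag, a_d = np.linalg.lstsq(X, Y, rcond=None)[0]
    print(f"naive last-visit slope: {naive:+.3f}")
    print(f"model-based asymptote slope: {a_d / (1 - a_lag):+.3f} (true {b1})")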

2.
In some occupational health studies, observations occur in both exposed and unexposed individuals. If the levels of all exposed individuals have been detected, a two-part zero-inflated log-normal model is usually recommended, which assumes that the data have a probability mass at zero for unexposed individuals and a continuous response at values greater than zero for exposed individuals. However, many quantitative exposure measurements are subject to left censoring because values fall below assay detection limits. A zero-inflated log-normal mixture model is suggested in this situation, since unexposed zeros are not distinguishable from exposed values below the detection limit. In this mixture distribution, the information contributed by values falling below a fixed detection limit is used only to estimate the probability of being unexposed. We consider sample size and statistical power calculation when comparing the median of the exposed measurements to a regulatory limit. We calculate the required sample size for the data presented in a recent paper comparing benzene TWA exposure data to a regulatory occupational exposure limit. A simulation study is conducted to investigate the performance of the proposed sample size calculation methods.
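
A hedged sketch of a simulation-based power calculation for this setting: data are drawn from a zero-inflated log-normal, left-censored at a detection limit DL, and the median of the exposed component is tested against a regulatory limit OEL with a likelihood-ratio test on the censored-mixture likelihood. The true parameter values, DL, and OEL below are invented for illustration; the paper's own sample-size formulas may differ.

    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(2)
    DL, OEL = 0.1, 0.5                       # detection limit, regulatory limit
    p, mu, sig = 0.6, np.log(0.25), 1.0      # exposure prob., log-median, log-sd

    def negll(theta, x, mu0=None):
        # theta = (logit p, mu, log sigma), or (logit p, log sigma) if mu fixed
        pe = 1.0 / (1.0 + np.exp(-theta[0]))
        m = theta[1] if mu0 is None else mu0
        s = np.exp(theta[-1])
        below = x < DL                       # unexposed zero OR exposed-but-censored
        ll_b = np.log((1 - pe) + pe * stats.norm.cdf((np.log(DL) - m) / s))
        z = (np.log(x[~below]) - m) / s
        ll_a = np.log(pe) + stats.norm.logpdf(z) - np.log(s * x[~below])
        return -(below.sum() * ll_b + ll_a.sum())

    def power(n, reps=200):
        rej = 0
        for _ in range(reps):
            x = np.where(rng.random(n) < p, rng.lognormal(mu, sig, n), 0.0)
            full = optimize.minimize(negll, [0.4, -1.5, 0.0], args=(x,),
                                     method="Nelder-Mead")
            null = optimize.minimize(negll, [0.4, 0.0], args=(x, np.log(OEL)),
                                     method="Nelder-Mead")
            rej += 2 * (null.fun - full.fun) > stats.chi2.ppf(0.95, 1)
        return rej / reps

    for n in (50, 100, 200):                 # grow n until power reaches ~0.8
        print(n, power(n))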

3.
Quan H, Capizzi T. Biometrics, 1999, 55(2): 460-462
Studies using a series of increasing doses of a compound, including a zero-dose control, are often conducted to study the effect of the compound on the response of interest. For a one-way design, Tukey et al. (1985, Biometrics 41, 295-301) suggested assessing trend by examining the slopes of regression lines under arithmetic, ordinal, and arithmetic-logarithmic dose scalings. They reported the smallest p-value of the three significance tests on the three slopes for safety assessments. Capizzi et al. (1992, Biometrical Journal 34, 275-289) suggested an adjusted trend test, which adjusts the p-value using a trivariate t-distribution, the joint distribution of the three slope estimators. In this paper, we propose an adjusted regression trend test suitable for two-way designs, particularly for multicenter clinical trials. In a step-down fashion, the proposed trend test can be applied to a multicenter clinical trial to compare each dose with the control. This sequential procedure is a closed testing procedure for a trend alternative; it therefore adjusts p-values and maintains the experimentwise error rate. Simulation results show that the step-down trend test is overall more powerful than a step-down least-significant-difference test.
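
A small illustration of the unadjusted version of this procedure: compute the regression slope p-value under each of the three dose scalings and report the minimum. The exact trivariate-t adjustment of Capizzi et al. requires the joint distribution of the slope estimators; a Bonferroni bound stands in for it here, and the simulated data and the pseudo-log score for the zero dose are illustrative conventions, not the papers' data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    doses = np.array([0.0, 1.0, 2.0, 4.0])        # zero-dose control + three doses
    y = np.concatenate([rng.normal(m, 1.0, 20) for m in (0.0, 0.2, 0.4, 0.6)])

    # three dose scalings; for the log scaling the zero dose is assigned a
    # pseudo-value by extending the log spacing downward (a common convention)
    arith = np.repeat(doses, 20)
    ordinal = np.repeat(np.arange(doses.size, dtype=float), 20)
    logd = np.log(doses[1:])
    arithlog = np.repeat(np.r_[2 * logd[0] - logd[1], logd], 20)

    pvals = [stats.linregress(x, y).pvalue for x in (arith, ordinal, arithlog)]
    print("slope p-values:", np.round(pvals, 4))
    print("min p:", min(pvals))
    # Capizzi et al. adjust this minimum via the trivariate t joint distribution
    # of the three slope estimators; a conservative stand-in is Bonferroni:
    print("Bonferroni-adjusted:", min(1.0, 3 * min(pvals)))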

4.
We propose a multiple comparison procedure to identify the minimum effective dose level by sequentially comparing each dose level with the zero-dose level in a dose-finding test. If the minimum effective dose level can be identified at an early stage of the sequential test, the procedure can be terminated after only a few group observations up to that dose level. The procedure is therefore attractive from an economic point of view when obtaining observations is costly. We present an integral formula to determine the critical values that satisfy a predefined type I familywise error rate, and we show how to determine the sample size required to guarantee the power of the test. We compare the power of the test and the required sample size for various configurations of the population means in simulation studies, and we apply our sequential procedure to a dose-response test in a case study.
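
One simple variant of the step-down idea, sketched in Python: compare each dose with the zero-dose control starting from the highest dose and stop at the first non-rejection, which closed testing permits at unadjusted level alpha. The group means and sizes are invented, and a per-step t-test replaces the paper's exact critical values from the integral formula.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    means = [0.0, 0.1, 0.8, 1.2]                 # zero dose + three doses; toy values
    groups = [rng.normal(m, 1.0, 25) for m in means]

    # step-down closed testing: start at the highest dose and stop at the
    # first non-rejection; each step may then be run at unadjusted level alpha
    alpha, med = 0.05, None
    for j in range(len(groups) - 1, 0, -1):
        p = stats.ttest_ind(groups[j], groups[0], alternative="greater").pvalue
        if p > alpha:
            break
        med = j
    print("estimated minimum effective dose level:", med)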

5.
Researchers are often interested in predicting outcomes, detecting distinct subgroups of their data, or estimating causal treatment effects. Pathological data distributions that exhibit skewness and zero-inflation complicate these tasks, requiring highly flexible, data-adaptive modeling. In this paper, we present a multipurpose Bayesian nonparametric model for continuous, zero-inflated outcomes that simultaneously predicts structural zeros, captures skewness, and clusters patients with similar joint data distributions. The flexibility of our approach yields predictions that capture the joint data distribution better than commonly used zero-inflated methods. Moreover, we demonstrate that our model can be coherently incorporated into a standardization procedure for computing causal effect estimates that are robust to such data pathologies. Uncertainty at all levels of this model flows through to the causal effect estimates of interest, allowing easy point estimation, interval estimation, and posterior predictive checks verifying positivity, a required causal identification assumption. Our simulation results show point estimates to have low bias and interval estimates to have close to nominal coverage under complicated data settings. Under simpler settings, these results hold while incurring lower efficiency loss than comparator methods. We use our proposed method to analyze zero-inflated inpatient medical costs among endometrial cancer patients receiving either chemotherapy or radiation therapy in the SEER-Medicare database.

6.
In this paper we introduce a simple framework which provides a basis for estimating parameters and testing statistical hypotheses in complex models. The only assumption made in the model describing the process under study is that the deviations of the observations from the model have a multivariate normal distribution. The application of the statistical techniques presented in this paper may have considerable utility in the analysis of a wide variety of complex biological and epidemiological models. To our knowledge, the model and methods described here have not previously been published in the area of theoretical immunology.

7.
Activation of naive T and B cells occurs only within the context of organized lymphoid tissue. Thus, the continuous recirculation of mature lymphocytes is crucial for the development of primary immune response to foreign Ags. We have previously shown that low levels of IFN-gamma inhibit homing of B cells to the secondary lymphoid organs. In this study, we demonstrate that similarly low doses of IFN-gamma down-regulate integrin-mediated adhesion and migration of naive T and Th2 cells, and have a profound effect on the in vivo homing of naive T cells to the lymph nodes. Moreover, we show that these low doses of IFN-gamma have anti-inflammatory effects in an in vivo asthma model. Thus, in contrast to the proinflammatory effects of IFN-gamma at relatively high concentrations, low dose IFN-gamma appears to exert global suppressory effects on T cell trafficking and may have clinical application as an anti-inflammatory agent.

8.
Obtaining a correct dose–response relationship for radiation-induced cancer after radiotherapy presents a major challenge for epidemiological studies. The purpose of this paper is to gain a better understanding of the associated uncertainties. To accomplish this goal, some aspects of an epidemiological study on breast cancer following radiotherapy of Hodgkin's disease were simulated with Monte Carlo methods. It is demonstrated that although the doses to the breast volume are calculated by one treatment plan, the locations and sizes of the induced secondary breast tumours can be simulated and, based on these simulated locations and sizes, the absorbed doses at the site of tumour incidence can also be simulated. For the simulations of point dose at tumour site, linear and non-linear mechanistic models which predict risk of cancer induction as a function of dose were applied randomly to the treatment plan. These simulations provided the predicted dose for each second tumour and each simulated tumour size. The predicted-dose–response characteristic from the analysis of the simulated epidemiological study was analysed. If a linear dose–response relationship for cancer induction was applied to calculate the theoretical doses at the simulated tumour sites, all Monte Carlo realizations of the epidemiological study yielded strong evidence for a resulting linear risk to predicted-dose response. However, if a non-linear dose–response of cancer induction was applied to calculate the theoretical doses, the Monte Carlo simulated epidemiological study resulted in a non-linear risk to predicted-dose–response relationship only if the tumour size was small (<1.5 cm). If the diagnosed breast tumours exceeded an average diameter of 1.5 cm, an applied non-linear theoretical-dose–response relationship for second cancer falsely resulted in strong evidence for a linear predicted-dose relationship from the epidemiological study realizations. For a typical distribution of breast cancer sizes, the model selection probability for a resulting predicted-dose linear model was 61% although a non-linear theoretical-dose–response relationship for cancer induction had been applied. The results of this study, therefore, provide evidence that the shapes of epidemiologically obtained dose–response relationships for cancer induction can be biased by the finite size of the diagnosed second tumour, even though the epidemiological study was done correctly.

9.
Na Li, Harry Yang. Biologicals, 2012, 40(6): 439-444
Since most biological products are derived from living cell culture, viral contaminants may be transmitted to the final product. Regulatory guidance requires that viral clearance studies be conducted to demonstrate the capacity of the production process for viral removal and inactivation. The key is accurate estimation of the viral titer and the reduction factor (RF), defined as the difference in log10 virus titers before and after each purification step. Darling et al. (1998) [1] suggested a method for the analysis of clearance studies; however, it cannot provide an estimate of the RF when the post-process viral counts are zero. In this paper, we provide theoretical justification of the method based on the normal distribution and discuss caveats regarding the degrees of freedom. We propose two alternative methods under the assumption that the number of plaques follows a Poisson distribution. Through simulation studies, the Poisson-based methods are shown to provide better estimates of viral titers. Under the Poisson model, we also derive a method to calculate exact confidence limits for the viral titer and the reduction factor even when the post-process viral counts are zero. The use of the methods is illustrated through numerical examples.
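
A sketch of the exact-limit idea for the zero-count case, assuming plaque counts are Poisson: the Garwood chi-square inversion gives a finite upper confidence limit even when zero plaques are observed, which in turn yields a conservative lower limit on the log10 reduction factor. The counts and volumes below are hypothetical, and combining the two one-sided limits this way is conservative rather than the paper's exact joint construction.

    import numpy as np
    from scipy.stats import chi2

    def poisson_exact_ci(k, alpha=0.05):
        # Garwood exact confidence interval for a Poisson mean, valid even at k = 0
        lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
        hi = chi2.ppf(1 - alpha / 2, 2 * k + 2) / 2
        return lo, hi

    # hypothetical clearance step: 52 plaques counted in 1 mL pre-process,
    # 0 plaques in 10 mL of post-process material
    k_pre, v_pre, k_post, v_post = 52, 1.0, 0, 10.0
    lo_pre, _ = poisson_exact_ci(k_pre)
    _, hi_post = poisson_exact_ci(k_post)

    print(f"pre titer estimate: {k_pre / v_pre:.1f} pfu/mL")
    print(f"post titer 95% upper limit: {hi_post / v_post:.3f} pfu/mL")
    # conservative lower confidence limit on the log10 reduction factor
    rf_lo = np.log10((lo_pre / v_pre) / (hi_post / v_post))
    print(f"reduction factor lower limit: {rf_lo:.2f} log10")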

10.
Estimation of a population size by means of capture-recapture techniques is an important problem occurring in many areas of the life and social sciences. We consider the frequencies-of-frequencies situation, in which a count variable summarizes how often a unit has been identified in the target population of interest. The distribution of this count variable is zero-truncated, since zero identifications do not occur in the sample. As an application we consider the surveillance of scrapie in Great Britain. In this case study, holdings with scrapie that are not identified (zero counts) do not enter the surveillance database. The count variable of interest is the number of scrapie cases per holding. A common model for count distributions is the Poisson distribution and, to adjust for potential heterogeneity, a discrete mixture of Poisson distributions is used. Mixtures of Poissons usually provide an excellent fit, as demonstrated in the application of interest. However, as recently demonstrated, mixtures also suffer from the so-called boundary problem, resulting in overestimation of the population size. We suggest selecting the mixture model on the basis of the Bayesian Information Criterion. This strategy is further refined by employing a bagging procedure, leading to a series of estimates of the population size. Using the median of this series, highly influential size estimates are avoided. Limited simulation studies show that the procedure leads to estimates with remarkably small bias.
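
The one-component skeleton of this estimator, sketched in Python: fit a zero-truncated Poisson to the observed counts by maximum likelihood and scale up by the estimated detection probability (the Horvitz–Thompson step). The counts are toy data; the paper's full procedure instead fits Poisson mixtures selected by BIC and takes the median over bagged bootstrap estimates.

    import numpy as np
    from scipy import optimize

    # toy surveillance counts: cases per *identified* holding (zeros unobserved)
    counts = np.repeat([1, 2, 3, 4], [18, 7, 3, 1])
    n = counts.size

    def negll(lam):
        # zero-truncated Poisson log-likelihood (constant terms dropped)
        return -(np.sum(counts * np.log(lam) - lam) - n * np.log1p(-np.exp(-lam)))

    lam = optimize.minimize_scalar(negll, bounds=(1e-6, 10), method="bounded").x
    N_hat = n / (1 - np.exp(-lam))     # Horvitz–Thompson population-size estimate
    print(f"lambda = {lam:.3f}, estimated total holdings = {N_hat:.1f}")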

11.
S M Snapinn, R D Small. Biometrics, 1986, 42(3): 583-592
Regression models of the type proposed by McCullagh (1980, Journal of the Royal Statistical Society, Series B 42, 109-142) are a general and powerful method of analyzing ordered categorical responses, assuming categorization of an (unknown) continuous response of a specified distribution type. Tests of significance with these models are generally based on likelihood-ratio statistics that have asymptotic χ² distributions; therefore, investigators with small data sets may be concerned with the small-sample behavior of these tests. In a Monte Carlo sampling study, significance tests based on the ordinal model are found to be powerful, but a modified test procedure (using an F distribution with a finite number of degrees of freedom for the denominator) is suggested so that the empirical significance level agrees more closely with the nominal significance level in small-sample situations. We also discuss the parallels between an ordinal regression model assuming underlying normality and conventional multiple regression. We illustrate the model with two data sets: one from a study investigating the relationship between phosphorus in soil and plant-available phosphorus in corn grown in that soil, and the other from a clinical trial comparing analgesic drugs.
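
A brief sketch of the underlying-normal ordinal model using statsmodels' OrderedModel (available in recent statsmodels versions; the data, slope, and cut-points below are simulated assumptions): a continuous latent response is categorized into ordered levels and fit with a probit link, paralleling the McCullagh-type model the paper studies.

    import numpy as np
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(7)
    n = 60                                    # deliberately small sample
    x = rng.normal(size=n)                    # e.g., soil phosphorus
    latent = 0.8 * x + rng.normal(size=n)     # unobserved continuous response
    y = np.digitize(latent, [-0.5, 0.5])      # categorized into 3 ordered levels

    res = OrderedModel(y, x[:, None], distr="probit").fit(method="bfgs", disp=False)
    print(res.summary())                      # slope estimate with test statistics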

12.
An adaptive response is a response to a stress such as radiation exposure that results in a lower than expected biological response. We describe an adaptive response to X radiation in mouse prostate using the pKZ1 chromosomal inversion assay. pKZ1 mice were treated with a priming dose of 0.001, 0.01, 1 or 10 mGy followed 4 h later by a 1000-mGy challenge dose. All priming doses caused a similar reduction in inversions compared to the 1000-mGy group, supporting the hypothesis that the adaptive response is the result of an on/off mechanism. The adaptive response was induced by a priming dose of 0.001 mGy, which is three orders of magnitude lower than has been reported previously. The adaptive responses completely protected against the inversions that would have been induced by a single 1000-mGy dose as well as against a proportion of spontaneous background inversions. The distribution of inversions across prostate gland cross sections after priming plus challenge irradiation suggested that adaptive responses were predominantly due to reduced low-dose radiation-induced inversions rather than to reduced high-dose radiation-induced inversions. This study used radiation doses relevant to human exposure.

13.
In this article, we propose a Bayesian approach to dose–response assessment and the assessment of synergy between two combined agents. We consider an in vitro ovarian cancer research study aimed at investigating the antiproliferative activities of four agents, alone and paired, in two human ovarian cancer cell lines. In the study, independent dose–response experiments were repeated three times, and each experiment included replicates at the investigated dose levels, including control (no drug). We developed a Bayesian hierarchical nonlinear regression model that accounts for variability between experiments, variability within experiments (i.e., replicates), and variability in the observed responses of the controls. We use Markov chain Monte Carlo to fit the model to the data and carry out posterior inference on quantities of interest (e.g., the median inhibitory concentration IC50). In addition, we developed a method, based on Loewe additivity, that allows one to assess the presence of synergy with honest accounting of uncertainty. Extensive simulation studies show that our proposed approach is more reliable in declaring synergy than current standard analyses such as the median-effect principle/combination index method (Chou and Talalay, 1984, Advances in Enzyme Regulation 22, 27–55), which ignore important sources of variability and uncertainty.
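
A plug-in (non-Bayesian) skeleton of the two ingredients, with invented dose–response data: fit a Hill/log-logistic curve per agent to obtain the IC50, then compute the Loewe interaction index for a combination, where values below 1 suggest synergy. The paper embeds both steps in a hierarchical Bayesian model so that all variance components propagate into the synergy call; this sketch ignores that uncertainty.

    import numpy as np
    from scipy.optimize import curve_fit

    def hill(d, e0, einf, ic50, h):
        # 4-parameter log-logistic inhibition curve
        return einf + (e0 - einf) / (1.0 + (d / ic50) ** h)

    def inv_hill(E, e0, einf, ic50, h):
        # dose of a single agent producing effect E
        return ic50 * ((e0 - E) / (E - einf)) ** (1.0 / h)

    doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
    rng = np.random.default_rng(13)
    yA = hill(doses, 1.0, 0.1, 0.3, 1.2) + rng.normal(0, 0.02, doses.size)
    yB = hill(doses, 1.0, 0.1, 0.8, 1.0) + rng.normal(0, 0.02, doses.size)
    pA, _ = curve_fit(hill, doses, yA, p0=[1.0, 0.1, 0.5, 1.0])
    pB, _ = curve_fit(hill, doses, yB, p0=[1.0, 0.1, 0.5, 1.0])
    print(f"IC50(A) = {pA[2]:.3f},  IC50(B) = {pB[2]:.3f}")

    # Loewe interaction index for a combination (dA, dB) observed to give effect E
    dA, dB, E = 0.15, 0.30, 0.5
    tau = dA / inv_hill(E, *pA) + dB / inv_hill(E, *pB)
    print(f"interaction index = {tau:.2f}  (< 1 suggests synergy)")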

14.
Species-occurrence data sets tend to contain a large proportion of zero (absence) values, i.e., they are zero-inflated. Statistical inference using such data sets is likely to be inefficient or to lead to incorrect conclusions unless the data are treated carefully. In this study, we propose a new modeling method to overcome the problems caused by zero-inflated data sets that combines a regression model with a machine-learning technique: a generalized linear model (GLM), which is widely used in ecology, and bootstrap aggregation (bagging). We established distribution models of Vincetoxicum pycnostelma (a vascular plant) and Ninox scutulata (an owl), both of which are endangered and have zero-inflated distribution patterns, using our new method and a traditional GLM, and compared model performances. We also modeled four theoretical data sets containing different ratios of presence/absence values using the new and traditional methods and compared model performances. For the distribution models, our new method performed well compared to the traditional GLM. After bagging, area under the curve (AUC) values were almost the same as with the traditional method, but sensitivity values were higher. Additionally, our new method showed high sensitivity compared to the traditional GLM when modeling a theoretical data set containing a large proportion of zero values. These results indicate that our new method has high predictive ability for presence data when analyzing zero-inflated data sets. Generally, predicting presence is more difficult than predicting absence. Our new modeling method has potential for advancing species distribution modeling.
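
One plausible reading of the GLM-plus-bagging scheme, sketched with scikit-learn on simulated zero-inflated data (the balanced resampling rule and all parameters are assumptions, not necessarily the authors' exact algorithm): each bootstrap replicate draws equal numbers of presences and absences, a logistic GLM is fit, and predicted probabilities are averaged. AUC stays roughly unchanged while sensitivity rises, mirroring the reported pattern.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(14)
    n = 2000
    X = rng.normal(size=(n, 3))                       # environmental covariates
    eta = -4.0 + 1.5 * X[:, 0] + 0.8 * X[:, 1]
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-eta))    # ~4% presences: zero-inflated

    # balanced-bootstrap bagging of a GLM: each replicate resamples the presences
    # and an equal number of absences; predicted probabilities are averaged
    pres, absn = np.where(y)[0], np.where(~y)[0]
    probs, B = np.zeros(n), 100
    for _ in range(B):
        idx = np.r_[rng.choice(pres, pres.size), rng.choice(absn, pres.size)]
        probs += LogisticRegression().fit(X[idx], y[idx]).predict_proba(X)[:, 1]
    probs /= B

    plain = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    for name, pr in (("plain GLM ", plain), ("bagged GLM", probs)):
        sens = (pr[y] > 0.5).mean()                   # sensitivity at the 0.5 cutoff
        print(f"{name}: AUC = {roc_auc_score(y, pr):.3f}, sensitivity = {sens:.2f}")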

15.
Generalized linear mixed models (GLMMs) have become a frequently used tool for the analysis of non-Gaussian longitudinal data. Estimation is based on maximum likelihood theory, which assumes that the underlying probability model is correctly specified. Recent research shows that the results obtained from these models are not always robust against departures from the assumptions on which they are based. In the present work, we used simulations with a logistic random-intercept model to study the impact of misspecifying the random-effects distribution on the type I and II errors of tests for the mean structure in GLMMs. We found that misspecification can either increase or decrease the power of the tests, depending on the shape of the underlying random-effects distribution, and that it can considerably inflate the type I error rate. Additionally, we found a theoretical result stating that whenever a subset of fixed-effects parameters not included in the random-effects structure equals zero, the corresponding maximum likelihood estimator consistently estimates zero. This implies that, under certain conditions, a significant effect can be considered a reliable result even if the random-effects distribution is misspecified.

16.
In epidemiologic studies, measurement error in the exposure variable can have a detrimental effect on the power of hypothesis tests for detecting the impact of exposure on the development of a disease. To adjust for misclassification in hypothesis testing involving a misclassified binary exposure variable, we consider a retrospective case–control scenario under the assumption of nondifferential misclassification. We develop a test under the Bayesian approach from a posterior distribution generated by an MCMC algorithm and a normal prior under realistic assumptions. We compared this test with an equivalent likelihood ratio test developed under the frequentist approach, using various simulated settings, in the presence or absence of validation data. In our simulations, we considered varying degrees of sensitivity, specificity, sample size, exposure prevalence, and proportions of unvalidated and validated data. In these scenarios, our simulation study shows that the adjusted model (with validation data) is always better than the unadjusted model (without validation data). However, we showed that an exception is possible in the fixed-budget scenario, where collection of the validation data incurs a much higher cost. We also showed that the Bayesian and frequentist hypothesis testing procedures reach the same conclusions for the scenarios under consideration; the Bayesian approach is, however, computationally more stable in rare-exposure contexts. A real case–control study is used to illustrate the application of the hypothesis testing procedures under consideration.
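
A sketch of the frequentist comparator described above, with sensitivity and specificity treated as known (the paper informs them through validation data or priors instead) and with invented counts: under nondifferential error, the apparent exposure probability is a fixed transform of the true one, and a likelihood-ratio test compares exposure prevalence between cases and controls on the corrected scale.

    import numpy as np
    from scipy import optimize, stats

    Se, Sp = 0.85, 0.90                 # assumed known; the paper estimates them
    obs = {"cases": (120, 300), "controls": (80, 300)}   # (apparent exposed, n)

    def star(p):
        # apparent exposure probability under nondifferential misclassification
        return Se * p + (1 - Sp) * (1 - p)

    def negll(ps):
        return -sum(stats.binom.logpmf(k, n, star(p))
                    for p, (k, n) in zip(ps, obs.values()))

    full = optimize.minimize(negll, [0.3, 0.2], bounds=[(0.001, 0.999)] * 2)
    null = optimize.minimize(lambda q: negll([q[0], q[0]]), [0.25],
                             bounds=[(0.001, 0.999)])
    lrt = 2.0 * (null.fun - full.fun)
    print("corrected exposure prevalences:", full.x.round(3))
    print(f"LRT p-value: {stats.chi2.sf(lrt, 1):.4f}")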

17.
Recently, three different models have been proposed to explain the distribution of abundances in natural communities: the self‐similarity model; the zero‐sum ecological drift model; and the occasional–frequent species model of Magurran and Henderson. Here we study patterns of relative abundance in a large community of forest Hymenoptera and show that it is indeed possible to divide the community into a group of frequent species and a group of occasional species. In accordance with the third model, frequent species followed a lognormal distribution. Relative abundances of the occasional species could be described by the self‐similarity model, but did not follow a log‐series as proposed by the occasional–frequent model. The zero‐sum ecological drift model makes no explicit predictions about frequent and occasional species, but the abundance distributions of the hymenopteran species did not show the excess of rare species predicted by this model. Separate fits of this model to the frequent and to the occasional species were worse than the respective fits of the lognormal and the self‐similarity model.

18.
In this study we developed a novel model of the deflection of primary cilia experiencing fluid flow, accounting for phenomena not previously considered. Specifically, we developed a large-rotation formulation that accounts for rotation at the base of the cilium, the initial shape of the cilium, and fluid drag at high deflection angles. We utilised this model to analyse full 3D data-sets of primary cilia deflecting under fluid flow acquired with high-speed confocal microscopy. We found a wide variety of previously unreported bending shapes and behaviours, and we also analysed post-flow relaxation patterns. Results from our combined experimental and theoretical approach suggest that the average flexural rigidity of primary cilia might be higher than previously reported (Schwartz et al. 1997, Am J Physiol. 272(1 Pt 2):F132–F138). In addition, our findings indicate that the mechanics of primary cilia are richly varied and that mechanisms may exist to alter their mechanical behaviour.

19.
Many biological networks respond to various inputs through a common signaling molecule that triggers distinct cellular outcomes. One potential mechanism for achieving specific input–output relationships is to trigger distinct dynamical patterns in response to different stimuli. Here we focused on the dynamics of p53, a tumor suppressor activated in response to cellular stress. We quantified the dynamics of p53 in individual cells in response to UV and observed a single pulse that increases in amplitude and duration in proportion to the UV dose. This graded response contrasts with the previously described series of fixed pulses in response to γ‐radiation. We further found that while γ‐triggered p53 pulses are excitable, the p53 response to UV is not excitable and depends on continuous signaling from the input‐sensing kinases. Using mathematical modeling and experiments, we identified feedback loops that contribute to specific features of the stimulus‐dependent dynamics of p53, including excitability and input‐duration dependency. Our study shows that different stresses elicit different temporal profiles of p53, suggesting that modulation of p53 dynamics might be used to achieve specificity in this network.
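
A toy negative-feedback model, not the authors' fitted model, illustrating how a sustained input can produce a single pulse whose amplitude grows with dose: p53 production is driven by a constant UV-signaling term S and degraded via p53-induced Mdm2. All rate constants below are invented.

    import numpy as np
    from scipy.integrate import solve_ivp

    def p53_mdm2(t, z, S, beta=1.0, k=2.0, gamma=1.0, delta=0.8):
        p, m = z
        dp = beta * S - k * m * p        # production from sustained UV signal S,
        dm = gamma * p - delta * m       # degradation via p53-induced Mdm2
        return [dp, dm]

    for S in (0.5, 1.0, 2.0):            # increasing UV dose = stronger input
        sol = solve_ivp(p53_mdm2, (0.0, 30.0), [0.0, 0.0], args=(S,), max_step=0.05)
        i = sol.y[0].argmax()
        print(f"S = {S}: peak p53 = {sol.y[0][i]:.2f} at t = {sol.t[i]:.1f}")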

20.
Genetic differences between Northeast Asian (NEA) and Southeast Asian (SEA) populations have been observed in numerous studies. At the among-population level, despite a clear north–south differentiation observed for many genetic markers, it has been debated whether the differences are abrupt or follow a continuous pattern. At the within-population level, whether NEA or SEA populations have higher genetic diversity is also highly controversial. In this study, we analyzed a large set of HLA data from East Asia in order to map the genetic variation among and within populations on this continent and to clarify the distribution pattern of HLA lineages and alleles. We observed a genetic differentiation between NEA and SEA populations following a continuous pattern from north to south, and we show a significant and continuous decrease of HLA diversity in the same direction. This continuity is shaped by clinal distributions of many HLA lineages and alleles, with frequencies increasing or decreasing along the latitude. These results bring new evidence in favor of the "overlapping model" proposed previously for East Asian peopling history, whereby modern humans migrated eastward from western Eurasia via two independent routes along each side of the Himalayas and later overlapped in East Asia across open land areas. Our study strongly suggests that intensive gene flow between NEA and SEA populations occurred, probably over a very long period whose exact duration remains to be estimated, and shaped the latitude-related continuous pattern of genetic variation and the peculiar HLA lineage and allele distributions observed on this continent.
