Similar Documents
20 similar documents found
1.
Building on the Monte Carlo simulation method and the photon-transport model of tissue optics, this paper proposes a new image segmentation algorithm. The algorithm reduces the complex problem of image segmentation to a large number of simple random photon-transport experiments and extracts the target region by analyzing the transport behavior. In the subsequent experiments, a simple optical transport model was constructed for the task of cell-nucleus extraction and used to segment both synthetic and real images. The segmentation results on the synthetic images demonstrate the feasibility of the algorithm and illustrate some of its advantages, while the results on the real images reveal its shortcomings; the remaining problems and possible improvements to the algorithm are analyzed.
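The abstract does not spell out the transport rule, so the following Python fragment is only a minimal sketch of the general idea: photons released from a seed pixel random-walk over the image and are absorbed where the intensity departs from the seed value, so visit counts concentrate inside the homogeneous target region. The function name, the absorption rule and all parameters (n_photons, sigma, the percentile threshold) are invented for illustration and are not the paper's algorithm.

import numpy as np

def photon_segment(image, seed, n_photons=5000, max_steps=200, sigma=20.0, rng=None):
    """Illustrative photon-random-walk segmentation (not the paper's exact method)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    visits = np.zeros((h, w), dtype=np.int64)
    seed_val = float(image[seed])
    steps = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])

    for _ in range(n_photons):
        y, x = seed
        for _ in range(max_steps):
            dy, dx = steps[rng.integers(4)]
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                break                       # photon leaves the image
            # absorption probability grows with intensity mismatch
            p_absorb = 1.0 - np.exp(-abs(float(image[ny, nx]) - seed_val) / sigma)
            if rng.random() < p_absorb:
                break                       # photon absorbed at the region boundary
            y, x = ny, nx
            visits[y, x] += 1

    # pixels visited often enough are taken as the segmented region
    return visits > np.percentile(visits[visits > 0], 25)

# toy usage: a bright square on a dark background
img = np.zeros((64, 64)) + 10
img[20:40, 20:40] = 200
mask = photon_segment(img, seed=(30, 30))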

2.
Substantial evidence exists from epidemiological and mechanistic studies supporting a sublinear or threshold dose–response relationship for the carcinogenicity of ingested arsenic; nonetheless, current regulatory agency evaluations have quantified arsenic risks using default, generic risk assessment procedures that assume a linear, no-threshold dose–response relationship. The resulting slope factors predict risks from U.S. background arsenic exposures that exceed certain regulatory levels of concern, an outcome that presents challenges for risk communication and risk management decisions. To better reflect the available scientific evidence, this article presents the results of a Margin of Exposure (MOE) analysis to characterize risks associated with typical and high-end background exposures of the U.S. population to arsenic from food, water, and soil. MOE values were calculated by comparing a no-observed-adverse-effect level (NOAEL) derived from the epidemiological literature with exposure estimates generated using a probabilistic (Monte Carlo) model. The plausibility and conservative nature of the exposure and risk estimates evaluated in this analysis are supported by sensitivity and uncertainty analyses and by comparing predicted urinary arsenic concentrations with empirical data. Using the more scientifically supported MOE approach, the analysis presented in this article indicates that typical and high-end background exposures to inorganic arsenic in U.S. populations do not present elevated risks of carcinogenicity.
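As a rough illustration of the probabilistic (Monte Carlo) exposure model and the MOE calculation described above, the sketch below samples hypothetical food, water and soil intake distributions and divides a hypothetical NOAEL by the simulated total exposure; none of the distribution parameters or the NOAEL value are taken from the article.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # Monte Carlo iterations

# Illustrative (not the article's) intake distributions, µg inorganic As/kg-day
food  = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)
water = rng.lognormal(mean=np.log(0.03), sigma=0.9, size=n)
soil  = rng.lognormal(mean=np.log(0.005), sigma=1.0, size=n)

exposure = food + water + soil   # total daily background dose

noael = 0.5                      # hypothetical NOAEL, µg/kg-day, for illustration only
moe = noael / exposure           # margin of exposure per simulated individual

print("median MOE:", np.median(moe))
print("MOE at high-end (95th pct exposure):", noael / np.quantile(exposure, 0.95))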

3.
Summary We provide methods that can be used to obtain more accurate environmental exposure assessment. In particular, we propose two modeling approaches to combine monitoring data at point level with numerical model output at grid cell level, yielding improved prediction of ambient exposure at point level. Extending our earlier downscaler model (Berrocal, V. J., Gelfand, A. E., and Holland, D. M. (2010b). A spatio-temporal downscaler for outputs from numerical models. Journal of Agricultural, Biological and Environmental Statistics 15, 176–197), these new models are intended to address two potential concerns with the model output. One recognizes that there may be useful information in the outputs for grid cells that are neighbors of the one in which the location lies. The second acknowledges potential spatial misalignment between a station and its putatively associated grid cell. The first model is a Gaussian Markov random field smoothed downscaler that relates monitoring station data and computer model output via the introduction of a latent Gaussian Markov random field linked to both sources of data. The second model is a smoothed downscaler with spatially varying random weights defined through a latent Gaussian process and an exponential kernel function, which yields, at each site, a new variable on which the monitoring station data are regressed with a spatial linear model. We applied both methods to daily ozone concentration data for the Eastern US during the summer months of June, July and August 2001, obtaining gains of 5% and 15%, respectively, in overall predictive mean square error over our earlier downscaler model (Berrocal et al., 2010b). Perhaps more importantly, the predictive gain is greater at hold-out sites that are far from monitoring sites.
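A schematic of the downscaler regression these models extend, written as a hedged reconstruction in generic notation rather than the authors' exact specification:

\[
\tilde{x}(s) = \sum_{j} w_j(s)\, x(B_j), \qquad w_j(s) \ge 0,\ \ \sum_j w_j(s) = 1,
\]
\[
Y(s) = \beta_0(s) + \beta_1(s)\,\tilde{x}(s) + \varepsilon(s),
\]

where Y(s) is the monitoring observation at point s, x(B_j) is the numerical-model output for grid cell B_j, and the weights w_j(s) are induced either by the latent Gaussian Markov random field (first model) or by the latent Gaussian process with an exponential kernel (second model).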

4.
Using probabilistic analysis may be very useful for risk management in developing countries, where information, resources, and technical expertise are often scarce. Currently, most regulatory agencies recommend using deterministic approaches for the analysis of problems relating to decision-making. However, this approach does not incorporate uncertainty in the variables, nor the propagation of uncertainty through the different processes in which they are involved. The complexity of the problem is therefore arbitrarily reduced, and valuable information that could be useful for proposing realistic policies is not considered. This article compares the results of a deterministic analysis with those of a probabilistic one for regulating arsenic in Chile, and differences for public policy are established as a result of building uncertainty into the analysis. It is concluded that the use of a deterministic approach can lead to higher risks than necessary and that probabilistic results can help the regulator negotiate stricter standards. Alternatively, the regulator may end up imposing much higher costs on sources than originally expected, as these will be forced to use expensive technology to comply consistently with a given standard.

5.
This paper presents an extended threshold model for analyzing ordered categorical data. The model admits interactions between the position of the thresholds and the levels of the effective factors. These interactions are described following the approach of Milliken and Graybill (1970). Especially important for practical application is the special assumption that there is a linear relation between the interactions and the thresholds, and that the slopes of the corresponding regression lines may differ between samples. This means that the latent variables follow the same type of distribution but may have different expectations and variances. Under this submodel, the estimation of parameters and the testing of hypotheses by the maximum likelihood method are described. The procedure is illustrated by a numerical example, and an outline is given of a cluster analysis based on the model parameters.
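In generic notation (a hedged sketch, not necessarily the paper's exact parameterization), the latent-variable form of a threshold model with sample-specific location and scale is

\[
P(Y \le k \mid \text{sample } g) = F\!\left(\frac{\tau_k - \mu_g}{\sigma_g}\right), \qquad k = 1, \dots, K-1,
\]

where the \(\tau_k\) are the thresholds, F is a fixed distribution function, and \(\mu_g, \sigma_g\) are the expectation and standard deviation of the latent variable in sample g; a linear relation between the threshold-by-sample interactions and the thresholds corresponds to this sample-specific rescaling of the thresholds.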

6.
Reliable computational foot models offer an alternative means to enhance knowledge of the biomechanics of the human foot. Model validation is one of the most critical aspects of the entire foot modeling and analysis process. This paper presents an in vivo experiment combining a motion capture system and a plantar pressure measurement platform to validate a three-dimensional finite element model of the human foot. The Magnetic Resonance Imaging (MRI) slices for the foot modeling and the experimental data for validation were both collected from the same volunteer subject. The validated components included the comparison of static model predictions of plantar force, plantar pressure and foot surface deformation during six loading conditions with the equivalent measured data. Throughout the experiment, foot surface deformation, plantar force and plantar pressure were recorded simultaneously during six different loaded standing conditions. The predictions of the current FE model were in good agreement with these experimental results.

7.
The possible threat posed by terrorists using chemical warfare agents (CWAs) against civilian targets is a major concern, reflecting the fact that CWAs are highly toxic to unprotected populations, with releases as vapors or aerosols likely to produce mass casualties on a highly localized basis within minutes or hours after an incident. A conceptual site model is developed and mixed model regression is used to estimate concentration values for the vesicant sulfur mustard (HD) based on the output from computational fluid dynamics (CFD) simulation following wind tunnel experimentation. The analysis provides a first approximation of the spatial and temporal distribution of potential exposures within a set of 50 m × 50 m × 2 m grids across a 1000 m width by 300 m height by 2250 m length domain in a geographic information system (GIS) environment. The HD concentration values are calculated as the log-averaged mean and 95% confidence interval for each grid at 1.9 d and 6.0 d after the initial release. The technique offers a statistically valid means of rapidly generating unbiased first approximations of concentration values after an initial release, as an alternative to extensive monitoring or multiple runs of CFD models to parameterize potential exposure to HD spatially and temporally.
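The log-averaged mean and 95% confidence interval for a single grid cell can be illustrated as below; the sample values are simulated stand-ins, not output from the study's CFD runs, and the function name is invented.

import numpy as np

def grid_log_stats(samples):
    """Log-scale summary for one grid cell (illustrative, not the study's code).

    `samples` are hypothetical HD concentrations (mg/m^3) for a single
    50 m x 50 m x 2 m cell, e.g. from repeated CFD realizations.
    """
    logs = np.log(samples)
    n = logs.size
    mean_log = logs.mean()
    se = logs.std(ddof=1) / np.sqrt(n)
    lo, hi = mean_log - 1.96 * se, mean_log + 1.96 * se
    # back-transform: geometric (log-averaged) mean and its 95% confidence interval
    return np.exp(mean_log), (np.exp(lo), np.exp(hi))

gm, ci = grid_log_stats(np.random.default_rng(1).lognormal(-2.0, 0.8, size=40))
print(f"log-averaged mean = {gm:.4f} mg/m^3, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")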

8.
Qualitative validation consists in showing that a model is able to mimic available observed data. In population level biological models, the available data frequently represent a group status, such as pool testing, rather than the individual statuses. They are aggregated. Our objective was to explore an approach for qualitative validation of a model with aggregated data and to apply it to validate a stochastic model simulating the bovine viral-diarrhoea virus (BVDV) spread within a dairy cattle herd. Repeated measures of the level of BVDV-specific antibodies in the bulk-tank milk (total milk production of a herd) were used to summarise the BVDV herd status. First, a domain of validation was defined to ensure a comparison restricted to dynamics of pathogen spread well identified among observed aggregated data (new herd infection with a wide BVDV spread). For simulations, scenarios were defined and simulation outputs at the individual animal level were aggregated at the herd level using an aggregation function. Comparison was done only for observed data and simulated aggregated outputs that were in the domain of validation. The validity of our BVDV model was not rejected. Drawbacks and ways of improvement of the approach are discussed.
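As an illustration of the aggregation step (the actual aggregation function used for the BVDV model is not given in the abstract), the fragment below milk-weights simulated per-cow antibody levels into a single bulk-tank value; all numbers are invented.

import numpy as np

rng = np.random.default_rng(2)
n_lactating = 60
antibody = rng.uniform(0.0, 1.0, size=n_lactating)    # per-cow antibody level (arbitrary units)
milk_kg  = rng.uniform(15.0, 35.0, size=n_lactating)  # per-cow daily milk yield, kg

# bulk-tank value = milk-weighted mean of the lactating cows' antibody levels
bulk_tank_antibody = np.average(antibody, weights=milk_kg)
print(f"simulated bulk-tank antibody level: {bulk_tank_antibody:.3f}")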

9.
Observed blood lead levels for young children from several communities are compared with blood lead levels predicted for those communities using the USEPA's Integrated Exposure Uptake Biokinetic (IEUBK) Model. In contrast to the comparisons described elsewhere, the blood lead levels observed in the communities considered here are not well represented by the model's predictions. The model's predictions for Midvale, UT; Sandy, UT; Cincinnati, OH; and a recent data set for Palmerton, PA, show considerable deviation from observation both for the geometric mean blood lead level and the percent of blood lead levels above 10 µg/dL. Various adjustments in the model to consider play area soils, site-specific geometric standard deviations and the time children spend away from their homes do not substantially improve the comparisons to observation. It is difficult to predict a priori the data sets for which the model will yield adequate predictions. This reduces the value of the model for use in communities where blood lead measurements have not been made, and suggests that caution should be exercised when using the model to set soil lead cleanup levels or to predict the result of remediation.
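A minimal sketch of the two comparison metrics used above, the geometric mean and the percentage of blood lead levels above 10 µg/dL, computed for a hypothetical IEUBK-style lognormal prediction and a hypothetical set of observed values; all numbers are invented for illustration.

import numpy as np
from scipy.stats import lognorm

# Hypothetical predicted geometric mean (GM) and geometric standard deviation (GSD)
# of children's blood lead (µg/dL); not actual model output for any community.
gm_pred, gsd_pred = 4.2, 1.6
dist = lognorm(s=np.log(gsd_pred), scale=gm_pred)
pct_above_10_pred = 100 * (1 - dist.cdf(10.0))

# Hypothetical observed blood lead measurements for the same community
observed = np.array([2.1, 3.5, 5.0, 6.8, 9.2, 11.5, 4.4, 3.0, 7.7, 12.3])
gm_obs = np.exp(np.mean(np.log(observed)))
pct_above_10_obs = 100 * np.mean(observed > 10.0)

print(f"predicted: GM = {gm_pred:.1f} µg/dL, % > 10 = {pct_above_10_pred:.1f}")
print(f"observed : GM = {gm_obs:.1f} µg/dL, % > 10 = {pct_above_10_obs:.1f}")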

10.
Summary With advances in modern medicine and clinical diagnosis, case–control data with characterization of finer subtypes of cases are often available. In matched case–control studies, missingness in exposure values often leads to deletion of the entire stratum, and thus entails a significant loss of information. When subtypes of cases are treated as categorical outcomes, the data are further stratified and deletion of observations becomes even more expensive in terms of precision of the category-specific odds-ratio parameters, especially under the multinomial logit model. The stereotype regression model for categorical responses lies intermediate between the proportional odds and the multinomial or baseline category logit model. The use of this class of models has been limited, as the structure of the model implies certain inferential challenges with nonidentifiability and nonlinearity in the parameters. We illustrate how to handle missing data in matched case–control studies with finer disease subclassification within the cases under a stereotype regression model. We present both a Monte Carlo-based full Bayesian approach and an expectation/conditional maximization algorithm for the estimation of model parameters in the presence of a completely general missingness mechanism. We illustrate our methods using data from an ongoing matched case–control study of colorectal cancer. Simulation results are presented under various missing data mechanisms and departures from modeling assumptions.
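For reference, the stereotype regression model referred to above is usually written (in generic notation) as

\[
\log \frac{P(Y = k \mid x)}{P(Y = 0 \mid x)} = \alpha_k + \phi_k\, \beta^{\top} x, \qquad k = 1, \dots, K,
\]

subject to identifiability constraints on the \(\phi_k\) (for example fixing one of them to 1 and, for an ordinal interpretation, ordering them). The products \(\phi_k \beta\) are the source of the nonidentifiability and nonlinearity mentioned in the abstract, and the shared coefficient vector \(\beta\) is what places the model between the proportional odds and baseline-category logit models.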

11.
The underwater light field is described as a stochastic process, with the vertical attenuation as a random variable. It is shown that the vertical attenuation integral is well described as a Normal process with uncorrelated increments. The attenuation processes within a water body are found to be quite independent of the incoming irradiance. Thus, the relative light intensity at any depth can be approximated by a Lognormal random variable. Based upon the expectation of this Lognormal variable, and the mean value of the incident irradiance, the irradiance delivered at any depth can be estimated, as well as the statistical distribution of the irradiance. Light data from a number of Norwegian soft-water lakes showed good fit to the model. Two productive lakes included in this study also had a light regime well described by the statistical model. However, the model should be extended to cater for seasonal variations in these applications.
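In symbols (a sketch consistent with the description above, not necessarily the paper's exact notation): with depth-dependent attenuation K(u),

\[
I(z) = I_0 \exp\!\left\{-\int_0^z K(u)\,du\right\},
\]

and if the attenuation integral is a Normal process with uncorrelated increments, \(\int_0^z K(u)\,du \sim N(\mu z, \sigma^2 z)\), then the relative intensity \(I(z)/I_0\) is Lognormal with

\[
E\!\left[\frac{I(z)}{I_0}\right] = \exp\!\left(-\mu z + \tfrac{1}{2}\sigma^2 z\right),
\]

so the expected irradiance at depth z follows by multiplying this expectation with the mean incident irradiance.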

12.
The Poisson regression model for the analysis of life table and follow-up data with covariates is presented. An example shows how this technique can be used to construct a parsimonious model which describes a set of survival data. All parameters in the model, as well as the hazard and survival functions, are estimated by maximum likelihood.
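A minimal sketch of a Poisson regression for grouped follow-up data, with person-years entering as an offset; the data and the statsmodels-based fit are illustrative, not taken from the paper.

import numpy as np
import statsmodels.api as sm

# Hypothetical grouped follow-up data: deaths and person-years by age band
# and exposure group (values are invented for illustration).
deaths       = np.array([3, 7, 12, 5, 11, 20])
person_years = np.array([1200., 1100., 900., 1000., 950., 800.])
age_mid      = np.array([45, 55, 65, 45, 55, 65])
exposed      = np.array([0, 0, 0, 1, 1, 1])

X = sm.add_constant(np.column_stack([age_mid, exposed]))
model = sm.GLM(deaths, X, family=sm.families.Poisson(),
               offset=np.log(person_years))      # log person-time as offset
fit = model.fit()
print(fit.summary())
print("rate ratio for exposure:", np.exp(fit.params[2]))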

13.
Recently, there has been a great deal of interest in the analysis of multivariate survival data. In most epidemiological studies, survival times within the same cluster are related because of unobserved risk factors such as environmental or genetic factors. Therefore, modelling of the dependence between events of correlated individuals is required to ensure correct inference on the effects of treatments or covariates on the survival times. In the past decades, extensions of the proportional hazards model have been widely considered for modelling multivariate survival data by incorporating a random effect which acts multiplicatively on the hazard function. In this article, we consider the proportional odds model, an alternative to the proportional hazards model in which the hazard ratio between individuals eventually converges to unity. This is a reasonable property, particularly when the treatment effect fades out gradually and the homogeneity of the population increases over time. The objective of this paper is to assess the influence of the random effect on the within-subject correlation and the population heterogeneity. We are particularly interested in the properties of the proportional odds model with a univariate random effect and with a correlated random effect. The correlations between survival times are derived explicitly for both choices of mixing distribution and are shown to be independent of the covariates. The time path of the odds function among the survivors is also examined to study the effect of the choice of mixing distribution. Modelling multivariate survival data using a univariate mixing distribution may be inadequate, as the random effect not only characterises the dependence of the survival times but also the conditional heterogeneity among the survivors. A robust estimate for the correlation of the logarithm of the survival times within a cluster is obtained irrespective of the choice of mixing distribution. The sensitivity of the estimate of the regression parameter under a misspecification of the mixing distribution is studied through simulation.
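As a hedged sketch of this model class (the exact placement of the random effect in the paper may differ), a frailty w acting multiplicatively on the baseline odds gives, conditionally on w,

\[
\frac{1 - S(t \mid x, w)}{S(t \mid x, w)} = w\, e^{\beta^{\top} x}\, \frac{1 - S_0(t)}{S_0(t)},
\]

under which the hazard ratio between two covariate values tends to unity as t grows, in contrast to the proportional hazards model, where it remains constant over time.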

14.
We test for universal patterns in cultural evolution by Guttman scaling on two different worldwide samples of archaeological traditions and on well-known archaeological sequences. The evidence is generally consistent with universal evolutionary sequences. We also present evidence for some punctuated evolutionary events.

15.
Summary Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause bias due to the problems of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied to both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
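Schematically (generic notation, not the authors' exact specification), the two-stage regression calibration proceeds as follows. Stage 1 fits the mixed model to the longitudinal biomarker alone,

\[
Y_i(t_{ij}) = \mu(t_{ij}) + Z_i(t_{ij})^{\top} b_i + \varepsilon_{ij},
\]

and Stage 2 plugs the posterior expectations \(\hat{b}_i = E(b_i \mid Y_i)\) into a Cox model,

\[
\lambda_i(t) = \lambda_0(t) \exp\{\gamma^{\top} \hat{b}_i + \theta^{\top} x_i\}.
\]

The bias discussed above arises because Stage 1 ignores informative dropout driven by the event process; the alternative proposed in the article modifies how the calibration step is carried out rather than moving to full joint estimation.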

16.
Imaging mass spectrometry (IMS) has developed into a powerful tool allowing label-free detection of numerous biomolecules in situ. In contrast to shotgun proteomics, proteins and peptides can be detected directly from biological tissues and correlated with tissue morphology, yielding crucial clinical information. However, direct identification of the detected molecules is currently challenging for MALDI–IMS, compelling researchers to use complementary techniques and resource-intensive experimental setups. Despite these strategies, sufficient information could not be extracted because of the lack of an optimal data-combination strategy and software. Here, we introduce a new open-source software package, ImShot, that aims at identifying peptides obtained in MALDI–IMS. This is achieved by combining information from IMS and shotgun proteomics (LC–MS) measurements of serial sections of the same tissue. The software takes advantage of a two-group comparison to determine the search space of IMS masses after deisotoping the corresponding spectra. Ambiguity in the annotation of IMS peptides is eliminated by the introduction of a novel scoring system that identifies the most likely parent protein of a detected peptide in the corresponding IMS dataset. Thanks to its modular structure, the software can also handle LC–MS data separately and display interactive enrichment plots and enriched Gene Ontology terms or cellular pathways. The software has been built as a desktop application with a conveniently designed graphical user interface to provide users with a seamless experience in data analysis. ImShot runs on all three major desktop operating systems and is freely available under the Massachusetts Institute of Technology (MIT) license.

17.
This paper describes how Cox's proportional hazards model may be used to analyze dichotomized factorial data obtained from a right-censored epidemiological study where time to response is of interest. Exact maximum likelihood estimates of the relative mortality rates are derived for any number of prognostic factors, but for the sake of simplicity the mathematical details are presented for the case of two factors. This method is not based on the life table procedure. Kaplan-Meier estimates are obtained for the survival function of the internal control population, which are in turn used to determine the expected number of deaths in the study population. The asymptotic (large sample) joint sampling distribution of the relative mortality rates is derived and some relevant simultaneous and conditional statistical tests are discussed. The relative mortality rates of several prognostic factors may be jointly considered as the multivariate extension of the familiar standardized mortality ratio (SMR) of epidemiological studies. A numerical example is discussed to illustrate the method.
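For a single factor, the multivariate extension mentioned above reduces to the familiar standardized mortality ratio; schematically (hedged, generic notation),

\[
\widehat{\mathrm{SMR}} = \frac{O}{E}, \qquad E = \sum_{i=1}^{n}\left[1 - \hat{S}_{\mathrm{KM}}(t_i)\right],
\]

where O is the observed number of deaths in the study population, \(\hat{S}_{\mathrm{KM}}\) is the Kaplan-Meier survival estimate from the internal control population, and \(t_i\) is the follow-up time of subject i, so that E is the expected number of deaths under the control population's mortality experience.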

18.
The precise mechanisms of mercury accumulation and retention are still unclear. Generally, the association of mercury with selenium is used to explain these phenomena. It seems that the presence of co-accumulated endogenous Se can protect cells from the harmful effects of Hg. However, as speculated by some authors, this binding of Se to Hg can also result in a relative deficiency of biologically available Se needed for selenoenzyme synthesis. Starting from the assumption that Hg deposited in tissues is bound to Se in a 1:1 ratio, the quantity of non-Hg-bound Se can be calculated as the difference between the molar contents of the two elements (Se_mol − Hg_mol). In this study we applied such an approach to the data from our previous investigation, where Hg and Se concentrations were determined in autopsy samples from mercury-exposed retired Idrija mercury mine workers, Idrija residents living in a Hg-contaminated environment, and a control group with no known Hg exposure from the environment. Based on these data we tried to estimate the influence of Hg exposure on the physiologically available selenium content in selected tissues, particularly endocrine glands and brain tissues. Comparing the calculated values of (Se_mol − Hg_mol), it was found that for Idrija residents the values were similar to those of the control group and, as expected, diminished values were found in some mercury-loaded organs of retired Idrija miners. It could be speculated that in Idrija residents the Hg sequestration of selenium is sufficiently compensated by increased Se levels, but that particularly in active miners, and in some organs of retired miners, the activity and/or synthesis of selenoenzymes could be disturbed. Part of the study was presented at the 7th International Conference on Mercury as a Global Pollutant, June 27–July 2, 2004, Ljubljana, Slovenia (Falnoga et al. 2000).
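A worked illustration of the (Se_mol − Hg_mol) difference; the tissue concentrations below are invented and serve only to show the unit conversion, not to reproduce any result of the study.

# Illustrative calculation of the molar Se excess (Se_mol - Hg_mol) from
# hypothetical tissue concentrations.
M_SE, M_HG = 78.97, 200.59          # molar masses, g/mol

se_ng_per_g = 1500.0                # hypothetical tissue Se, ng/g wet weight
hg_ng_per_g = 2000.0                # hypothetical tissue Hg, ng/g wet weight

se_nmol = se_ng_per_g / M_SE        # about 19.0 nmol/g
hg_nmol = hg_ng_per_g / M_HG        # about 10.0 nmol/g

available_se = se_nmol - hg_nmol    # Se not sequestered by Hg, assuming 1:1 binding
print(f"Se: {se_nmol:.1f} nmol/g, Hg: {hg_nmol:.1f} nmol/g, "
      f"non-Hg-bound Se: {available_se:.1f} nmol/g")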

19.
A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also, measurements on some of the predictors are not available; instead, measurements on a surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of error. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example.

20.
In this paper, a statistical model for clinical trials is presented for the special situation in which a varying and unstructured number of binary responses is obtained from each subject. The assumptions of the model are the following: 1) for each subject there is a (constant) individual Bernoulli parameter determining the distribution of the binary responses of this subject; 2) the Bernoulli parameters associated with the subjects are realizations of independent random variables with distributions Pg in treatment group g (g = 1, 2, …, G); 3) given the value of the Bernoulli parameter, the observations are stochastically independent within each subject. Under these assumptions, a test statistic is derived to test the hypothesis H0: E(P1) = E(P2) = … = E(PG). It is proven, and demonstrated by simulations, that the test statistic asymptotically (i.e. for a large number of subjects) follows the χ²-distribution.

