Similar Articles
 20 similar articles found (search time: 390 ms)
1.
Recent Bayesian methods for the analysis of infectious disease outbreak data using stochastic epidemic models are reviewed. These methods rely on Markov chain Monte Carlo techniques. Both temporal and non-temporal data are considered. The methods are illustrated with a number of examples featuring different models and datasets.
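As a minimal sketch of the MCMC machinery such reviews cover (a toy model, not one of the reviewed analyses): random-walk Metropolis sampling of a removal rate from hypothetical infectious-period data, assuming exponentially distributed periods and an Exp(1) prior. All numbers are invented for illustration.

```python
import random, math

# Hypothetical infectious periods (days); assumed Exp(gamma) durations.
periods = [2.1, 3.5, 1.2, 4.0, 2.8]
n, total = len(periods), sum(periods)

def log_post(g):
    # log-likelihood n*log(g) - g*total, plus Exp(1) log-prior -g
    if g <= 0:
        return -math.inf
    return n * math.log(g) - g * total - g

random.seed(1)
g, samples = 1.0, []
for i in range(20000):
    prop = g + random.gauss(0.0, 0.3)   # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(g):
        g = prop                        # accept
    if i >= 5000:                       # discard burn-in
        samples.append(g)

print(round(sum(samples) / len(samples), 2))  # posterior mean of gamma
```

With this conjugate setup the exact posterior is Gamma(6, 14.6), so the sampled mean should sit near 0.41; the same scheme extends to the less tractable epidemic likelihoods the review discusses.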

2.
3.
Essential to applying a mathematical model to a real-world application is calibrating the model to data. Methods for calibrating population models often become computationally infeasible when the population size (more generally, the size of the state space) becomes large, or when other complexities, such as time-dependent transition rates or sampling error, are present. Continuing previous work in this series on the use of diffusion approximations for efficient calibration of continuous-time Markov chains, I present efficient techniques for time-inhomogeneous chains and for accounting for observation error. Observation error (partial observability) is accounted for by joint estimation using a scaled unscented Kalman filter for state-space models. The methodology is illustrated with models of disease dynamics incorporating a seasonal transmission rate in the presence of observation error, including applications to two influenza outbreaks and to measles in London in the pre-vaccination era.
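The unscented filter used in the paper generalizes the linear Kalman update to non-linear state-space models; the scalar linear update below is only meant to convey the correction idea, with all numbers illustrative rather than taken from the paper.

```python
# One Kalman correction step: blend a model-predicted case count with a
# noisy reported count, weighting by the two variances.
x_pred, P_pred = 120.0, 400.0   # predicted state (cases) and its variance
y, R_obs = 100.0, 100.0         # reported count and observation-error variance

K = P_pred / (P_pred + R_obs)   # Kalman gain: trust in the observation
x_post = x_pred + K * (y - x_pred)
P_post = (1.0 - K) * P_pred     # uncertainty shrinks after the update

print(round(x_post, 2), round(P_post, 2))  # 104.0 80.0
```

The unscented variant replaces the linear gain computation with deterministic "sigma point" propagation through the non-linear epidemic model, but the predict-then-correct structure is the same.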

4.
Although temporally forced transmission drives many infectious diseases, analytical insight into its role when combined with stochastic disease processes and non-linear transmission has received little attention. During disease outbreaks, however, the absence of saturation effects early on in well-mixed populations means that epidemic models may be linearised, and we can calculate outbreak properties, including the effects of temporal forcing on fade-out, disease emergence and system dynamics, via analysis of the associated master equations. The approach is illustrated for the unforced and forced SIR and SEIR epidemic models. We demonstrate that in unforced models, initial conditions (and any uncertainty therein) play a stronger role in driving outbreak properties than the basic reproduction number R0, while the same properties are highly sensitive to small-amplitude temporal forcing, particularly when R0 is small. Although illustrated for the SIR and SEIR models, the master equation framework may be applied to more realistic models, though analytical intractability scales rapidly with increasing system dimensionality. One application of these methods is obtaining a better understanding of the rate at which vector-borne and waterborne infectious diseases invade new regions given variability in environmental drivers, a particularly important question when addressing potential shifts in the global distribution and intensity of infectious diseases under climate change.
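The abstract's analysis is analytic (via master equations); as a numerical companion, a Gillespie simulation of the forced SIR model it describes can be sketched as follows, with the sinusoidal transmission rate and all parameter values assumed for illustration.

```python
import math, random

def sir_forced(S, I, beta0=1.5, eps=0.1, gamma=1.0, N=1000, seed=0):
    """Exact stochastic simulation of SIR with beta(t) = beta0*(1 + eps*cos(2*pi*t)).

    Returns the final outbreak size N - S; early fade-out shows up as a
    small return value. Evaluating beta at the current time is an
    approximation for the time-inhomogeneous jump process.
    """
    rng = random.Random(seed)
    t = 0.0
    while I > 0:
        beta = beta0 * (1.0 + eps * math.cos(2 * math.pi * t))
        rate_inf = beta * S * I / N          # infection rate
        rate_rec = gamma * I                 # recovery rate
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # time to next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1              # infection
        else:
            I -= 1                           # recovery
    return N - S

print(sir_forced(S=999, I=1))
```

Repeating the simulation over many seeds and forcing phases gives the fade-out and emergence probabilities that the master-equation approach computes analytically.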

5.
An estimation of the immunity coverage needed to prevent future outbreaks of an infectious disease is considered for a community of households. Data on outbreak size in a sample of households from one epidemic are used to derive maximum likelihood estimates and confidence bounds for parameters of a stochastic model for disease transmission in a community of households. These parameter estimates induce estimates and confidence bounds for the basic reproduction number and the critical immunity coverage, which are the parameters of main interest when aiming to prevent major outbreaks in the future. The case when individuals are homogeneous, apart from the size of their household, is considered in detail. The generalization to the case with variable infectivity, susceptibility and/or mixing behaviour is discussed more briefly. The methods are illustrated with an application to data on influenza in Tecumseh, Michigan.
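For a homogeneously mixing population the critical immunity coverage reduces to the familiar 1 - 1/R0 threshold; household structure, which the paper models explicitly, modifies this. A quick sketch of the simple case:

```python
def critical_coverage(R0):
    """Fraction that must be immune so the effective reproduction
    number (1 - v) * R0 falls to 1; zero if R0 is already below 1."""
    return max(0.0, 1.0 - 1.0 / R0)

print(round(critical_coverage(1.4), 3))  # 0.286: roughly 29% must be immune
```

The household-model threshold is computed the same way but with a household-adjusted reproduction number in place of R0, which generally raises the required coverage.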

6.
Transmission events are the fundamental building blocks of the dynamics of any infectious disease. Much about the epidemiology of a disease can be learned when these individual transmission events are known or can be estimated. Such estimations are difficult and generally feasible only when detailed epidemiological data are available. The genealogy estimated from genetic sequences of sampled pathogens is another rich source of information on transmission history. Optimal inference of transmission events calls for the combination of genetic data and epidemiological data into one joint analysis. A key difficulty is that the transmission tree, which describes the transmission events between infected hosts, differs from the phylogenetic tree, which describes the ancestral relationships between pathogens sampled from these hosts. The trees differ both in timing of the internal nodes and in topology. These differences become more pronounced when a higher fraction of infected hosts is sampled. We show how the phylogenetic tree of sampled pathogens is related to the transmission tree of an outbreak of an infectious disease, by the within-host dynamics of pathogens. We provide a statistical framework to infer key epidemiological and mutational parameters by simultaneously estimating the phylogenetic tree and the transmission tree. We test the approach using simulations and illustrate its use on an outbreak of foot-and-mouth disease. The approach unifies existing methods in the emerging field of phylodynamics with transmission tree reconstruction methods that are used in infectious disease epidemiology.

7.
Mass vaccination programmes aim to maintain the effective reproduction number R of an infection below unity. We describe methods for monitoring the value of R using surveillance data. The models are based on branching processes in which R is identified with the offspring mean. We derive unconditional likelihoods for the offspring mean using data on outbreak size and outbreak duration. We also discuss Bayesian methods, implemented by Metropolis-Hastings sampling. We investigate by simulation the validity of the models with respect to depletion of susceptibles and under-ascertainment of cases. The methods are illustrated using surveillance data on measles in the USA.
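A size-based estimator implied by this branching-process identification can be sketched directly: for Poisson offspring with mean R < 1, the expected final outbreak size is 1/(1 - R), so the maximum likelihood estimate from observed final sizes is 1 - 1/mean(sizes). The outbreak sizes below are invented for illustration; the paper's likelihoods additionally handle duration data and censoring.

```python
# Observed final sizes of (hypothetical) measles clusters, each seeded
# by one imported case.
sizes = [1, 1, 2, 1, 3, 1, 1, 5, 2, 1]

mean_size = sum(sizes) / len(sizes)   # 1.8
R_hat = 1.0 - 1.0 / mean_size         # MLE of the offspring mean
print(round(R_hat, 3))                # 0.444
```

An estimate well below 1, as here, is the signal a vaccination programme wants to see; values approaching 1 indicate waning control.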

8.
The method for virus titer determination of avian infectious bursal disease (IBD) live vaccine, developed long before regulatory validation guidelines existed, is a cell-culture-based biological assay intended for use in vaccine release testing. The aim of our study was to perform a validation, based on the fit-for-purpose principle, of an old 50% tissue culture infectious dose (TCID50) method according to the Guidelines of the International Cooperation on Harmonization of Technical Requirements for Registration of Veterinary Medicinal Products (VICH). This paper addresses challenges and discusses some key aspects that should be considered when validating biological methods. A different statistical approach, using non-parametric statistics, was introduced into the validation protocol in order to derive useful information from the experimental data; this approach is applicable to a wide range of methods. In conclusion, the virus titration method was shown to be precise, accurate, linear, robust and in accordance with current regulatory standards, indicating that no additional re-development or upgrading of the method is needed for its intended use.
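TCID50 endpoints in such assays are commonly computed with the Spearman-Kaerber formula; the sketch below implements one common form of it under the assumption that the dilution series brackets the full 0-100% response range. The dilutions and well proportions are invented, and the validated protocol in the paper may use a different variant.

```python
def tcid50_log10(log10_dilutions, prop_positive):
    """log10 of the 50% endpoint dilution (Spearman-Kaerber, one common form).

    Starts at the most concentrated dilution where all replicates are
    positive and sums the positive proportions from there onward.
    """
    d = abs(log10_dilutions[1] - log10_dilutions[0])          # log10 step
    i0 = max(i for i, p in enumerate(prop_positive) if p == 1.0)
    s = sum(prop_positive[i0:])
    return log10_dilutions[i0] - d * (s - 0.5)

dils = [-1, -2, -3, -4, -5, -6, -7, -8]                # ten-fold dilutions
props = [1.0, 1.0, 1.0, 1.0, 0.5, 0.25, 0.0, 0.0]      # fraction of wells CPE+
print(tcid50_log10(dils, props))                       # -5.25
```

An endpoint of -5.25 corresponds to a titer of about 10^5.25 TCID50 per inoculated volume; the paper's precision and linearity assessments are built on repeated determinations of this quantity.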

9.
Yang HC, Pan CC, Lu RC, Fann CS. Genetics 2005, 169(1):399-410
In the post-genome era, disease gene mapping using dense genetic markers has become an important tool for dissecting complex inheritable diseases. Locating disease susceptibility genes using DNA-pooling experiments is a potentially economical alternative to those involving individual genotyping. The foundation of a successful DNA-pooling association test is a precise and accurate estimation of allele frequency. In this article, we propose two new adjustment methods that correct for preferential amplification of nucleotides when estimating the allele frequency of single-nucleotide polymorphisms. We also discuss the effect of sample size when calibrating unequal allelic amplification. We conducted simulation studies to assess the performance of different adjustment procedures and found that our proposed adjustments are more reliable with respect to the estimation bias and root mean square error compared with the current approach. The improved performance not only improves the accuracy and precision of allele frequency estimations but also leads to more powerful disease gene mapping.
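A standard device underlying such adjustments (which the paper's two proposed methods refine) is the k-correction: estimate the preferential-amplification coefficient from known heterozygotes, where the two alleles are present 1:1, and use it to rescale the pooled signal. The peak heights below are invented for illustration.

```python
# Mean peak heights of the two alleles in known heterozygous individuals;
# any departure from 1:1 reflects amplification bias, not allele frequency.
het_A, het_B = 1.2, 0.8
k = het_A / het_B                 # amplification bias of allele A vs B (1.5)

# Raw peak heights measured in the DNA pool.
pool_A, pool_B = 3.0, 1.0
p_raw = pool_A / (pool_A + pool_B)        # naive frequency estimate
p_adj = pool_A / (pool_A + k * pool_B)    # bias-corrected estimate
print(round(p_raw, 3), round(p_adj, 3))   # 0.75 0.667
```

The correction pulls the estimate of the over-amplified allele downward; the paper's contribution is quantifying how the sample size used to estimate k propagates into the bias and root mean square error of the final frequency estimate.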

10.
11.
We develop a Bayesian approach to sample size computations for surveys designed to provide evidence of freedom from a disease or from an infectious agent. A population is considered "disease-free" when the prevalence or probability of disease is less than some threshold value. Prior distributions are specified for diagnostic test sensitivity and specificity and we test the null hypothesis that the prevalence is below the threshold. Sample size computations are developed using hypergeometric sampling for finite populations and binomial sampling for infinite populations. A normal approximation is also developed. Our procedures are compared with the frequentist methods of Cameron and Baldock (1998a, Preventive Veterinary Medicine 34, 1-17) using an example of foot-and-mouth disease. User-friendly programs for sample size calculation and analysis of survey data are available at http://www.epi.ucdavis.edu/diagnostictests/.
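A hedged frequentist analogue of the binomial (infinite-population) computation makes the logic concrete: find the smallest n such that, if prevalence were at the threshold, the chance of observing zero test-positives would be below alpha. The Bayesian version in the paper additionally places priors on sensitivity and specificity; the numbers below are illustrative.

```python
import math

def freedom_sample_size(p_star, sensitivity, alpha=0.05):
    """Smallest n with (1 - p_star*sensitivity)^n <= alpha, assuming
    perfect specificity and binomial (infinite-population) sampling."""
    p_detect = p_star * sensitivity      # per-animal detection probability
    return math.ceil(math.log(alpha) / math.log(1.0 - p_detect))

# Threshold prevalence 2%, test sensitivity 90%:
print(freedom_sample_size(p_star=0.02, sensitivity=0.9))  # 165
```

So roughly 165 all-negative samples are needed to claim freedom at the 5% level under these assumptions; imperfect specificity and finite populations (the hypergeometric case) change the arithmetic but not the structure.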

12.
A two-component model for counts of infectious diseases
We propose a stochastic model for the analysis of time series of disease counts as collected in typical surveillance systems on notifiable infectious diseases. The model is based on a Poisson or negative binomial observation model with two components: a parameter-driven component relates the disease incidence to latent parameters describing endemic seasonal patterns, which are typical for infectious disease surveillance data. An observation-driven or epidemic component is modeled with an autoregression on the number of cases at the previous time points. The autoregressive parameter is allowed to change over time according to a Bayesian changepoint model with unknown number of changepoints. Parameter estimates are obtained through Bayesian model averaging using Markov chain Monte Carlo techniques. We illustrate our approach through analysis of simulated data and real notification data obtained from the German infectious disease surveillance system, administered by the Robert Koch Institute in Berlin. Software to fit the proposed model can be obtained from http://www.statistik.lmu.de/~mhofmann/twins.
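The two-component structure can be sketched as a simulator: an endemic seasonal mean plus an autoregressive epidemic term feeding a Poisson observation model. For simplicity the autoregressive parameter is held fixed here rather than following the paper's changepoint model, and all parameter values are invented.

```python
import math, random

def simulate(T=104, lam=0.4, seed=3):
    """Weekly counts: mean_t = nu_t (endemic, seasonal) + lam * y_{t-1}."""
    rng = random.Random(seed)
    y, counts = 5, []
    for t in range(T):
        nu = math.exp(1.5 + 0.8 * math.sin(2 * math.pi * t / 52))  # endemic
        mean = nu + lam * y                                        # + epidemic
        # Poisson draw via inversion of the CDF (stdlib-only).
        u, k, p = rng.random(), 0, math.exp(-mean)
        cum = p
        while u > cum:
            k += 1
            p *= mean / k
            cum += p
        y = k
        counts.append(y)
    return counts

series = simulate()
print(len(series), min(series), max(series))
```

Fitting reverses this recipe: the latent seasonal curve, the autoregressive parameter and its changepoints are inferred from the observed counts by MCMC.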

13.
The primary aim of this review was to evaluate the state of knowledge of the geographical distribution of all infectious diseases of clinical significance to humans. A systematic review was conducted to enumerate cartographic progress, with respect to the data available for mapping and the methods currently applied. The results helped define the minimum information requirements for mapping infectious disease occurrence, and a quantitative framework for assessing the mapping opportunities for all infectious diseases. This revealed that of 355 infectious diseases identified, 174 (49%) have a strong rationale for mapping and of these only 7 (4%) had been comprehensively mapped. A variety of ambitions, such as the quantification of the global burden of infectious disease, international biosurveillance, assessing the likelihood of infectious disease outbreaks and exploring the propensity for infectious disease evolution and emergence, are limited by these omissions. An overview of the factors hindering progress in disease cartography is provided. It is argued that rapid improvement in the landscape of infectious disease mapping can be made by embracing non-conventional data sources, automation of geo-positioning and mapping procedures enabled by machine learning and information technology, respectively, in addition to harnessing the labour of the volunteer ‘cognitive surplus’ through crowdsourcing.

14.
The availability of epidemiological data in the early stages of an outbreak of an infectious disease is vital for modelers to make accurate predictions regarding the likely spread of disease and preferred intervention strategies. However, in some countries, the necessary demographic data are only available at an aggregate scale. We investigated the ability of models of livestock infectious diseases to predict epidemic spread and to identify optimal control policies when only imperfect, aggregated data are available. Taking a geographic information approach, we used land cover data to predict UK farm locations and investigated the influence of using these synthetic location data sets upon epidemiological predictions in the event of an outbreak of foot-and-mouth disease. When broadly classified land cover data were used to create synthetic farm locations, model predictions deviated significantly from those simulated on true data. However, when more resolved subclass land use data were used, moderate to highly accurate predictions of epidemic size, duration and optimal vaccination and ring culling strategies were obtained. This suggests that a geographic information approach may be useful where individual farm-level data are not available, to allow predictive analyses to be carried out regarding the likely spread of disease. This method can also be used for contingency planning in collaboration with policy makers to determine preferred control strategies in the event of a future outbreak of infectious disease in livestock.

15.
In infectious disease epidemiology, statistical methods are an indispensable component for the automated detection of outbreaks in routinely collected surveillance data. So far, methodology in this area has been largely of frequentist nature and has increasingly been taking inspiration from statistical process control. The present work is concerned with strengthening Bayesian thinking in this field. We extend the widely used approach of Farrington et al. and Heisterkamp et al. to a modern Bayesian framework within a time series decomposition context. This approach facilitates a direct calculation of the decision-making threshold while taking all sources of uncertainty in both prediction and estimation into account. More importantly, with the methodology it is now also possible to integrate covariate processes, e.g. weather influence, into the outbreak detection. Model inference is performed using fast and efficient integrated nested Laplace approximations, enabling the use of this method in routine surveillance at public health institutions. Performance of the algorithm was investigated by comparing simulations with existing methods as well as by analysing the time series of notified campylobacteriosis cases in Germany for the years 2002–2011, which include absolute humidity as a covariate process. Altogether, a flexible and modern surveillance algorithm is presented with an implementation available through the R package ‘surveillance’.
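The decision rule shared by this family of algorithms can be sketched simply: flag an outbreak when the current count exceeds an upper quantile of a predictive distribution built from history. Here the predictive is crudely approximated as Poisson with the historical mean; the paper instead uses a full Bayesian posterior predictive with seasonal terms and covariates, so this is a structural sketch only, on invented counts.

```python
import math

def poisson_upper_quantile(mean, q=0.995):
    """Smallest k with Poisson(mean) CDF >= q, by summing the PMF."""
    k, p = 0, math.exp(-mean)
    cum = p
    while cum < q:
        k += 1
        p *= mean / k
        cum += p
    return k

history = [12, 15, 9, 14, 11, 13, 10, 12]        # recent weekly counts
threshold = poisson_upper_quantile(sum(history) / len(history))
observed = 25                                    # this week's count
print(threshold, observed > threshold)           # alarm if above threshold
```

Replacing the plug-in Poisson with a posterior predictive is what lets the Bayesian version propagate estimation uncertainty into the threshold.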

16.
Phylodynamics - the field aiming to quantitatively integrate the ecological and evolutionary dynamics of rapidly evolving populations like those of RNA viruses - increasingly relies upon coalescent approaches to infer past population dynamics from reconstructed genealogies. As sequence data have become more abundant, these approaches are beginning to be used on populations undergoing rapid and rather complex dynamics. In such cases, the simple demographic models that current phylodynamic methods employ can be limiting. First, these models are not ideal for yielding biological insight into the processes that drive the dynamics of the populations of interest. Second, these models differ in form from mechanistic and often stochastic population dynamic models that are currently widely used when fitting models to time series data. As such, their use does not allow for both genealogical data and time series data to be considered in tandem when conducting inference. Here, we present a flexible statistical framework for phylodynamic inference that goes beyond these current limitations. The framework we present employs a recently developed method known as particle MCMC to fit stochastic, nonlinear mechanistic models for complex population dynamics to gene genealogies and time series data in a Bayesian framework. We demonstrate our approach using a nonlinear Susceptible-Infected-Recovered (SIR) model for the transmission dynamics of an infectious disease and show through simulations that it provides accurate estimates of past disease dynamics and key epidemiological parameters from genealogies with or without accompanying time series data.

17.
A comparison of calibration methods for stereo fluoroscopic imaging systems
Stereo (biplane) fluoroscopic imaging systems are considered the most accurate and precise systems to study joint kinematics in vivo. Calibration of a biplane fluoroscopy system consists of three steps: (1) correction for spatial image distortion; (2) calculation of the focus position; and (3) calculation of the relative position and orientation of the two fluoroscopy systems with respect to each other. In this study, we compared six methods for calibrating a biplane fluoroscopy system, including a new method using a novel nested-optimization technique. To quantify bias and precision, an electronic digital caliper instrumented with two tantalum markers on radiolucent posts was imaged in three configurations, and for each configuration placed in ten static poses distributed throughout the viewing volume. Bias and precision were calculated as the mean and standard deviation of the displacement of the markers measured between the three caliper configurations. The data demonstrated that it is essential to correct for image distortion when sub-millimeter accuracy is required. We recommend calibrating a stereo fluoroscopic imaging system using an accurately machined plate and a calibration cube, which improved accuracy 2-3 times compared to the other calibration methods. Once image distortion is properly corrected, the focus position should be determined using the Direct Linear Transformation (DLT) method for its increased speed and equivalent accuracy compared to the novel nested-optimization method. The DLT method also automatically provides the 3D fluoroscopy configuration. Using the recommended calibration methodology, bias and precision of 0.09 and 0.05 mm or better can be expected for measuring inter-marker distances.
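The DLT step recommended above is, at its core, a homogeneous least-squares problem: recover the 11-parameter projection from known 3D calibration points and their 2D image coordinates via SVD. The block below is a generic DLT solve on synthetic points, not the authors' implementation; the points are generated from a known projection purely so the recovered matrix can be checked.

```python
import numpy as np

rng = np.random.default_rng(0)
# A synthetic 3x4 camera matrix; the last row is chosen so projective
# depths stay safely positive for points in the unit cube.
P_true = np.vstack([rng.normal(size=(2, 4)), [0.1, 0.2, 0.1, 3.0]])
X = np.hstack([rng.uniform(-1, 1, (8, 3)), np.ones((8, 1))])  # 3D points (homog.)
uvw = X @ P_true.T
uv = uvw[:, :2] / uvw[:, 2:3]          # their projected 2D image coordinates

# Each point pair contributes two homogeneous linear equations in the
# 12 entries of P (11 free parameters up to scale).
rows = []
for (x, y, z, _), (u, v) in zip(X, uv):
    rows.append([x, y, z, 1, 0, 0, 0, 0, -u * x, -u * y, -u * z, -u])
    rows.append([0, 0, 0, 0, x, y, z, 1, -v * x, -v * y, -v * z, -v])
A = np.array(rows)
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)           # null vector = projection, up to scale

# Check: reproject with the estimated matrix and compare.
uvw2 = X @ P_est.T
uv2 = uvw2[:, :2] / uvw2[:, 2:3]
print(float(np.abs(uv2 - uv).max()) < 1e-6)  # True
```

In a real calibration the "known" 3D points come from the machined calibration object, and the imaged marker centroids (after distortion correction) play the role of `uv`.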

18.
The over-representation approach (ORA) is the most widely accepted method for functional analysis of microarray datasets. The ORA is computationally efficient and robust; however, it suffers from an inability to compare results from multiple gene lists, particularly with time-course experiments or those involving multiple treatments. To overcome this limitation, a novel method termed the Dynamic Impact Approach (DIA) is proposed. The DIA provides an estimate of the biological impact of the experimental conditions and the direction of the impact. The impact is obtained by combining the proportion of differentially expressed genes (DEG) with the log2 mean fold change and mean -log P-value of genes associated with the biological term. The direction of the impact is calculated as the difference between the impact of up-regulated DEG and that of down-regulated DEG associated with the biological term. The DIA was validated using microarray data from a time-course experiment on the bovine mammary gland across the lactation cycle. Several annotation databases were analyzed with the DIA and compared to the same analysis performed by the ORA. The DIA highlighted that during lactation both BTA6 and BTA14 were the most impacted chromosomes; among Uniprot tissues, those related to the lactating mammary gland were the most positively impacted; within KEGG pathways, 'Galactose metabolism' and several metabolism categories related to lipid synthesis were among the most impacted and induced; within Gene Ontology, 'lactose biosynthesis' among Biological processes and 'Lactose synthase activity' and 'Stearoyl-CoA 9-desaturase activity' among Molecular functions were the most impacted and induced. With the exception of the terms 'Milk', 'Milk protein' and 'Mammary gland' among Uniprot tissues and SP_PIR_Keywords, the ORA failed to capture as significantly enriched (i.e., biologically relevant) any term known to be associated with the lactating mammary gland. The results indicate that the DIA is a biologically sound approach for the analysis of time-course experiments and represents an alternative to the ORA for functional analysis.
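A literal reading of the impact formula described above can be sketched as follows; the exact combination and weighting used by the DIA may differ, so the multiplicative form and all numbers here are assumptions for illustration only.

```python
def impact(prop_deg, mean_log2fc, mean_neglog_p):
    """Assumed reading of the DIA score for one annotation term:
    proportion of DEG x |mean log2 fold change| x mean(-log P-value)."""
    return prop_deg * abs(mean_log2fc) * mean_neglog_p

# Hypothetical summaries for one term, split by regulation direction.
up = impact(0.30, 2.1, 3.0)      # up-regulated DEG in the term
down = impact(0.10, -1.5, 2.0)   # down-regulated DEG in the term
direction = up - down            # positive => term is overall induced
print(round(direction, 2))       # 1.59
```

A positive direction marks a term as induced (as 'Galactose metabolism' was during lactation), while the magnitude ranks terms by biological impact rather than by enrichment P-value alone.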

19.
To test for association between a disease and a set of linked markers, or to estimate relative risks of disease, several different methods have been developed. Many methods for family data require that individuals be genotyped at the full set of markers and that phase can be reconstructed. Individuals with missing data are excluded from the analysis. This can result in an important decrease in sample size and a loss of information. A possible solution to this problem is to use missing-data likelihood methods. We propose an alternative approach, namely the use of multiple imputation. Briefly, this method consists in estimating from the available data all possible phased genotypes and their respective posterior probabilities. These posterior probabilities are then used to generate replicate imputed data sets via a data augmentation algorithm. We performed simulations to test the efficiency of this approach for case/parent trio data and found that the multiple imputation procedure generally gave unbiased parameter estimates with correct type I error and confidence-interval coverage. Multiple imputation had some advantages over missing-data likelihood methods with regard to ease of use and model flexibility. Multiple imputation methods represent promising tools in the search for disease susceptibility variants.

20.
The evolution of “informatics” technologies has the potential to generate massive databases, but the extent to which personalized medicine may be realized depends on the extent to which these rich databases can be used to advance understanding of disease molecular profiles and, ultimately, be integrated for treatment selection; this necessitates robust methodology for dimension reduction. Yet statistical methods proposed to address the challenges arising from the high-dimensionality of omics-type data predominantly rely on linear models and emphasize associations deriving from prognostic biomarkers. Existing methods are often limited in discovering predictive biomarkers that interact with treatment, and fail to elucidate the predictive power of their resultant selection rules. In this article, we present a Bayesian predictive method for personalized treatment selection that is devised to integrate both the treatment-predictive and disease-prognostic characteristics of a particular patient's disease. The method appropriately characterizes the structural constraints inherent to prognostic and predictive biomarkers, and hence properly utilizes these complementary sources of information for treatment selection. The methodology is illustrated through a case study of lower-grade glioma. Theoretical considerations are explored to demonstrate the manner in which treatment selection is impacted by prognostic features. Additionally, simulations based on an actual leukemia study are provided to ascertain the method's performance with respect to selection rules derived from competing methods.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). ICP license: 京ICP备09084417号