Similar Documents
20 similar documents retrieved
1.
Bowman et al. used epidemiologic data to test a model in which subjects were classified as being "in-resonance" or "not-in-resonance" for 60-Hz magnetic-field exposures depending on single static magnetic-field measurements at the centers of their bedrooms. A second paper by Swanson concluded that a single static magnetic-field measurement is insufficient to meaningfully characterize a residential environment. The main objective of this study was to investigate exposure-related questions raised by these two papers in two U.S. data sets, one containing single spot measurements of static magnetic fields at two locations in homes located in eight states, and the other repeated spot measurements (seven times during the course of one year) of the static magnetic fields at the centers of bedrooms and family rooms and on the surfaces of beds in 51 single-family homes in two metropolitan areas. Using Bowman's criterion, bedrooms were first classified as being in-resonance or not-in-resonance based on the average of repeated measurements of the static magnetic field measured on the bed where the presumed important exposure actually occurred. Bedrooms were then classified a second time using single spot measurements taken at the centers of bedrooms, centers of family rooms, or on the surfaces of beds, as would be done in the typical epidemiologic study. The kappa statistics characterizing the degree of concordance between the first (on-bed averages) and second (spot measurements) methods of assessing resonance status were 0.44, 0.33, and 0.67, respectively. This level of misclassification could significantly affect the results of studies involving the determination of resonance status.
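To make the concordance measure concrete, the sketch below computes a kappa statistic between two hypothetical binary resonance classifications (one from on-bed averages, one from single spot measurements); the arrays are illustrative only, not data from the study.

```python
# Minimal sketch: Cohen's kappa for agreement between two binary
# resonance classifications (hypothetical example data, not study data).
from sklearn.metrics import cohen_kappa_score

# 1 = "in-resonance", 0 = "not-in-resonance"
on_bed_average = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # reference classification
single_spot    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]   # spot-measurement classification

kappa = cohen_kappa_score(on_bed_average, single_spot)
print(f"kappa = {kappa:.2f}")  # values in the 0.3-0.7 range indicate only moderate agreement
```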

2.
The focus of this paper is a general relationship proposed by May (Amer. Natur. 107 (1973)) between the stability properties of stochastic models incorporating environmental variation and the stability properties of the deterministic models from which they are derived. The concepts of stochastic stability underlying this conjectured relationship are discussed and compared to the standard definitions of deterministic stability as well as alternative criteria for stability in stochastic models. It is shown by example that May's qualitative stability criterion does not ensure stability in any sense unless restrictive conditions on the form of the model are satisfied. Even under these conditions, the criterion, which is based on linearization, generally provides information only about the local dynamics of multispecies models. The applicability of such information to stochastic limiting similarity theory is discussed and alternative methods of analysis are proposed.

3.
The statistical criterion for evaluation of individual bioequivalence (IBE) between generic and innovative products often involves a function of the second moments of normal distributions. Under replicated crossover designs, the aggregate criterion for IBE proposed by the guidance of the U.S. Food and Drug Administration (FDA) contains the squared mean difference, variance of subject-by-formulation interaction, and the difference in within-subject variances between the generic and innovative products. The upper confidence bound for the linearized form of the criterion derived by the modified large sample (MLS) method is proposed in the 2001 U.S. FDA guidance as a testing procedure for evaluation of IBE. Due to the complexity of the power function for the criterion based on the second moments, literature on sample size determination for the inference of IBE is scarce. Under the two-sequence and four-period crossover design, we derive the asymptotic distribution of the upper confidence bound of the linearized criterion. Hence the asymptotic power can be derived for sample size determination for evaluation of IBE. Results of numerical studies are reported. Discussion of sample size determination for evaluation of IBE based on the aggregate criterion of the second moments in practical applications is provided.
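For orientation, the FDA aggregate IBE criterion referred to above is commonly written as follows (a sketch of the standard form from the 2001 guidance as usually cited; notation may differ slightly from the paper):

```latex
% Aggregate IBE criterion (reference-scaled form) and its linearized version
\theta_{IBE} = \frac{(\mu_T-\mu_R)^2 + \sigma_D^2 + (\sigma_{WT}^2-\sigma_{WR}^2)}
                    {\max(\sigma_{W0}^2,\ \sigma_{WR}^2)} \le \theta_I,
\qquad
\eta = (\mu_T-\mu_R)^2 + \sigma_D^2 + \sigma_{WT}^2 - \sigma_{WR}^2
       - \theta_I\,\max(\sigma_{W0}^2,\ \sigma_{WR}^2) \le 0,
```

where \mu_T - \mu_R is the mean difference, \sigma_D^2 the subject-by-formulation interaction variance, \sigma_{WT}^2 and \sigma_{WR}^2 the within-subject variances, \sigma_{W0}^2 a regulatory constant, and \theta_I the regulatory bound; IBE is claimed when the MLS upper confidence bound for the linearized quantity \eta falls below zero.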

4.
Greenland and Mickey (1988) derived a closed-form collapsibility test and confidence interval for IxJxK contingency tables with qualitative factors, and presented a small simulation study of its performance. We show how their method can be extended to regression models linear in the natural parameter of a one-parameter exponential family, in which the parameter of interest is the difference of “crude” and “adjusted” regression coefficients. A simplification of the method yields a generalization of the test for omitted covariates given by Hausman (1978) for ordinary linear regression. We present an application to a study of coffee use and myocardial infarction, and a simulation study which indicates that the simplified test performs adequately in typical epidemiologic settings.
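As a hedged sketch of the kind of statistic involved (the exact construction in the paper may differ), the Hausman-type test for omitted covariates in ordinary linear regression compares the crude and adjusted coefficients, using the fact that under the null of collapsibility the variance of their difference equals the difference of their variances:

```latex
% Hausman-type statistic for a single coefficient of interest
H = \frac{(\hat\beta_{\mathrm{adj}} - \hat\beta_{\mathrm{crude}})^2}
         {\widehat{\mathrm{Var}}(\hat\beta_{\mathrm{adj}}) - \widehat{\mathrm{Var}}(\hat\beta_{\mathrm{crude}})}
    \;\overset{H_0}{\sim}\; \chi^2_1 .
```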

5.
Neuroeconomic conditions for "rational addiction" (Becker & Murphy 1988) have remained unknown. This paper derived the conditions for "rational addiction" by utilizing a nonlinear time-perception theory of "hyperbolic" discounting, which is mathematically equivalent to the q-exponential intertemporal choice model based on Tsallis' statistics. It is shown that (i) the Arrow-Pratt measure for temporal cognition corresponds to the degree of irrationality (i.e., Prelec's "decreasing impatience" parameter of temporal discounting) and (ii) rationality in addicts is controlled by a nondimensionalization parameter of the logarithmic time-perception function. Furthermore, the present theory illustrates the possibility that addictive drugs increase impulsivity via dopaminergic neuroadaptation without increasing irrationality. Future directions in the application of the model to studies in neuroeconomics are discussed.
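For readers unfamiliar with the q-exponential intertemporal choice model mentioned above, it is commonly written as follows (a sketch following Takahashi's Tsallis-statistics formulation; the symbols here are generic, not necessarily the paper's notation):

```latex
% q-exponential discounting of a reward of value V(0) delayed by D
V(D) = \frac{V(0)}{\exp_q(k_q D)}
     = \frac{V(0)}{\bigl[\,1 + (1-q)\,k_q D\,\bigr]^{1/(1-q)}},
```

which reduces to exponential (fully "rational") discounting as q -> 1 and to simple hyperbolic discounting at q = 0, so that the departure of q from 1 indexes the decreasing impatience discussed above.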

6.
MOTIVATION: Accurate subcategorization of tumour types through gene-expression profiling requires analytical techniques that estimate the number of categories or clusters rigorously and reliably. Parametric mixture modelling provides a natural setting to address this problem. RESULTS: We compare a criterion for model selection that is derived from a variational Bayesian framework with a popular alternative based on the Bayesian information criterion. Using simulated data, we show that the variational Bayesian method is more accurate in finding the true number of clusters in situations that are relevant to current and future microarray studies. We also compare the two criteria using freely available tumour microarray datasets and show that the variational Bayesian method is more sensitive to capturing biologically relevant structure.
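The two selection styles being compared can be loosely illustrated on synthetic data with off-the-shelf Gaussian mixtures, as in the sketch below; this shows BIC-based selection versus a variational Bayesian mixture that prunes weakly supported components, and is only an illustration, not the paper's specific variational criterion or its microarray pipeline.

```python
# Sketch: choosing the number of clusters by BIC vs. a variational Bayesian
# mixture that suppresses superfluous components (synthetic data, 3 true clusters).
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0.0, 3.0, 6.0)])

# BIC: fit a separate mixture for each candidate K and keep the minimiser.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X) for k in range(1, 8)}
print("K chosen by BIC:", min(bics, key=bics.get))

# Variational Bayes: fit once with a generous K; weakly supported components
# are shrunk towards zero weight and can be discarded by thresholding.
vb = BayesianGaussianMixture(n_components=7, weight_concentration_prior=1e-2,
                             random_state=0).fit(X)
print("K suggested by VB:", int(np.sum(vb.weights_ > 0.02)))
```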

7.
J. Benichou, M. H. Gail. Biometrics, 1990, 46(4): 991-1003
The attributable risk (AR), defined as AR = [Pr(disease) - Pr(disease | no exposure)]/Pr(disease), measures the proportion of disease risk that is attributable to an exposure. Recently Bruzzi et al. (1985, American Journal of Epidemiology 122, 904-914) presented point estimates of AR based on logistic models for case-control data to allow for confounding factors and secondary exposures. To produce confidence intervals, we derived variance estimates for AR under the logistic model and for various designs for sampling controls. Calculations for discrete exposure and confounding factors require covariances between estimates of the risk parameters of the logistic model and the proportions of cases with given levels of exposure and confounding factors. These covariances are estimated from Taylor series expansions applied to implicit functions. Similar calculations for continuous exposures are derived using influence functions. Simulations indicate that those asymptotic procedures yield reliable variance estimates and confidence intervals with near nominal coverage. An example illustrates the usefulness of variance calculations in selecting a logistic model that is neither so simplified as to exhibit systematic lack of fit nor so complicated as to inflate the variance of the estimate of AR.
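For reference, the attributable risk defined above, together with the case-based expression of Bruzzi et al. that the logistic-model estimates build on, is commonly written as follows (a sketch of the standard formulas; notation may differ from the paper):

```latex
% Attributable risk and Bruzzi's case-based expression
AR = \frac{\Pr(D) - \Pr(D \mid \bar{E})}{\Pr(D)},
\qquad
AR = 1 - \sum_{j} \frac{\rho_j}{RR_j},
```

where \rho_j is the proportion of cases falling in exposure/confounder stratum j and RR_j is the relative risk of stratum j versus the reference level, estimated by the odds ratio from the fitted logistic model.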

8.
In non-randomized studies, the assessment of a causal effect of treatment or exposure on outcome is hampered by possible confounding. Applying multiple regression models including the effects of treatment and covariates on outcome is the well-known classical approach to adjust for confounding. In recent years other approaches have been promoted. One of them is based on the propensity score and considers the effect of possible confounders on treatment as a relevant criterion for adjustment. Another proposal is based on using an instrumental variable. Here inference relies on a factor, the instrument, which affects treatment but is thought to be otherwise unrelated to outcome, so that it mimics randomization. Each of these approaches can basically be interpreted as a simple reweighting scheme, designed to address confounding. The procedures will be compared with respect to their fundamental properties, namely, which bias they aim to eliminate, which effect they aim to estimate, and which parameter is modelled. We will expand our overview of methods for analysis of non-randomized studies to methods for analysis of randomized controlled trials and show that analyses of both study types may target different effects and different parameters. The considerations will be illustrated using a breast cancer study with a so-called Comprehensive Cohort Study design, including a randomized controlled trial and a non-randomized study in the same patient population as sub-cohorts. This design offers ideal opportunities to discuss and illustrate the properties of the different approaches.
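The remark that the propensity-score approach can be interpreted as a simple reweighting scheme can be made concrete with a short inverse-probability-weighting sketch; the variable names and simulated data below are hypothetical and this is not the breast cancer analysis described above.

```python
# Sketch: propensity-score adjustment as inverse-probability weighting (IPW).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)                                   # confounder
t = rng.binomial(1, 1 / (1 + np.exp(-x)))                # treatment depends on x
y = 1.0 * t + 0.8 * x + rng.normal(size=n)               # outcome; true effect = 1.0

# Step 1: model treatment given the confounder (the propensity score).
ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

# Step 2: reweight so that the confounder is balanced across treatment groups.
w = t / ps + (1 - t) / (1 - ps)
effect = np.average(y[t == 1], weights=w[t == 1]) - np.average(y[t == 0], weights=w[t == 0])
print(f"IPW-estimated treatment effect: {effect:.2f} (naive: {y[t==1].mean() - y[t==0].mean():.2f})")
```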

9.
The choice of the referent entity in comparative epidemiologic studies is crucial to obtaining a valid and sharp contrast. A commonplace procedure in large-sample significance testing for (multiple) fourfold (and extended) tables is to base the estimation of the expected value (and/or the variance) on the combined study and comparison experience. This procedure is problematic, however, because at interim stages of the study it may hide the underlying difference in rates. Instead, the probability of realizing the observed number of exposed cases can be evaluated unconditionally under a binomial model whose Bernoulli parameter is estimated solely from the rate in the referent series. This outlook is carried over to two-sided (bilateral) testing. A chi-square criterion accommodating this viewpoint could be used to canvass the accrued data when deciding whether to continue data collection. A test employing exact variance estimates is derived, and various methods for computing an 'exact' significance probability are developed. The procedure is exemplified with artificial data and accompanied by a discussion of its applicability.
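A loose illustration of the testing idea follows: the observed number of exposed cases is evaluated under a binomial model whose Bernoulli parameter is estimated solely from the referent (comparison) series. The counts are hypothetical, and the paper's actual procedure (exact variance estimates, exact significance probabilities) differs in detail.

```python
# Sketch: unconditional binomial test of the exposed-case count, with the
# Bernoulli parameter taken from the referent series only (hypothetical counts).
from scipy.stats import binomtest

exposed_referent, n_referent = 30, 200      # exposure rate in the comparison series
p0 = exposed_referent / n_referent          # estimated Bernoulli parameter (0.15)

exposed_cases, n_cases = 45, 180            # observed exposure among cases
result = binomtest(exposed_cases, n_cases, p0, alternative="greater")
print(f"one-sided significance probability: {result.pvalue:.4f}")
```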

10.
The epidemiologic concept of the adjusted attributable risk is a useful approach to quantitatively describe the importance of risk factors on the population level. It measures the proportional reduction in disease probability when a risk factor is eliminated from the population, accounting for effects of confounding and effect-modification by nuisance variables. The computation of asymptotic variance estimates for estimates of the adjusted attributable risk is often done by applying the delta method. Investigations on the delta method have shown, however, that the delta method generally tends to underestimate the standard error, leading to biased confidence intervals. We compare confidence intervals for the adjusted attributable risk derived by applying computer-intensive methods like the bootstrap or jackknife to confidence intervals based on asymptotic variance estimates using an extensive Monte Carlo simulation and within a real data example from a cohort study in cardiovascular disease epidemiology. Our results show that confidence intervals based on bootstrap and jackknife methods outperform intervals based on asymptotic theory. Best variants of computer-intensive confidence intervals are indicated for different situations.
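The bootstrap approach favoured above can be sketched as follows for a cohort-type data set: the adjusted attributable risk is recomputed on each resample by refitting a logistic model and standardising its predictions to the no-exposure condition. This is a generic percentile-bootstrap illustration on simulated data, not the cardiovascular study's analysis, and the AR estimator used here is one common model-standardised choice rather than necessarily the paper's.

```python
# Sketch: percentile-bootstrap CI for a model-adjusted attributable risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 1500
c = rng.normal(size=n)                                             # confounder
e = rng.binomial(1, 1 / (1 + np.exp(-c)))                          # exposure
d = rng.binomial(1, 1 / (1 + np.exp(-(-2 + 1.2 * e + 0.8 * c))))   # disease

def adjusted_ar(e, c, d):
    X = np.column_stack([e, c])
    m = LogisticRegression().fit(X, d)
    p_obs = m.predict_proba(X)[:, 1]                               # risk as observed
    p_unexp = m.predict_proba(np.column_stack([np.zeros_like(e), c]))[:, 1]
    return 1 - p_unexp.mean() / p_obs.mean()                       # proportional risk reduction

boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(adjusted_ar(e[idx], c[idx], d[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"adjusted AR = {adjusted_ar(e, c, d):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")
```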

11.
Overwhelming evidence indicates that environmental exposures, broadly defined, are responsible for most cancer. There is reason to believe, however, that relatively common polymorphisms in a wide spectrum of genes may modify the effect of these exposures. We discuss the rationale for using common polymorphisms to enhance our understanding of how environmental exposures cause cancer and comment on epidemiologic strategies to assess these effects, including study design, genetic and statistical analysis, and sample size requirements. Special attention is given to sources of potential bias in population studies of gene-environment interactions, including exposure and genotype misclassification and population stratification (i.e., confounding by ethnicity). Nevertheless, by merging epidemiologic and molecular approaches in the twenty-first century, there will be enormous opportunities for unraveling the environmental determinants of cancer. In particular, studies of genetically susceptible subgroups may enable the detection of low levels of risk due to certain common exposures that have eluded traditional epidemiologic methods. Further, by identifying susceptibility genes and their pathways of action, it may be possible to identify previously unsuspected carcinogens. Finally, by gaining a more comprehensive understanding of environmental and genetic risk factors, there should emerge new clinical and public health strategies aimed at preventing and controlling cancer.

12.
The purpose of this study was to evaluate a Hill-based mathematical model of muscle energetics and to disclose inconsistencies in existing experimental data. For this purpose, we simulated iso-velocity contractions of mouse fast twitch EDL and slow twitch SOL fibers, and we compared the outcome to experimental results. The experimental results were extracted from two studies published in the literature, which were based on the same methodology but yielded different outcomes (B96 and B93). In the model, energy cost was modeled as the sum of heat and work. Parameters used to model heat rate were entirely independent of the experimental data-sets. Parameters describing the mechanical behavior were derived from both experimental studies. The model was found to accurately predict the muscle energetics and mechanical efficiency of data-set B96. The model could not, however, replicate the energetics and efficiency of SOL and EDL that were found in data-set B93. The model overestimated the shortening heat rate of EDL but, surprisingly, also the mechanical work rate for both muscles. This was surprising since mechanical characteristics of the model were derived directly from the experimental data. It was demonstrated that the inconsistencies in data-set B93 must have been due to some unexplained confounding artifact. It was concluded that the presented model of muscle energetics is valid for iso-velocity contractions of mammalian muscle since it accurately predicts experimental results of an independent data-set (B96). In addition, the model appeared to be helpful in revealing inconsistencies in a second data-set (B93).

13.
The stability of joint endoprostheses depends on the loading conditions to which the implant-bone complex is exposed. Due to a lack of appropriate muscle force data, less complex loading conditions tend to be considered in vitro. The goal of this study was to develop a load profile that better simulates the in vivo loading conditions of a "typical" total hip replacement patient and considers the interdependence of muscle and joint forces. The development of the load profile was based on a computer model of the lower extremities that has been validated against in vivo data. This model was simplified by grouping functionally similar hip muscles. Muscle and joint contact forces were computed for an average data set of up to four patients throughout walking and stair climbing. The calculated hip contact forces were compared to the average of the in vivo measured forces. The final derived load profile included the forces of up to four muscles at the instances of maximum in vivo hip joint loading during both walking and stair climbing. The hip contact forces differed by less than 10% from the peak in vivo value for a "typical" patient. The derived load profile presented here is the first that is based on validated musculoskeletal analyses and seems achievable in an in vitro test set-up. It should therefore form the basis for further standardisation of pre-clinical testing by providing a more realistic approximation of physiological loading conditions.

14.
Epidemiology is defined as the study of the distribution and determinants of disease within populations. In addition to the requirements for disease surveillance, epidemiologic methods have numerous applications in laboratory animal science and can reveal important insights into the multifactorial mechanisms of disease, thereby aiding in the design of optimized intervention strategies. Observational approaches to data collection can be used to quantify the role of causal factors under natural circumstances, complementing the value of experimental studies in this field. The meaning and appropriate use of standard measures of disease frequency and exposure-disease relationships are reviewed, along with explanations of bias and confounding. Recommendations for reporting the methods and findings from this type of work in comparative medicine literature are presented. Aspects of model-based approaches to data analysis are introduced, offering further opportunities for gaining needed information from epidemiologic study of problems in laboratory animal medicine and management.

15.

Objective

Recent studies have shown the relevance of cerebral grey matter involvement in multiple sclerosis (MS). The number of new cortical lesions (CLs), detected by specific MRI sequences, has the potential to become a new research outcome in longitudinal MS studies. The aim of this study is to define the statistical model that best describes the distribution of new CLs developed over 12 and 24 months in patients with relapsing-remitting (RR) MS.

Methods

Four different models were tested (the Poisson, the Negative Binomial, the zero-inflated Poisson and the zero-inflated Negative Binomial) on a group of 191 RRMS patients untreated or treated with 3 different disease-modifying therapies. Sample sizes for clinical trials based on this new outcome measure were estimated by a bootstrap resampling technique.

Results

The zero-inflated Poisson model gave the best fit, according to the Akaike criterion, to the observed distribution of new CLs developed over 12 and 24 months, both in each treatment group and in the whole RRMS patient group after adjusting for treatment effect.

Conclusions

The sample size calculations based on the zero-inflated Poisson model indicate that randomized clinical trials using this new MRI marker as an outcome are feasible.
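As a rough illustration of the model comparison described above, the sketch below fits Poisson, negative binomial and their zero-inflated counterparts to a vector of lesion counts and ranks them by AIC. The counts are simulated with excess zeros, and the statsmodels-based code is an assumed tooling choice, not the authors' software.

```python
# Sketch: compare Poisson, NB, ZIP and ZINB fits to new-lesion counts by AIC
# (simulated counts with excess zeros; not the study data).
import numpy as np
from statsmodels.discrete.discrete_model import Poisson, NegativeBinomial
from statsmodels.discrete.count_model import ZeroInflatedPoisson, ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(3)
n = 191
structural_zero = rng.binomial(1, 0.4, n)                 # patients developing no new CLs
counts = np.where(structural_zero == 1, 0, rng.poisson(2.0, n))

X = np.ones((n, 1))                                       # intercept-only models
models = {
    "Poisson": Poisson(counts, X),
    "NegBin": NegativeBinomial(counts, X),
    "ZIP": ZeroInflatedPoisson(counts, X, exog_infl=X),
    "ZINB": ZeroInflatedNegativeBinomialP(counts, X, exog_infl=X),
}
for name, model in models.items():
    res = model.fit(method="bfgs", maxiter=500, disp=0)
    print(f"{name:7s} AIC = {res.aic:.1f}")                # lower AIC = better fit
```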

16.
Tea and tea compounds have been shown to inhibit carcinogenic processes in experimental animals, raising the possibility that tea drinking may lower cancer risk in humans. However, epidemiologic studies have produced inconsistent evidence on the relation between tea drinking and cancer risk. Ecological data show considerable international variation in tea consumption but relatively small differences in cancer rates. Results from case-control and cohort studies also are inconclusive. Nevertheless, high consumption of tea has been linked to a reduced risk of digestive tract cancers in a number of epidemiologic studies. A lack of detailed information on duration and amount of tea drinking, a narrow range of tea intake in some study populations, inadequate control for confounding, and potential biases in recall and reporting of tea drinking patterns in case-control studies may have contributed to the diverse findings. Further research is needed before definitive conclusions on tea's impact upon cancer risk in humans can be reached.

17.
Restricted mean lifetime is often of direct interest in epidemiologic studies involving censored survival times. Differences in this quantity can be used as a basis for comparing several groups. For example, transplant surgeons, nephrologists, and of course patients are interested in comparing posttransplant lifetimes among various types of kidney transplants to assist in clinical decision making. As the factor of interest is not randomized, covariate adjustment is needed to account for imbalances in confounding factors. In this report, we use semiparametric theory to develop an estimator for differences in restricted mean lifetimes while accounting for confounding factors. The proposed method involves building working models for the time-to-event and coarsening mechanism (i.e., group assignment and censoring). We show that the proposed estimator possesses the double robust property; i.e., when either the time-to-event or coarsening process is modeled correctly, the estimator is consistent and asymptotically normal. Simulation studies are conducted to assess its finite-sample performance and the method is applied to national kidney transplant data.
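For readers who want to see the target quantity, the sketch below computes an unadjusted restricted-mean-lifetime difference between two groups with the lifelines package. It illustrates only the estimand, not the doubly robust, confounder-adjusted estimator developed in the paper; the data are simulated and the group labels are hypothetical.

```python
# Sketch: unadjusted difference in restricted mean lifetime up to tau = 60 months
# (simulated survival data; illustrates the estimand only, not the DR estimator).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(4)
n = 500
group = rng.binomial(1, 0.5, n)                        # e.g., two transplant types
t_event = rng.exponential(scale=np.where(group == 1, 80, 60))
censor = rng.exponential(scale=100, size=n)
time = np.minimum(t_event, censor)
event = (t_event <= censor).astype(int)

tau = 60.0
rmst = {}
for g in (0, 1):
    km = KaplanMeierFitter().fit(time[group == g], event[group == g])
    rmst[g] = restricted_mean_survival_time(km, t=tau)  # area under KM curve up to tau
print(f"RMST difference (group 1 - group 0) up to {tau} months: {rmst[1] - rmst[0]:.1f}")
```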

18.
A parametric nonorthogonal tight-binding model (NTBM1) with a set of parameters for H–C–N–O systems is presented. This model compares well with the widely used semi-empirical AM1 and PM3/PM7 models but contains fewer fitting parameters per atom. All NTBM1 parameters are derived from a criterion of best agreement between calculated and experimental values of bond lengths, valence angles and binding energies for various H–C–N–O molecules. Results for more than 200 chemical compounds are reported. Parameters are currently available for hydrogen, carbon, nitrogen, and oxygen atoms and the corresponding interatomic interactions. The model has good transferability and can be used both for relaxation of large molecular systems (e.g., high-molecular-weight compounds or covalent cluster complexes) and for long-timescale molecular dynamics simulation (e.g., modelling of thermal decomposition processes). The program package based on this model is available for download at no cost from http://ntbm.info.

19.
EMG-driven musculoskeletal modeling is a method by which loading on the active and passive structures of the cervical spine may be investigated. A model of the cervical spine exists; however, it has yet to be criterion validated. Furthermore, neck muscle morphometry in this model was derived from elderly cadavers, threatening model validity. Therefore, the overall aim of this study was to modify and criterion validate this preexisting graphically based musculoskeletal model of the cervical spine. Five male subjects with no neck pain participated in this study. The study consisted of three parts. First, subject-specific neck muscle morphometry data were derived by using magnetic resonance imaging. Second, EMG drive for the model was generated from both surface (Drive 1: N=5) and surface and deep muscles (Drive 2: N=3). Finally, to criterion validate the modified model, net moments predicted by the model were compared against net moments measured by an isokinetic dynamometer in both maximal and submaximal isometric contractions with the head in the neutral posture, 20 deg of flexion, and 35 deg of extension. Neck muscle physiological cross-sectional area values were greater in this study when compared to previously reported data. Predictions of neck torque by the model were better in flexion (18.2% coefficient of variation (CV)) than in extension (28.5% CV), and using indwelling EMG did not enhance model predictions. There were, however, large variations in predictions when all the contractions were compared. It is our belief that further work needs to be done to improve the validity of the modified EMG-driven neck model examined in this study. A number of factors could potentially improve the model, the most promising probably being optimization of various modeling parameters using methods established by previous researchers investigating other joints of the body.

20.
Nanny Wermuth, D. R. Cox. Biometrika, 2008, 95(1): 17-33
Undetected confounding may severely distort the effect of an explanatory variable on a response variable, as defined by a stepwise data-generating process. The best known type of distortion, which we call direct confounding, arises from an unobserved explanatory variable common to a response and its main explanatory variable of interest. It is relevant mainly for observational studies, since it is avoided by successful randomization. By contrast, indirect confounding, which we identify in this paper, is an issue also for intervention studies. For general stepwise generating processes, we provide matrix and graphical criteria to decide which types of distortion may be present, when they are absent and how they are avoided. We then turn to linear systems without other types of distortion, but with indirect confounding. For such systems, the magnitude of distortion in a least-squares regression coefficient is derived and shown to be estimable, so that it becomes possible to recover the effect of the generating process from the distorted coefficient.
