11.
Bennett J, Wakefield J. Biometrics 2001, 57(3): 803-812
Pharmacokinetic (PK) models describe the relationship between the administered dose and the concentration of drug (and/or metabolite) in the blood as a function of time. Pharmacodynamic (PD) models describe the relationship between the concentration in the blood (or the dose) and the biologic response. Population PK/PD studies aim to determine the sources of variability in the observed concentrations/responses across groups of individuals. In this article, we consider the joint modeling of PK/PD data. The natural approach is to specify a joint model in which the concentration and response data are simultaneously modeled. Unfortunately, this approach may not be optimal if, due to sparsity of concentration data, an overly simple PK model is specified. As an alternative, we propose an errors-in-variables approach in which the observed-concentration data are assumed to be measured with error without reference to a specific PK model. We give an example of an analysis of PK/PD data obtained following administration of an anticoagulant drug. The study was originally carried out in order to make dosage recommendations. The prior for the distribution of the true concentrations, which may incorporate an individual's covariate information, is derived as a predictive distribution from an earlier study. The errors-in-variables approach is compared with the joint modeling approach and more naive methods in which the observed concentrations, or the separately modeled concentrations, are substituted into the response model. Throughout, a Bayesian approach is taken with implementation via Markov chain Monte Carlo methods.
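The PK and PD components described above can be sketched with standard textbook forms. The one-compartment oral-absorption model, the Emax response curve, and all parameter values below are illustrative assumptions, not the models or data of the paper:

```python
import math
import random

def conc_one_compartment(t, dose=100.0, ka=1.0, ke=0.1, V=10.0):
    """True concentration at time t under a hypothetical one-compartment
    oral model (absorption rate ka, elimination rate ke, volume V)."""
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def emax_response(c, e0=5.0, emax=50.0, ec50=4.0):
    """Hypothetical Emax pharmacodynamic model: response rises from the
    baseline e0 toward e0 + emax as concentration increases."""
    return e0 + emax * c / (ec50 + c)

random.seed(1)
times = [0.5, 1.0, 2.0, 4.0, 8.0, 12.0]
true_c = [conc_one_compartment(t) for t in times]
# Observed concentrations carry multiplicative measurement error,
# mirroring the errors-in-variables view of the concentration data.
obs_c = [c * math.exp(random.gauss(0.0, 0.15)) for c in true_c]
responses = [emax_response(c) for c in true_c]
```

In the errors-in-variables approach, the analysis would treat `obs_c` as noisy measurements of unknown true concentrations rather than plugging them directly into the response model.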
12.
Exposure to infection information is important for estimating vaccine efficacy, but it is difficult to collect and prone to missingness and mismeasurement. We discuss study designs that collect detailed exposure information from only a small subset of participants while collecting crude exposure information from all participants and treat estimation of vaccine efficacy in the missing data/measurement error framework. We extend the discordant partner design for HIV vaccine trials of Golm, Halloran, and Longini (1998, Statistics in Medicine 17, 2335-2352) to the more complex augmented trial design of Longini, Datta, and Halloran (1996, Journal of Acquired Immune Deficiency Syndromes and Human Retrovirology 13, 440-447) and Datta, Halloran, and Longini (1998, Statistics in Medicine 17, 185-200). The model for this design includes three exposure covariates and both univariate and bivariate outcomes. We adapt recently developed semiparametric missing data methods of Reilly and Pepe (1995, Biometrika 82, 299-314), Carroll and Wand (1991, Journal of the Royal Statistical Society, Series B 53, 573-585), and Pepe and Fleming (1991, Journal of the American Statistical Association 86, 108-113) to the augmented vaccine trial design. We demonstrate with simulated HIV vaccine trial data the improvements in bias and efficiency when combining the different levels of exposure information to estimate vaccine efficacy for reducing both susceptibility and infectiousness. We show that the semiparametric methods estimate both efficacy parameters without bias when the good exposure information is either missing completely at random or missing at random. The pseudolikelihood method of Carroll and Wand (1991) and Pepe and Fleming (1991) was the more efficient of the two semiparametric methods.
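As a toy illustration of the two-level exposure idea (not the augmented trial design itself), the sketch below simulates a hypothetical trial in which detailed exposure is observed only in a validation subset; all probabilities are invented:

```python
import random

random.seed(2)

# Hypothetical two-phase sample: everyone has a crude exposure category;
# only a validation subset has the detailed (true) exposure measurement.
population = []
for _ in range(2000):
    vaccinated = random.random() < 0.5
    exposed = random.random() < 0.3          # true (detailed) exposure
    p_inf = (0.05 if vaccinated else 0.20) if exposed else 0.0
    infected = random.random() < p_inf
    validated = random.random() < 0.2        # detailed exposure observed
    population.append((vaccinated, exposed, infected, validated))

def attack_rate(vaccinated_flag):
    """Attack rate among validated, truly exposed subjects: a complete-case
    analog of using only the good exposure information."""
    sub = [p for p in population
           if p[0] == vaccinated_flag and p[3] and p[1]]
    return sum(p[2] for p in sub) / len(sub)

# Vaccine efficacy for susceptibility: 1 - relative attack rate.
ve = 1.0 - attack_rate(True) / attack_rate(False)
```

The semiparametric methods in the paper go further, combining this validated subset with the crude exposure information from everyone to gain efficiency.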
13.
Conductance and relaxations of gelatin films in glassy and rubbery states
The dielectric constant, ε′, and the dielectric loss, ε″, for gelatin films were measured in the glassy and rubbery states over a frequency range from 20 Hz to 10 MHz; ε′ and ε″ were transformed into the M* formalism (M* = 1/(ε′ − iε″) = M′ + iM″; i, the imaginary unit). The peak of ε″ was masked, probably due to DC conduction, but the peak of M″, i.e. the conductivity relaxation, for the gelatin used was observed. By fitting the M″ data to a Havriliak–Negami-type equation, the relaxation time, τHN, was evaluated. The value of the activation energy, Eτ, evaluated from an Arrhenius plot of 1/τHN agreed well with that of Eσ evaluated from the DC conductivity σ0 in both the glassy and rubbery states, indicating that the conductivity relaxation observed for the gelatin films is attributable to ionic conduction. The value of the activation energy in the glassy state was larger than that in the rubbery state.
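The two computations in this abstract, the transformation to the electric modulus M* and the Arrhenius estimate of an activation energy, can be sketched as follows; the numerical inputs in any example run are hypothetical, not the gelatin data:

```python
import math

def modulus_from_permittivity(eps_real, eps_imag):
    """Electric modulus M* = 1/(eps' - i*eps'') = M' + i*M''."""
    m = 1.0 / complex(eps_real, -eps_imag)
    return m.real, m.imag

def activation_energy(temps_K, taus):
    """Arrhenius estimate: the slope of ln(1/tau) versus 1/T is -Ea/R,
    so a least-squares line through those points recovers Ea (J/mol)."""
    R = 8.314  # gas constant, J/(mol K)
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(1.0 / tau) for tau in taus]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope * R
```

Equality of the Eτ and Eσ estimates, as reported above, is the signature that the relaxation and the DC conduction share the same (ionic) mechanism.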
14.
We present the application of a nonparametric method to performing functional principal component analysis for functional curve data that consist of measurements of a random trajectory for a sample of subjects. This design typically consists of an irregular grid of time points on which repeated measurements are taken for a number of subjects. We introduce shrinkage estimates for the functional principal component scores that serve as the random effects in the model. Scatterplot smoothing methods are used to estimate the mean function and covariance surface of this model. We propose improved estimation in the neighborhood of and at the diagonal of the covariance surface, where the measurement errors are reflected. The presence of additive measurement errors motivates shrinkage estimates for the functional principal component scores. Shrinkage estimates are developed through best linear prediction and in a generalized version, aiming at minimizing one-curve-leave-out prediction error. The estimation of individual trajectories combines data obtained from that individual as well as all other individuals. We apply our methods to new data regarding the analysis of the level of 14C-folate in plasma as a function of time since dosing of healthy adults with a small tracer dose of 14C-folic acid. A time transformation was incorporated to handle design irregularity concerning the time points on which the measurements were taken. The proposed methodology, incorporating shrinkage and data-adaptive features, is seen to be well suited for describing population kinetics of 14C-folate-specific activity and random effects, and can also be applied to other functional data analysis problems.
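Best-linear-prediction shrinkage of a principal component score has a simple closed form when the eigenvalue and noise variance are treated as known; the factor below is a generic sketch of that idea, with all inputs hypothetical:

```python
def shrink_score(raw_score, eigenvalue, noise_var, n_points):
    """Pull a raw functional PC score toward the population mean (zero):
    the shrinkage factor is the signal variance over the total variance
    of the score estimate, so noisier or sparser curves shrink more."""
    return raw_score * eigenvalue / (eigenvalue + noise_var / n_points)

dense  = shrink_score(1.5, 2.0, 0.5, 50)  # densely sampled curve: mild shrinkage
sparse = shrink_score(1.5, 2.0, 0.5, 3)   # sparse curve: stronger shrinkage
```

With no measurement error the factor is 1 and the raw projection is kept; as noise grows or the grid thins, the score is pulled toward zero, which is the borrowing of strength across subjects described above.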
15.
Dunson DB, Watson M, Taylor JA. Biometrics 2003, 59(2): 296-304
Often a response of interest cannot be measured directly and it is necessary to rely on multiple surrogates, which can be assumed to be conditionally independent given the latent response and observed covariates. Latent response models typically assume that residual densities are Gaussian. This article proposes a Bayesian median regression modeling approach, which avoids parametric assumptions about residual densities by relying on an approximation based on quantiles. To accommodate within-subject dependency, the quantile response categories of the surrogate outcomes are related to underlying normal variables, which depend on a latent normal response. This underlying Gaussian covariance structure simplifies interpretation and model fitting, without restricting the marginal densities of the surrogate outcomes. A Markov chain Monte Carlo algorithm is proposed for posterior computation, and the methods are applied to single-cell electrophoresis (comet assay) data from a genetic toxicology study.
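The appeal of median over mean regression when residuals are non-Gaussian can be seen in a one-dimensional toy example (unrelated to the comet-assay data): the median minimizes absolute loss and is insensitive to a heavy-tailed outlier, whereas the mean, which Gaussian-residual models target, is not:

```python
def l1_loss(data, m):
    """Total absolute deviation of the data from a candidate center m."""
    return sum(abs(x - m) for x in data)

data = [1.0, 2.0, 2.5, 3.0, 100.0]   # one heavy right outlier
# Grid-search the L1 minimizer: it lands on the median (2.5),
# far from the outlier-dominated mean (21.7).
candidates = [x / 10 for x in range(0, 1100)]
m_hat = min(candidates, key=lambda m: l1_loss(data, m))
```

The paper's quantile-based approximation carries this robustness into a full latent-response regression model rather than a single location estimate.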
16.
We construct Bayesian methods for semiparametric modeling of a monotonic regression function when the predictors are measured with classical error, Berkson error, or a mixture of the two. Such methods require a distribution for the unobserved (latent) predictor, a distribution we also model semiparametrically. Such combinations of semiparametric methods for the dose response as well as the latent variable distribution have not been considered in the measurement error literature for any form of measurement error. In addition, our methods represent a new approach to those problems where the measurement error combines Berkson and classical components. While the methods are general, we develop them around a specific application, namely, the study of thyroid disease in relation to radiation fallout from the Nevada test site. We use these data to illustrate our methods, which suggest a point estimate (posterior mean) of relative risk at high doses nearly double that of previous analyses but that also suggest much greater uncertainty in the relative risk.
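The distinction between the two error types the abstract combines can be sketched in a few lines; the noise level is an arbitrary assumption:

```python
import random

random.seed(3)

def classical_error(x_true, sd=0.5):
    """Classical error: the measurement W = X + U scatters around the
    TRUE value, so W is more variable than X."""
    return x_true + random.gauss(0.0, sd)

def berkson_error(w_assigned, sd=0.5):
    """Berkson error: the true value X = W + U scatters around the
    ASSIGNED value (e.g. a dose reconstructed from fallout records),
    so X is more variable than W."""
    return w_assigned + random.gauss(0.0, sd)
```

Both error types are unbiased around their respective anchors, but they bias regression estimates differently, which is why a mixture of the two needs the dedicated treatment the paper develops.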
17.
Xue QL, Bandeen-Roche K. Biometrics 2002, 58(1): 110-120
This work was motivated by the need to combine outcome information from a reference population with risk factor information from a screened subpopulation in a setting where the analytic goal was to study the association between risk factors and multiple binary outcomes. To achieve such an analytic goal, this article proposes a two-stage latent class procedure that first summarizes the commonalities among outcomes using a reference population sample, then analyzes the association between outcomes and risk factors. It develops a pseudo-maximum likelihood approach to estimating model parameters. The performance of the proposed method is evaluated in a simulation study and in an illustrative analysis of data from the Women's Health and Aging Study, a recent investigation of the causes and course of disability in older women. Combining information in the proposed way is found to improve both accuracy and precision in summarizing multiple categorical outcomes, which effectively diminishes ambiguity and bias in making risk factor inferences.
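The first stage of such a procedure summarizes multiple binary outcomes through class-membership probabilities; the sketch below computes them by Bayes' rule for a hypothetical two-class, three-outcome model (all probabilities invented, not the Women's Health and Aging Study model):

```python
# Hypothetical 2-class latent class model for 3 binary outcomes:
# class prevalences and per-class item-response probabilities.
pi = [0.6, 0.4]
item_p = [[0.9, 0.8, 0.7],   # class 0: high-functioning response profile
          [0.2, 0.3, 0.1]]   # class 1: disabled response profile

def posterior_class(y):
    """P(class | outcome pattern y) by Bayes' rule, using conditional
    independence of the outcomes given class membership."""
    joint = []
    for c in range(2):
        p = pi[c]
        for yj, pj in zip(y, item_p[c]):
            p *= pj if yj else 1.0 - pj
        joint.append(p)
    total = sum(joint)
    return [j / total for j in joint]
```

The second stage would then regress these class summaries on risk factors, with pseudo-maximum likelihood accounting for the two-stage estimation.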
18.
Hierarchical modeling is becoming increasingly popular in epidemiology, particularly in air pollution studies. When potential confounding exists, a multilevel model yields better power to assess the independent effects of each predictor by gathering evidence across many sub-studies. If the predictors are measured with unknown error, bias can be expected in the individual substudies, and in the combined estimates of the second-stage model. We consider two alternative methods for estimating the independent effects of two predictors in a hierarchical model. We show both analytically and via simulation that one of these gives essentially unbiased estimates even in the presence of measurement error, at the price of a moderate reduction in power. The second avoids the potential for upward bias, at the price of a smaller reduction in power. Since measurement error is endemic in epidemiology, these approaches hold considerable potential. We illustrate the two methods by applying them to two air pollution studies. In the first, we re-analyze published data to show that the estimated effect of fine particles on daily deaths, independent of coarse particles, was downwardly biased by measurement error in the original analysis. The estimated effect of coarse particles becomes more protective using the new estimates. In the second example, we use published data on the association between airborne particles and daily deaths in 10 US cities to estimate the effect of gaseous air pollutants on daily deaths. The resulting effect size estimates were very small and the confidence intervals included zero.
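The second-stage combination across sub-studies is, in its simplest fixed-effects form, an inverse-variance weighted average; this generic sketch omits the measurement-error corrections that are the paper's focus:

```python
def pool_estimates(betas, variances):
    """Second-stage fixed-effects pooling: each city-specific estimate is
    weighted by the inverse of its variance, so precise sub-studies
    dominate; returns the pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    beta = sum(w * b for w, b in zip(weights, betas)) / wsum
    return beta, 1.0 / wsum
```

Measurement error inflates the city-specific variances and biases the betas, which is why the combined estimate inherits the bias unless one of the corrected approaches above is used.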
19.
Pattee HH. BioSystems 2001, 60(1-3): 5-21
Evolution requires the genotype–phenotype distinction, a primeval epistemic cut that separates energy-degenerate, rate-independent genetic symbols from the rate-dependent dynamics of construction that they control. This symbol–matter or subject–object distinction occurs at all higher levels where symbols are related to a referent by an arbitrary code. The converse of control is measurement in which a rate-dependent dynamical state is coded into quiescent symbols. Non-integrable constraints are one necessary condition for bridging the epistemic cut by measurement, control, and coding. Additional properties of heteropolymer constraints are necessary for biological evolution.
20.
Multistate models have been increasingly used to model the natural history of many diseases as well as to characterize the follow-up of patients under varied clinical protocols. This modeling makes it possible to describe disease evolution, estimate the transition rates, and evaluate therapy effects on progression. In many cases, the staging is defined by discretizing the values of continuous markers (CD4 cell count in the HIV application) that are subject to great variability, due mainly to short time-scale noise (intraindividual variability) and measurement errors. This led us to formulate a Bayesian hierarchical model where, at a first level, a disease process (a Markov model on the true states, which are unobserved) is introduced and, at a second level, the measurement process making the link between the true states and the observed marker values is modeled. This hierarchical formulation allows joint estimation of the parameters of both processes. Estimation of the quantities of interest is performed via stochastic algorithms of the family of Markov chain Monte Carlo methods. The flexibility of this approach is illustrated by analyzing the CD4 data on HIV patients of the Concorde clinical trial.
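The two-level structure, a latent Markov disease process plus a Gaussian measurement process on the marker, can be illustrated with a small forward-algorithm likelihood; the states, transition matrix, and noise level below are invented for illustration, not the Concorde CD4 model:

```python
import math

# Hypothetical 3-state progressive disease process (CD4 strata) with
# Gaussian measurement error linking true state to observed counts.
states = [500.0, 300.0, 100.0]          # mean CD4 count per latent state
trans = [[0.90, 0.09, 0.01],
         [0.00, 0.90, 0.10],
         [0.00, 0.00, 1.00]]            # progression only, no recovery
sigma = 80.0                            # assumed measurement-error SD

def emission(obs, state_mean):
    """Gaussian density of an observed count given the latent state."""
    z = (obs - state_mean) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def forward_loglik(observations, init=(1.0, 0.0, 0.0)):
    """Log-likelihood of the observed marker sequence, marginalizing
    over the unobserved state path (scaled forward algorithm)."""
    alpha = [p * emission(observations[0], m) for p, m in zip(init, states)]
    s = sum(alpha)
    loglik = math.log(s)
    alpha = [a / s for a in alpha]
    for obs in observations[1:]:
        alpha = [emission(obs, states[j]) *
                 sum(alpha[i] * trans[i][j] for i in range(3))
                 for j in range(3)]
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]
    return loglik
```

The Bayesian hierarchical model in the paper estimates the transition and measurement parameters jointly by MCMC; this sketch only evaluates the likelihood for fixed parameters.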