21.
We discuss the strengths and weaknesses of the meta-analytic approach to estimating the effect of a new treatment on a true clinical outcome measure, T, from the effect of treatment on a surrogate response, S. The meta-analytic approach (see Daniels and Hughes (1997), Statistics in Medicine 16, 1965-1982) uses data from a series of previous studies of interventions similar to the new treatment. The data are used to estimate relationships between summary measures of treatment effects on T and S that can be used to infer the magnitude of the effect of the new treatment on T from its effects on S. We extend the class of models to cover a broad range of applications in which the parameters define features of the marginal distribution of (T, S). We present a new bootstrap procedure to allow for the variability in estimating the distribution that governs the between-study variation. Ignoring this variability can lead to confidence intervals that are much too narrow. The meta-analytic approach relies on quite different data and assumptions from procedures that depend, for example, on the conditional independence, at the individual level, of treatment and T, given S (see Prentice (1989), Statistics in Medicine 8, 431-440). Meta-analytic calculations in this paper can be used to determine whether a new study, based only on S, will yield estimates of the treatment effect on T that are precise enough to be useful. Compared to direct measurement of T, the meta-analytic approach has a number of limitations, including a likely serious loss of precision and difficulties in defining the class of previous studies to be used to predict the effects on T for a new intervention.
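The key point above, that uncertainty in the estimated between-study relationship must itself be propagated into the interval, can be sketched with a study-level bootstrap. This is a minimal illustration on hypothetical study-level data, not the paper's model: whole studies are resampled, so the variability in the fitted relationship between effects on S and effects on T is reflected in the interval instead of being ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summary data from K previous studies: estimated treatment
# effects on the surrogate (beta_S) and on the true endpoint (beta_T).
K = 12
beta_S = rng.normal(1.0, 0.5, K)
beta_T = 0.8 * beta_S + rng.normal(0, 0.2, K)  # true slope 0.8, assumed here

def predict_slope(bs, bt):
    """Least-squares slope relating study-level effects on T to effects on S."""
    X = np.column_stack([np.ones_like(bs), bs])
    return np.linalg.lstsq(X, bt, rcond=None)[0][1]

# Bootstrap over *studies*: each resample re-estimates the between-study
# relationship, so its estimation uncertainty widens the interval.
boot = []
for _ in range(2000):
    idx = rng.integers(0, K, K)
    boot.append(predict_slope(beta_S[idx], beta_T[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

A confidence interval for the predicted effect on T that conditioned on a single fitted slope would omit the spread seen in `boot` and, as the abstract warns, come out much too narrow.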
22.
Zhou XH, Tu W. Biometrics, 2000, 56(4): 1118-1125.
In this paper, we consider the problem of interval estimation for the mean of diagnostic test charges. Diagnostic test charge data may contain zero values, and the nonzero values can often be modeled by a log-normal distribution. Under such a model, we propose three different interval estimation procedures: a percentile-t bootstrap interval based on sufficient statistics and two likelihood-based confidence intervals. For theoretical properties, we show that the two likelihood-based one-sided confidence intervals are only first-order accurate and that the bootstrap-based one-sided confidence interval is second-order accurate. For two-sided confidence intervals, all three proposed methods are second-order accurate. A simulation study with finite sample sizes suggests that all three proposed intervals outperform a widely used interval based on the minimum variance unbiased estimator (MVUE), except for one-sided lower end-point intervals when the skewness is very small. Among the proposed one-sided intervals, the bootstrap interval has the best coverage accuracy. For the two-sided intervals, when the sample size is small, the bootstrap method still yields the best coverage accuracy unless the skewness is very small, in which case the bias-corrected ML method has the best accuracy. When the sample size is large, all three proposed intervals have similar coverage accuracy. Finally, we apply the proposed methods to a real example assessing diagnostic test charges among older adults with depression.
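The percentile-t idea can be sketched as follows. This is a plain nonparametric percentile-t interval on simulated, hypothetical zero-inflated charge data; the paper's interval is built on the sufficient statistics of the log-normal model, which this sketch does not attempt to reproduce. The defining feature is that the bootstrap resamples the studentized statistic rather than the mean itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical charge data: a point mass at zero plus log-normal positives.
n = 200
zero = rng.random(n) < 0.25
x = np.where(zero, 0.0, rng.lognormal(mean=4.0, sigma=1.0, size=n))

def studentized(sample):
    """Return the sample mean and its estimated standard error."""
    return sample.mean(), sample.std(ddof=1) / np.sqrt(len(sample))

mean_hat, se_hat = studentized(x)

# Percentile-t: bootstrap the t-statistic, not the mean itself.
t_stars = []
for _ in range(2000):
    xb = rng.choice(x, size=n, replace=True)
    mb, sb = studentized(xb)
    t_stars.append((mb - mean_hat) / sb)
t_lo, t_hi = np.percentile(t_stars, [2.5, 97.5])

# Invert the bootstrap t-distribution to get the two-sided interval.
ci = (mean_hat - t_hi * se_hat, mean_hat - t_lo * se_hat)
```

Because the bootstrap t-distribution is allowed to be asymmetric, the resulting interval adapts to the skewness that makes normal-theory intervals inaccurate for charge data.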
23.
Concordet D, Nunez OG. Biometrics, 2000, 56(4): 1040-1046.
We propose calibration methods for nonlinear mixed effects models. Using an estimator whose asymptotic properties are known, four different statistics are used to perform the calibration. Simulations are carried out to compare the performance of these statistics. Finally, the methods are applied to real data to predict the milk discard time of an antibiotic, the problem that motivated this study.
24.
To examine the time-dependent effects of exposure histories on disease, we estimate a weight function within a generalized linear model. The shape of the weight function, which is modeled as a cubic B-spline, gives information about the impact of exposure increments at different times on disease risk. The method is evaluated in a simulation study and is applied to data on smoking histories and lung cancer from a recent case-control study in Germany.
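The weighted-exposure construction can be sketched as below: a cubic B-spline weight function w(lag) is evaluated via the Cox-de Boor recursion and applied to yearly exposure increments. The knot placement, coefficients, and exposure history are all hypothetical illustrations, not values from the study; in the actual method the coefficients would be estimated inside the generalized linear model.

```python
def bspline_basis(u, i, k, t):
    """Cox-de Boor recursion: B-spline basis function i of degree k on knots t."""
    if k == 0:
        return 1.0 if t[i] <= u < t[i + 1] else 0.0
    left = 0.0 if t[i + k] == t[i] else \
        (u - t[i]) / (t[i + k] - t[i]) * bspline_basis(u, i, k - 1, t)
    right = 0.0 if t[i + k + 1] == t[i + 1] else \
        (t[i + k + 1] - u) / (t[i + k + 1] - t[i + 1]) * bspline_basis(u, i + 1, k - 1, t)
    return left + right

# Clamped cubic spline on a 0-40 year lag window (hypothetical knots).
knots = [0, 0, 0, 0, 10, 20, 30, 40, 40, 40, 40]
theta = [0.0, 0.6, 1.0, 0.7, 0.3, 0.1, 0.0]  # hypothetical coefficients

def w(lag):
    """Weight given to an exposure increment at the stated lag."""
    return sum(c * bspline_basis(lag, i, 3, knots) for i, c in enumerate(theta))

# Weighted cumulative exposure for one subject evaluated at age 60:
# one unit of exposure per year from age 20 to 59, weighted by lag.
ages = range(20, 60)
weighted_exposure = sum(w(60 - a) * 1.0 for a in ages)
```

The shape of `theta` determines which lags matter: here recent and very distant exposures get little weight, mirroring how the fitted weight function is read off in the abstract.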
25.
Chen JJ, Lin KK, Huque M, Arani RB. Biometrics, 2000, 56(2): 586-592.
A typical animal carcinogenicity experiment routinely analyzes approximately 10-30 tumor sites. Comparisons of tumor responses between dosed and control groups and dose-related trend tests are often evaluated for each individual tumor site/type separately. p-Value adjustment approaches have been proposed for controlling the overall Type I error rate or familywise error rate (FWE). However, these adjustments often reduce the power to detect a dose effect. This paper proposes weighted adjustments that assume each tumor can be classified as either class A or class B based on prior considerations. The tumors in class A, which are considered more critical endpoints, are given less adjustment. Two weighted methods of adjustment are presented: the weighted p adjustment and the weighted alpha adjustment. A Monte Carlo simulation shows that both weighted adjustments control the FWE well. Furthermore, the power increases if a treatment-dependent tumor is analyzed as a class A tumor and decreases if it is analyzed as a class B tumor. A data set from a National Toxicology Program (NTP) 2-year animal carcinogenicity experiment with 13 tumor types/sites observed in male mice was analyzed using the proposed methods. The modified poly-3 test was used to test for increased carcinogenicity, since it has been adopted by the NTP as a standard test for a dose-related trend. The unweighted adjustment analysis concluded that there was no statistically significant dose-related trend. Using the Food and Drug Administration classification scheme for the weighted adjustment analyses, two rare tumors (with background rates of 1% or less) were analyzed as class A tumors and 11 common tumors (with background rates higher than 1%) as class B. Both weighted analyses showed a significant dose-related trend for one rare tumor.
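The weighted-alpha idea can be illustrated with a weighted Bonferroni allocation: the per-tumor significance thresholds sum to the familywise alpha, so the FWE is controlled, while class A tumors receive a larger share. The tumor names, p-values, and weights below are hypothetical, and this sketch shows only the allocation principle, not the paper's exact weighted-p procedure.

```python
# Hypothetical trend-test p-values; class A (rare, critical) tumors get a
# larger share of the familywise alpha than class B (weighted Bonferroni).
alpha = 0.05
tumors = {
    "rare_liver":   ("A", 0.004),
    "rare_kidney":  ("A", 0.030),
    "common_lung":  ("B", 0.020),
    "common_skin":  ("B", 0.003),
    "common_liver": ("B", 0.200),
}
weights = {"A": 4.0, "B": 1.0}
total = sum(weights[cls] for cls, _ in tumors.values())  # 4 + 4 + 1 + 1 + 1 = 11

# Per-tumor thresholds sum to alpha, so the familywise error is controlled.
threshold = {name: alpha * weights[cls] / total
             for name, (cls, _) in tumors.items()}
significant = {name: p <= threshold[name]
               for name, (cls, p) in tumors.items()}
```

With these hypothetical numbers, `rare_liver` is significant at its relaxed class A threshold (about 0.018), while `common_lung` fails its stricter class B threshold (about 0.0045), showing how the classification shifts power toward the endpoints judged more critical.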
26.
A note on generating correlated binary variables
Lunn AD, Davies SJ. Biometrika, 1998, 85(2): 487-490.
27.
Methods for performing multiple tests of paired proportions are described. A broadly applicable method using McNemar's exact test and the exact distributions of all test statistics is developed; the method controls the familywise error rate in the strong sense under minimal assumptions. A closed-form (not simulation-based) algorithm for carrying out the method is provided. A bootstrap alternative is developed to account for correlation structures. Operating characteristics of these and other methods are evaluated via a simulation study. Applications to multiple comparisons of predictive models for disease classification and to postmarket surveillance of adverse events are given.
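The building block above, McNemar's exact test, can be sketched in a few lines: under the null the b "01" discordant pairs among the n = b + c discordant pairs follow Binomial(n, 1/2), and the two-sided p-value doubles the smaller tail. This is a standard textbook formulation, not the paper's multiplicity procedure, which layers a strong-control familywise adjustment on top of such tests.

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from the discordant-pair counts.

    b: pairs discordant in one direction (e.g. model 1 wrong, model 2 right);
    c: pairs discordant in the other direction. Under H0, b ~ Binomial(b+c, 1/2).
    """
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs carry no evidence either way
    tail = sum(comb(n, k) for k in range(0, min(b, c) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)
```

For example, 2 versus 8 discordant pairs gives an exact two-sided p-value of 0.109375, so with so few discordant pairs even a 4:1 imbalance is not significant at the 5% level.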
28.
The currently used criterion for sample size calculation in a reference interval study is not well stated and leads to imprecise control of the ratio in question. We propose a generalization of the criterion used to determine a sufficient sample size in reference interval studies. The generalization allows better estimation of the required sample size when the reference interval estimation uses a power transformation or is nonparametric. Bootstrap methods are presented to estimate the sample sizes required by the generalized criterion. A simulation of several distributions, both symmetric and positively skewed, is presented to compare the sample size estimators. The new method is illustrated on a data set of plasma glucose values from a 50-g oral glucose tolerance test. The sample sizes calculated from the generalized criterion lead to more reliable control of the desired ratio.
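The kind of ratio being controlled can be sketched with a bootstrap: for a candidate sample size n, estimate the width of the confidence interval of the upper reference limit relative to the width of the reference interval itself, and increase n until the ratio is acceptable. The pilot data, the 90% CI level, and the specific ratio below are hypothetical choices for illustration, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(2)

def ratio_for_n(sample, n, B=500):
    """Bootstrap precision ratio for a nonparametric 95% reference interval
    estimated from n observations: (width of the 90% CI of the upper
    reference limit) / (width of the reference interval itself)."""
    uppers = [np.percentile(rng.choice(sample, n, replace=True), 97.5)
              for _ in range(B)]
    ci_width = np.percentile(uppers, 95) - np.percentile(uppers, 5)
    ri_width = np.percentile(sample, 97.5) - np.percentile(sample, 2.5)
    return ci_width / ri_width

# Hypothetical positively skewed pilot data standing in for glucose values.
pilot = rng.lognormal(4.6, 0.2, 400)
r_n100 = ratio_for_n(pilot, 100)
r_n400 = ratio_for_n(pilot, 400)
```

Quadrupling n roughly halves the ratio here, which is the behavior a sample-size criterion of this type exploits when searching for the smallest adequate n.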
29.
In this paper, we focus on testing the homogeneity of the risk difference for sparse data, in which we have few patients in each stratum but a moderate or large number of strata. When the number of patients per treatment within strata is small (2 to 5 patients), none of the test procedures previously proposed for testing the homogeneity of the risk difference for sparse data performs well. On the basis of bootstrap methods, we develop a simple test procedure that can improve the power of the previous test procedures. Using Monte Carlo simulations, we demonstrate that the test procedure developed here performs reasonably well with respect to Type I error even when the number of patients per stratum for each treatment is as small as two. We evaluate and study the power of the proposed test procedure in a variety of situations. We also include a comparison of the performance between the test statistics proposed elsewhere and the test procedure developed here. Finally, we briefly discuss the limitations of using the proposed test procedure. We use data comparing two chemotherapy treatments in patients with multiple myeloma to illustrate the use of the proposed test procedure.
30.
Yin G, Cai J. Biometrics, 2005, 61(1): 151-161.
As an alternative to the mean regression model, the quantile regression model has been studied extensively with independent failure time data. However, due to natural or artificial clustering, it is common to encounter multivariate failure time data in biomedical research where the intracluster correlation needs to be accounted for appropriately. For right-censored correlated survival data, we investigate the quantile regression model and adapt an estimating equation approach for parameter estimation under the working independence assumption, as well as a weighted version for enhancing the efficiency. We show that the parameter estimates are consistent and asymptotically follow normal distributions. The variance estimation using asymptotic approximation involves nonparametric functional density estimation. We employ the bootstrap and perturbation resampling methods for the estimation of the variance-covariance matrix. We examine the proposed method for finite sample sizes through simulation studies, and illustrate it with data from a clinical trial on otitis media.
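The resampling idea that avoids density estimation can be sketched with a cluster bootstrap: whole clusters are resampled so the intracluster correlation is preserved, and the empirical spread of the statistic gives the variance estimate. The data are simulated and hypothetical, and the statistic here is the marginal median (the simplest quantile) rather than the paper's censored quantile regression estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical clustered outcomes: subjects within a cluster share a random
# effect, so observations within a cluster are correlated.
clusters = [rng.normal(loc=u, scale=1.0, size=5)
            for u in rng.normal(0, 1, 60)]

def stat(cs):
    """Statistic of interest: the marginal median across all observations."""
    return np.median(np.concatenate(cs))

# Cluster bootstrap: resample whole clusters with replacement, keeping the
# intracluster correlation intact, then take the empirical SD of the statistic.
reps = []
for _ in range(1000):
    idx = rng.integers(0, len(clusters), len(clusters))
    reps.append(stat([clusters[i] for i in idx]))
se = float(np.std(reps, ddof=1))
```

Resampling individual observations instead would break the within-cluster dependence and typically understate the variance, which is why the resampling unit must be the cluster.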