Paid full text: 248; free: 15; free (domestic): 1; 264 results in total.
By publication year: 2022 (1), 2021 (1), 2020 (3), 2019 (2), 2018 (2), 2017 (3), 2016 (9), 2015 (3), 2014 (3), 2013 (8), 2012 (4), 2011 (4), 2010 (3), 2009 (7), 2008 (14), 2007 (9), 2006 (12), 2005 (8), 2004 (11), 2003 (18), 2002 (19), 2001 (16), 2000 (10), 1999 (6), 1998 (5), 1997 (9), 1996 (3), 1995 (6), 1994 (6), 1993 (7), 1992 (7), 1991 (6), 1990 (6), 1989 (8), 1988 (8), 1987 (8), 1986 (4), 1985 (2), 1984 (1), 1981 (2).
131.
This article applies a simple method for settings where one has clustered data, but statistical methods are only available for independent data. We assume the statistical method provides us with a normally distributed estimate, θ, and an estimate of its variance, σ². We randomly select a data point from each cluster and apply our statistical method to this independent data. We repeat this multiple times and use the average of the associated θ's as our estimate. An estimate of the variance is given by the average of the σ²'s minus the sample variance of the θ's. We call this procedure multiple outputation, as all "excess" data within each cluster is thrown out multiple times. Hoffman, Sen, and Weinberg (2001, Biometrika 88, 1121-1134) introduced this approach for generalized linear models when the cluster size is related to outcome. In this article, we demonstrate the broad applicability of the approach. Applications to angular data, p-values, vector parameters, Bayesian inference, genetics data, and random cluster sizes are discussed. In addition, asymptotic normality of estimates based on all possible outputations, as well as on a finite number of outputations, is proven under weak conditions. Multiple outputation provides a simple and broadly applicable method for analyzing clustered data. It is especially suited to settings where methods for clustered data are impractical, but it can also be applied more generally as a quick and simple tool.
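The outputation loop itself is short. Below is a minimal sketch of the procedure described in this abstract, assuming a toy independent-data estimator (the sample mean with variance s²/n) and simulated clusters; the data and function names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def independent_method(x):
    """Return (theta_hat, var_hat) for an i.i.d. sample; here the sample mean."""
    x = np.asarray(x, dtype=float)
    return x.mean(), x.var(ddof=1) / len(x)

def multiple_outputation(clusters, n_outputations=1000, rng=rng):
    thetas, sigma2s = [], []
    for _ in range(n_outputations):
        # keep one randomly chosen observation per cluster ("output" the rest)
        sample = np.array([rng.choice(c) for c in clusters])
        theta, sigma2 = independent_method(sample)
        thetas.append(theta)
        sigma2s.append(sigma2)
    thetas, sigma2s = np.array(thetas), np.array(sigma2s)
    theta_hat = thetas.mean()
    # variance estimate: mean of the sigma^2's minus the sample variance of the theta's
    var_hat = sigma2s.mean() - thetas.var(ddof=1)
    return theta_hat, var_hat

# toy clustered data: 30 clusters of varying size with a common mean of 2
clusters = [rng.normal(2.0, 1.0, size=rng.integers(2, 8)) for _ in range(30)]
print(multiple_outputation(clusters))
```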
132.
Chao A, Chu W, Hsu CH. Biometrics 2000;56(2):427-433.
We consider a capture-recapture model in which capture probabilities vary with time and with behavioral response. Two inference procedures are developed under the assumption that recapture probabilities bear a constant relationship to initial capture probabilities. These two procedures are the maximum likelihood method (both unconditional and conditional types are discussed) and an approach based on optimal estimating functions. The population size estimators derived from the two procedures are shown to be asymptotically equivalent when the population size is large enough. The performance and relative merits of various population size estimators in finite samples are discussed. The bootstrap method is suggested for constructing a variance estimator and confidence interval. The deer mouse example analyzed by Otis et al. (1978, Wildlife Monographs 62, 93) is used for illustration.
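As a rough illustration of the bootstrap variance and confidence-interval step mentioned in this abstract, the sketch below resamples observed capture histories and refits a simple stand-in estimator (Chao's bias-corrected lower bound based on singleton and doubleton frequencies). It is not the time-and-behavioral-response estimator developed in the paper, and resampling captured animals is only one simple bootstrap scheme.

```python
import numpy as np

rng = np.random.default_rng(9)

def chao_estimator(histories):
    """histories: 0/1 array, rows = captured animals, columns = capture occasions."""
    counts = histories.sum(axis=1)
    S = len(counts)                      # number of distinct animals ever caught
    f1 = np.sum(counts == 1)             # caught exactly once
    f2 = np.sum(counts == 2)             # caught exactly twice
    return S + f1 * (f1 - 1) / (2 * (f2 + 1))

def bootstrap_ci(histories, n_boot=2000, alpha=0.05, rng=rng):
    n = histories.shape[0]
    est = np.array([chao_estimator(histories[rng.integers(0, n, size=n)])
                    for _ in range(n_boot)])
    return chao_estimator(histories), est.std(), np.quantile(est, [alpha / 2, 1 - alpha / 2])

# toy capture histories over 5 occasions, keeping only animals caught at least once
histories = (rng.uniform(size=(60, 5)) < 0.25).astype(int)
histories = histories[histories.sum(axis=1) > 0]
print(bootstrap_ci(histories))
```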
133.
Parametric methods such as analysis of (co)variance are commonly used for the analysis of data from clinical trials. They have the advantage of providing an easily interpretable measure of treatment efficacy, such as a confidence interval for the treatment difference. If there are doubts about the underlying distribution of the response variable, however, a nonparametric approach may be called for. Nonparametric approaches in such settings concentrate on hypothesis testing and are not typically used to provide easily interpretable measures of treatment efficacy. For comparing two treatments, we propose a nonparametric measure based on the probability of observing a better response on one treatment than on the other. The bootstrap method is used to construct a confidence interval for the treatment difference.
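A minimal sketch of such a measure, assuming complete continuous responses on two arms: the statistic estimates the probability that a randomly chosen response on treatment A is better than one on treatment B (ties split evenly), with a percentile bootstrap confidence interval. The data and the "larger is better" convention are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def prob_better(a, b):
    """Estimate P(A > B) + 0.5 * P(A == B) by comparing all response pairs."""
    diff = np.asarray(a)[:, None] - np.asarray(b)[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def bootstrap_ci(a, b, n_boot=2000, alpha=0.05, rng=rng):
    stats = [prob_better(rng.choice(a, size=len(a), replace=True),
                         rng.choice(b, size=len(b), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return prob_better(a, b), (lo, hi)

# illustrative two-arm data; larger responses are taken to be better
treat_a = rng.normal(1.0, 1.0, size=30)
treat_b = rng.normal(0.5, 1.0, size=35)
print(bootstrap_ci(treat_a, treat_b))
```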
134.
Bioreactor systems involve complex biochemical reactions, which make them highly non-linear in nature. Developing model-based controllers for such processes requires mathematical representations that are simple, yet capable of capturing the non-linear process characteristics. The continuous bioreactor belongs to the class of non-linear systems that exhibit input multiplicity in the optimal operating region, i.e., the operating region where identical outputs are obtained for multiple inputs. Linear modeling techniques are not useful for this class of systems. Even for non-linear modeling techniques, the real bottleneck is capturing the bell-shaped parabolic structure of the steady-state characteristics exhibited by these systems. The stochastic modeling approach, which is based on process input/output time-series data, is very useful for this purpose. The aim of this paper is to address the stochastic modeling issues related to bioreactor processes. In this work, three efficient modeling techniques are studied: the block-oriented NARMAX structure (Pearson and Pottmann in J Process Control 10:301-315, 2000), bootstrap structure detection for NARMAX models (Kukreja et al. in Int J Control 77(2):132-143, 2004), and the wavelet-NARMAX model (Billings and Wei in Int J Syst Sci 36(3):137-152, 2005).
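As a generic illustration of the stochastic, input/output time-series modelling idea, the sketch below fits a simple polynomial NARX-type model by ordinary least squares to a simulated single-input process whose quadratic input term produces a bell-shaped steady-state response. It is not any of the three cited structures; the simulated process and the lag choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# simulate a mildly non-linear single-input, single-output process
n = 500
u = rng.uniform(0.0, 1.0, size=n)            # input (e.g., dilution rate)
y = np.zeros(n)
for t in range(2, n):
    y[t] = (0.6 * y[t - 1] - 0.1 * y[t - 2]
            + 1.2 * u[t - 1] - 0.8 * u[t - 1] ** 2   # bell-shaped input effect
            + 0.02 * rng.standard_normal())

# regressors: lagged outputs, lagged input, quadratic input term, and an intercept
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[1:-1] ** 2, np.ones(n - 2)])
coef, *_ = np.linalg.lstsq(X, y[2:], rcond=None)
print("estimated coefficients:", np.round(coef, 3))
```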
135.
In this paper, the detection of rare-variant associations with continuous phenotypes of interest is investigated via the likelihood-ratio-based variance component test within the framework of linear mixed models. The hypothesis testing is challenging and nonstandard, since under the null the variance component lies on the boundary of its parameter space. In this situation the usual asymptotic chi-square distribution of the likelihood ratio statistic does not necessarily hold. To circumvent the derivation of the null distribution, we resort to the bootstrap method because of its general applicability and ease of implementation. Both parametric and nonparametric bootstrap likelihood ratio tests are studied. Numerical studies are conducted to evaluate the performance of the proposed bootstrap likelihood ratio tests and to compare them with some existing methods for the identification of rare variants. To reduce the computational time of the bootstrap likelihood ratio test, we propose an effective mixture approximation to the bootstrap null distribution. The GAW17 data are used to illustrate the proposed test.
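A hedged sketch of a parametric bootstrap likelihood-ratio test for a single random-intercept variance component being zero, in the spirit of the boundary problem described here. It relies on statsmodels' MixedLM; the simulated data, group structure, and number of bootstrap replicates are illustrative choices, and the paper's rare-variant setting is not reproduced.

```python
import warnings
import numpy as np
import statsmodels.api as sm

warnings.simplefilter("ignore")      # MixedLM warns when the variance sits at zero
rng = np.random.default_rng(3)

def simulate(n_groups=30, n_per=5, tau=0.7, rng=rng):
    g = np.repeat(np.arange(n_groups), n_per)
    x = rng.normal(size=g.size)
    b = rng.normal(0.0, tau, size=n_groups)[g]          # random intercepts
    y = 1.0 + 0.5 * x + b + rng.normal(size=g.size)
    return y, sm.add_constant(x), g

def lrt_stat(y, X, g):
    ll_null = sm.OLS(y, X).fit().llf                    # fixed effects only
    ll_alt = sm.MixedLM(y, X, groups=g).fit(reml=False).llf
    return max(0.0, 2.0 * (ll_alt - ll_null))

y, X, g = simulate()
observed = lrt_stat(y, X, g)

# parametric bootstrap under the null: normal errors around the OLS fit,
# with no random-intercept component
null_fit = sm.OLS(y, X).fit()
boot = [lrt_stat(null_fit.fittedvalues
                 + rng.normal(0.0, np.sqrt(null_fit.scale), size=len(y)), X, g)
        for _ in range(200)]                            # small B for speed
p_value = np.mean(np.array(boot) >= observed)
print(round(observed, 2), p_value)
```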
136.
Efron-type measures of prediction error for survival analysis
Gerds TA, Schumacher M. Biometrics 2007;63(4):1283-1287.
Estimates of the prediction error play an important role in the development of statistical methods and models, and in their applications. We adapt the resampling tools of Efron and Tibshirani (1997, Journal of the American Statistical Association 92, 548-560) to survival analysis with right-censored event times. We find that flexible rules, like artificial neural nets, classification and regression trees, or regression splines, can be assessed and compared with less flexible rules in the same data in which they were developed. The methods are illustrated with data from a breast cancer trial.
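For orientation, here is a simplified sketch of the classical Efron .632 bootstrap error estimate, the building block that the article adapts. Censoring is ignored and a logistic classification rule stands in for the survival prediction rules, so this is not the article's estimator; the simulated data are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 200
X = rng.normal(size=(n, 3))
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)

def boot632_error(X, y, n_boot=100, rng=rng):
    n = len(y)
    apparent = np.mean(LogisticRegression().fit(X, y).predict(X) != y)
    oob_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)            # bootstrap training sample
        oob = np.setdiff1d(np.arange(n), idx)       # subjects left out of the sample
        if oob.size == 0:
            continue
        fit = LogisticRegression().fit(X[idx], y[idx])
        oob_errors.append(np.mean(fit.predict(X[oob]) != y[oob]))
    eps_boot = np.mean(oob_errors)                  # leave-one-out bootstrap error
    return 0.368 * apparent + 0.632 * eps_boot      # Efron's .632 combination

print(boot632_error(X, y))
```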
137.
Biostatisticians, actuaries, and demographers are interested in accurately determining the age-specific mortality pattern of a population. Several different approaches have been proposed in the literature for representing the mortality of a population. Among them, laws of mortality that assume a specific functional parametrization for the mortality rates have become popular in recent years, mainly because computers can now handle the large number of computations needed. The uncertainty of such a functional representation has been overlooked. The researcher is interested both in the uncertainty of the parameters and in the uncertainty of the curve itself. The former provides information for specific parts of the curve that directly correspond to certain parameters, while the latter allows for comparisons over time or space. Handling this uncertainty can be very helpful for prediction purposes. A bootstrap approach is described as an alternative to standard inferential methods based on asymptotic standard error theory. Such an approach can provide standard errors for both the parameters and the curve, and it can be used for direct comparison of different curves over time or space. An application to empirical data from Sweden is also provided.
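A small sketch of the bootstrap idea for a parametric mortality law, using the Gompertz form mu(x) = a*exp(b*x) as an example. The synthetic death counts, exposures, and the parametric (Poisson) resampling scheme are illustrative assumptions, not the Swedish data or necessarily the resampling scheme used in the article.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
ages = np.arange(40, 90)
exposure = np.full(ages.size, 10_000.0)
deaths = rng.poisson(0.00005 * np.exp(0.095 * ages) * exposure)

def gompertz(x, a, b):
    return a * np.exp(b * x)

def fit(rates):
    params, _ = curve_fit(gompertz, ages, rates, p0=(1e-4, 0.08), maxfev=10_000)
    return params

a_hat, b_hat = fit(deaths / exposure)

# parametric bootstrap: resample Poisson death counts from the fitted curve,
# refit, and keep both the parameters and the fitted curve itself
boot_params, boot_curves = [], []
for _ in range(500):
    d_star = rng.poisson(gompertz(ages, a_hat, b_hat) * exposure)
    p_star = fit(d_star / exposure)
    boot_params.append(p_star)
    boot_curves.append(gompertz(ages, *p_star))
boot_params = np.array(boot_params)

print("bootstrap SE of (a, b):", boot_params.std(axis=0))
band = np.quantile(np.array(boot_curves), [0.025, 0.975], axis=0)  # pointwise band
print("95% band for the rate at age 80:", band[:, ages == 80].ravel())
```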
138.
Multiple endpoints are tested to assess an overall treatment effect and also to identify which endpoints or subsets of endpoints contributed to treatment differences. Conventional p-value adjustment methods, such as single-step, step-up, or step-down procedures, sequentially identify each significant individual endpoint. Closed test procedures can also detect individual endpoints that have effects via a step-by-step closed strategy. This paper proposes a global statistic for testing an a priori number, say r, of the k endpoints, as opposed to the conventional approach of testing one (r = 1) endpoint. The proposed test statistic is an extension of the single-step p-value-based statistic based on the distribution of the smallest p-value. The test maintains strong control of the familywise error (FWE) rate under the null hypothesis of no difference in any (sub)set of r endpoints among all possible combinations of the k endpoints. After rejecting the null hypothesis, the individual endpoints in the rejected sets can be tested further, using a univariate test statistic in a second step, if desired. However, the second-step test only weakly controls the FWE. The proposed method is illustrated by application to a psychosis data set.
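As background for the proposal, the sketch below shows the classical single-step minP building block: a permutation-based global test using the smallest p-value across k endpoints. The r-endpoint extension proposed in the paper is not reproduced; the two-sample layout, endpoint count, and effect size are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n, k = 40, 5
group = np.repeat([0, 1], n // 2)
Y = rng.normal(size=(n, k))
Y[group == 1, 0] += 0.8                    # a real effect on endpoint 1 only

def min_p(Y, group):
    """Smallest two-sample t-test p-value over the k endpoints."""
    return min(stats.ttest_ind(Y[group == 0, j], Y[group == 1, j]).pvalue
               for j in range(Y.shape[1]))

observed = min_p(Y, group)
perm = np.array([min_p(Y, rng.permutation(group)) for _ in range(2000)])
global_p = np.mean(perm <= observed)       # permutation reference distribution
print(observed, global_p)
```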
139.
In medical statistics, many alternative strategies are available for building a prediction model based on training data. Prediction models are routinely compared by means of their prediction performance in independent validation data. If only one data set is available for training and validation, then rival strategies can still be compared based on repeated bootstraps of the same data. Often, however, the overall performance of rival strategies is similar, and it is thus difficult to decide on one model. Here, we investigate the variability of the prediction models that results when the same modelling strategy is applied to different training sets. For each modelling strategy we estimate a confidence score based on the same repeated bootstraps. A new decomposition of the expected Brier score is obtained, as are estimates of population-average confidence scores. The latter can be used to distinguish rival prediction models with similar prediction performance. Furthermore, at the subject level a confidence score may provide useful supplementary information for new patients who want to base a medical decision on predicted risk. The ideas are illustrated and discussed using data from cancer studies, including settings with a high-dimensional predictor space.
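A minimal sketch of the underlying idea: refit the same modelling strategy on repeated bootstrap training sets and measure, per subject, how much the predicted risk varies. Using the standard deviation of those predictions as the "confidence score" is an illustrative simplification of the Brier-score decomposition derived in the article; the data and model are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 300
X = rng.normal(size=(n, 5))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

preds = []
for _ in range(200):
    idx = rng.integers(0, n, size=n)              # bootstrap training set
    fit = LogisticRegression().fit(X[idx], y[idx])
    preds.append(fit.predict_proba(X)[:, 1])      # predicted risk for every subject
preds = np.array(preds)

subject_confidence = preds.std(axis=0)            # low value = stable prediction
print("population-average confidence score:", subject_confidence.mean())
```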
140.
The bootstrap method has become a widely used tool in diverse areas where results based on asymptotic theory are scarce. It can be applied, for example, to assess the variance of a statistic or a quantile of interest, or for significance testing by resampling under the null hypothesis. Recently, some approaches have been proposed in the biometrical field in which hypothesis testing or model selection is performed on a bootstrap sample as if it were the original sample. P-values computed from bootstrap samples have been used, for example, in the statistics and bioinformatics literature for ranking genes with respect to their differential expression, for estimating the variability of p-values, and for model stability investigations. Procedures that make use of bootstrapped information criteria are often applied in model stability investigations and model averaging approaches, as well as when estimating the error of model selection procedures that involve tuning parameters. From the literature, however, there is evidence that p-values and model selection criteria evaluated on bootstrap data sets do not represent what would be obtained on the original data or on new data drawn from the overall population. We explain the reasons for this and, using a real data set and simulations, we assess the practical impact on procedures relevant to biometrical applications in cases where it has not yet been studied. Moreover, we investigate the behavior of subsampling (i.e., drawing from a data set without replacement) as a potential alternative to the bootstrap for these procedures.
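A small simulation in the spirit of this comparison: under a true null, p-values recomputed on bootstrap samples (drawn with replacement) depart from uniformity, while p-values on half-size subsamples (drawn without replacement) stay approximately uniform. The two-sample t-test and sample sizes are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = rng.normal(size=100)
y = rng.normal(size=100)                                 # no true group difference

boot_p, sub_p = [], []
for _ in range(1000):
    bx = rng.choice(x, size=len(x), replace=True)        # bootstrap samples
    by = rng.choice(y, size=len(y), replace=True)
    boot_p.append(stats.ttest_ind(bx, by).pvalue)
    sx = rng.choice(x, size=len(x) // 2, replace=False)  # half-size subsamples
    sy = rng.choice(y, size=len(y) // 2, replace=False)
    sub_p.append(stats.ttest_ind(sx, sy).pvalue)

# under the null the subsample p-values stay roughly uniform (about 5% below 0.05),
# while the bootstrap p-values pile up at small values
print("share of p < 0.05, bootstrap :", np.mean(np.array(boot_p) < 0.05))
print("share of p < 0.05, subsample:", np.mean(np.array(sub_p) < 0.05))
```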