21.
22.
M. J. A. Werger, Plant Ecology 1982, 49(3):187-190
Editing of community data matrices is complementary to analyzing data by multivariate techniques of classification and ordination in the overall task of data analysis. A computer program, DATAEDIT, is described that can perform numerous editing functions, including data transformation, deletion of certain species or samples, deletion of rare species, deletion of outliers, separation of disjunct sample groups, reordering of the species or samples of a data matrix, and the formation of composite samples or of sample subsets. DATAEDIT can use the information in a nonhierarchical or hierarchical classification, and includes its own internal routine for reciprocal averaging ordination. We appreciate valuable suggestions from the late Robert H. Whittaker, and from Philip Dixon, David Hicks, Laura Huenneke, Linda Olsvig-Whittaker, and Mark Wilson. Mark O. Hill kindly supplied a fast subroutine for reciprocal averaging.
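The reciprocal averaging ordination mentioned in this abstract can be illustrated with a simple alternating-averaging iteration. This is a hedged sketch of the general technique, not DATAEDIT's actual subroutine; the function name and defaults are hypothetical, and the input is assumed to be a non-negative abundance matrix with no empty rows or columns:

```python
import numpy as np

def reciprocal_averaging(matrix, n_iter=200, tol=1e-10):
    """First-axis reciprocal averaging (correspondence analysis) ordination.

    matrix: samples x species abundance matrix (non-negative, no empty
    rows/columns). Returns (sample_scores, species_scores) for axis 1.
    """
    X = np.asarray(matrix, dtype=float)
    row_tot = X.sum(axis=1)
    col_tot = X.sum(axis=0)
    # Arbitrary starting species scores.
    species = np.arange(X.shape[1], dtype=float)
    for _ in range(n_iter):
        # Sample scores = weighted averages of species scores ...
        samples = X @ species / row_tot
        # ... and species scores = weighted averages of sample scores.
        new_species = X.T @ samples / col_tot
        # Centre and rescale so the iteration does not collapse onto the
        # trivial constant axis.
        new_species -= np.average(new_species, weights=col_tot)
        new_species /= np.sqrt((col_tot * new_species**2).sum() / col_tot.sum())
        converged = np.abs(new_species - species).max() < tol
        species = new_species
        if converged:
            break
    samples = X @ species / row_tot
    return samples, species
```

With a clean one-gradient data set the returned sample scores recover the gradient order (up to sign).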
23.
Linear rank tests with right censored data (cited 6 times: 0 self-citations, 6 by others)
24.
Multivariate binary discrimination by the kernel method (cited 10 times: 0 self-citations, 10 by others)
25.
Wong L, Briefings in Bioinformatics 2002, 3(4):389-404
The process of building a new database relevant to some field of study in biomedicine involves transforming, integrating and cleansing multiple data sources, as well as adding new material and annotations. This paper reviews some of the requirements of a general solution to this data integration problem. Several representative technologies and approaches to data integration in biomedicine are surveyed, and some interesting features that separate the more general data integration technologies from the more specialised ones are highlighted.
26.
Istem Fer, Anthony K. Gardella, Alexey N. Shiklomanov, Eleanor E. Campbell, Elizabeth M. Cowdery, Martin G. De Kauwe, Ankur Desai, Matthew J. Duveneck, Joshua B. Fisher, Katherine D. Haynes, Forrest M. Hoffman, Miriam R. Johnston, Rob Kooper, David S. LeBauer, Joshua Mantooth, William J. Parton, Benjamin Poulter, Tristan Quaife, Ann Raiho, Kevin Schaefer, Shawn P. Serbin, James Simkins, Kevin R. Wilcox, Toni Viskari, Michael C. Dietze, Global Change Biology 2021, 27(1):13-26
In an era of rapid global change, our ability to understand and predict Earth's natural systems is lagging behind our ability to monitor and measure changes in the biosphere. Bottlenecks to informing models with observations have reduced our capacity to fully exploit the growing volume and variety of available data. Here, we take a critical look at the information infrastructure that connects ecosystem modeling and measurement efforts, and propose a roadmap to community cyberinfrastructure development that can reduce the divisions between empirical research and modeling and accelerate the pace of discovery. A new era of data-model integration requires investment in accessible, scalable, and transparent tools that integrate the expertise of the whole community, including both modelers and empiricists. This roadmap focuses on five key opportunities for community tools: the underlying foundations of community cyberinfrastructure; data ingest; calibration of models to data; model-data benchmarking; and data assimilation and ecological forecasting. This community-driven approach is a key to meeting the pressing needs of science and society in the 21st century.
27.
28.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to effectively borrow information from historical data while maintaining a reasonable type I error and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, this approach proactively controls the behavior of information borrowing and type I errors by incorporating a well-known concept of clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of prespecified criteria such that the resulting prior will strongly borrow information when historical and trial data are congruent, but refrain from information borrowing when historical and trial data are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent, that is, it asymptotically controls the type I error at the nominal value regardless of whether the historical data are congruent with the trial data. Our simulation study evaluating finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.
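The borrowing mechanism described in this abstract can be sketched with a toy normal-conjugate model. This is an illustrative reconstruction only: the paper's elastic function is calibrated to prespecified criteria, whereas here a simple logistic form, a standardized-difference congruence measure, and all function names are assumptions:

```python
import numpy as np

def congruence(hist_mean, hist_se, trial_mean, trial_se):
    """One possible congruence measure: the standardized distance
    between the historical and trial estimates."""
    return abs(hist_mean - trial_mean) / np.sqrt(hist_se**2 + trial_se**2)

def elastic_weight(t, steepness=2.0, midpoint=1.5):
    """Monotonically decreasing elastic function: maps congruence t to a
    borrowing weight in (0, 1) -- near 1 when the data sources agree,
    near 0 when they conflict."""
    return 1.0 / (1.0 + np.exp(steepness * (t - midpoint)))

def elastic_posterior(trial_mean, trial_se, hist_mean, hist_se):
    """Normal-conjugate posterior in which the historical precision is
    discounted by the elastic weight (a power-prior-style discount)."""
    w = elastic_weight(congruence(hist_mean, hist_se, trial_mean, trial_se))
    hist_prec = w / hist_se**2      # discounted historical information
    trial_prec = 1.0 / trial_se**2
    post_var = 1.0 / (hist_prec + trial_prec)
    post_mean = post_var * (hist_prec * hist_mean + trial_prec * trial_mean)
    return post_mean, np.sqrt(post_var)
```

When the historical and trial means are close, the posterior mean is pulled toward the historical estimate; when they conflict, the weight collapses toward zero and the posterior essentially reduces to the trial data alone.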
29.
30.
Jung YY, Oh MS, Shin DW, Kang SH, Oh HS, Biometrical Journal (Biometrische Zeitschrift) 2006, 48(3):435-450
A Bayesian model-based clustering approach is proposed for identifying differentially expressed genes in meta-analysis. A Bayesian hierarchical model is used as a scientific tool for combining information from different studies, and a mixture prior is used to separate differentially expressed genes from non-differentially expressed genes. Posterior estimation of the parameters and missing observations is done by using a simple Markov chain Monte Carlo method. From the estimated mixture model, useful measures of significance such as the Bayesian false discovery rate (FDR), the local FDR (Efron et al., 2001), and the integration-driven discovery rate (IDR; Choi et al., 2003) can easily be computed. The model-based approach is also compared with commonly used permutation methods, and it is shown that the model-based approach is superior to the permutation methods when there are excessively more under-expressed genes than over-expressed genes, or vice versa. The proposed method is applied to four publicly available prostate cancer gene expression data sets and to simulated data sets.
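The mixture-based significance measures named in this abstract can be illustrated with a toy two-component normal mixture. In the paper the mixture is estimated by MCMC; here the component parameters are assumed known, and all function names and parameter values are hypothetical:

```python
import numpy as np

def norm_pdf(x, mu, sd):
    """Normal density, written out so the sketch needs only NumPy."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def local_fdr(z, p0, mu0=0.0, sd0=1.0, mu1=2.5, sd1=1.0):
    """Local FDR: posterior probability that a gene is non-differentially
    expressed, under the mixture f(z) = p0*f0(z) + (1 - p0)*f1(z), where
    f0 is the null component and f1 the differentially expressed one."""
    f0 = norm_pdf(z, mu0, sd0)
    f1 = norm_pdf(z, mu1, sd1)
    return p0 * f0 / (p0 * f0 + (1 - p0) * f1)

def bayesian_fdr(z_scores, threshold, p0, **mixture_params):
    """Bayesian FDR of the rejection region {z >= threshold}: the average
    local FDR over the rejected genes."""
    z = np.asarray(z_scores, dtype=float)
    rejected = z >= threshold
    return local_fdr(z[rejected], p0, **mixture_params).mean()
```

Genes with z-scores near the null component get a local FDR near 1, extreme z-scores get a local FDR near 0, and the Bayesian FDR of a tail region sits below the local FDR at its boundary.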