Subscription full text: 1,446 articles
Free: 89 articles
Free (domestic): 26 articles
Total: 1,561 articles
Articles by year:
2023: 27    2022: 38    2021: 34    2020: 40    2019: 77    2018: 55
2017: 39    2016: 37    2015: 43    2014: 96    2013: 63    2012: 64
2011: 74    2010: 56    2009: 64    2008: 80    2007: 78    2006: 51
2005: 59    2004: 41    2003: 31    2002: 40    2001: 23    2000: 40
1999: 35    1998: 15    1997: 18    1996: 17    1995: 17    1994: 11
1993: 14    1992: 21    1991: 15    1990: 9     1989: 8     1988: 7
1987: 6     1986: 6     1985: 10    1984: 11    1983: 16    1982: 12
1981: 8     1980: 12    1979: 14    1978: 6     1977: 4     1975: 6
1974: 6     1971: 2
Sort order: default. A total of 1,561 results found; search took 15 ms.
121.
Comparative studies have increased greatly in number in recent years due to advances in statistical and phylogenetic methodologies. For these studies, a trade-off often exists between the number of species that can be included in any given study and the number of individuals examined per species. Here, we describe a simple simulation study examining the effect of intraspecific sample size on statistical error in comparative studies. We find that ignoring measurement error has no effect on type I error of nonphylogenetic analyses, but can lead to increased type I error under some circumstances when using independent contrasts. We suggest using ANOVA to evaluate the relative amounts of within- and between-species variation when considering a phylogenetic comparative study. If within-species variance is particularly large and intraspecific sample sizes small, then either larger sample sizes or comparative methods that account for measurement error are necessary.
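As a rough illustration of the kind of pre-analysis check this abstract recommends, the following minimal Python sketch uses a one-way ANOVA to compare within- and between-species variation from per-individual trait measurements. All species names, sample sizes, and trait values are invented for the example; this is not the authors' simulation code.

```python
# Sketch: one-way ANOVA to gauge within- vs. between-species variation
# before a phylogenetic comparative analysis. Example data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated trait measurements: a few individuals per species.
species_samples = {
    "sp_A": rng.normal(10.0, 1.5, size=5),
    "sp_B": rng.normal(12.0, 1.5, size=5),
    "sp_C": rng.normal(15.0, 1.5, size=5),
}

groups = list(species_samples.values())
f_stat, p_value = stats.f_oneway(*groups)

# Simple method-of-moments variance components (balanced design).
n = len(groups[0])                                      # individuals per species
ms_within = np.mean([g.var(ddof=1) for g in groups])    # within-species mean square
ms_between = n * np.var([g.mean() for g in groups], ddof=1)
var_between = max((ms_between - ms_within) / n, 0.0)

print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
print(f"within-species variance  ~ {ms_within:.2f}")
print(f"between-species variance ~ {var_between:.2f}")
```

If the within-species component dominates and per-species sample sizes are small, the abstract's advice is to collect larger samples or switch to a comparative method that explicitly models measurement error.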
122.
In dendroclimatology, testing the stability of transfer functions using cross-calibration verification (CCV) statistics is a common procedure. However, the frequently used statistics reduction of error (RE) and coefficient of efficiency (CE) merely assess the skill of reconstruction for the validation period, which does not necessarily reflect possibly unstable regression parameters. Furthermore, the frequently used rigorous threshold of zero, which sharply distinguishes between valid and invalid transfer functions, is prone to an underestimation of instability. To overcome these drawbacks, we here introduce a new approach – the Bootstrapped Transfer Function Stability test (BTFS). BTFS relies on bootstrapped estimates of the change of model parameters (intercept, slope, and r2) between the calibration and verification periods, as well as the bootstrapped significance of the corresponding models. A comparison of BTFS, CCV and a bootstrapped CCV approach (BCCV) applied to 42,000 pseudo-proxy datasets with known properties revealed that BTFS responded more sensitively to instability than CCV and BCCV. BTFS performance was significantly affected by sample size (length of the calibration period) and noise (explained variance between predictor and predictand). Nevertheless, BTFS was superior to CCV with respect to the detection of unstable transfer functions.
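The core idea described here – bootstrapping the change in regression parameters (slope, intercept, r2) between a calibration and a verification period – can be sketched as below. This is an illustrative reimplementation based only on the abstract's description, not the published BTFS code; the proxy and climate series are simulated and the period split is arbitrary.

```python
# Sketch of a bootstrapped stability check for a proxy-climate transfer function:
# resample the calibration and verification periods and compare slope, intercept
# and r^2 between them. Data are simulated; this is not the published BTFS code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

years = np.arange(1900, 2000)
proxy = rng.normal(size=years.size)
climate = 0.6 * proxy + rng.normal(scale=0.8, size=years.size)  # synthetic target

def fit(x, y, idx):
    """Return (slope, intercept, r^2) of y on x for the selected indices."""
    res = stats.linregress(x[idx], y[idx])
    return res.slope, res.intercept, res.rvalue ** 2

n_boot = 1000
deltas = np.empty((n_boot, 3))
for b in range(n_boot):
    ic = rng.integers(0, 50, size=50)     # resample calibration years (1900-1949)
    iv = rng.integers(50, 100, size=50)   # resample verification years (1950-1999)
    deltas[b] = np.subtract(fit(proxy, climate, ic), fit(proxy, climate, iv))

# Bootstrap percentile intervals for the calibration-verification differences.
lo, hi = np.percentile(deltas, [2.5, 97.5], axis=0)
for name, l, h in zip(["slope", "intercept", "r2"], lo, hi):
    stable = l <= 0.0 <= h
    print(f"delta {name}: 95% CI [{l:+.3f}, {h:+.3f}]  stable={stable}")
```

A transfer function would be flagged as unstable when the bootstrap interval for a parameter difference excludes zero; the published test additionally weighs in the bootstrapped significance of the underlying models.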
123.
124.
This experiment determined the chemical composition, rumen degradability (aNDF in stalks and starch in kernels) and in vitro gas production of kernels from three corn hybrids treated (TT) or not treated (control, CTR) with insecticides against the European corn borer (ECB, Ostrinia nubilalis). Two whole-plant silage hybrids belonging to the FAO rating 600 and 700 maturity classes (S600 and S700, respectively) and one selected for grain production (G600, FAO rating 600, Dekalb-Monsanto Agricoltura S.p.A., Lodi, Italy) were sown in two main plots (TT and CTR) of an experimental field. Two subsequent treatments of pyrethroids (25 and 1.2 g/ha of cyfluthrin and deltamethrin, respectively) were applied to the TT plots. The insecticide treatment reduced the number of damaged plants (4.5 broken plants/plot versus 0.3 broken plants/plot, P<0.01) and increased the total grain yield by 11% (13.8 t/ha versus 12.4 t/ha), while the hybrids did not differ. ECB larvae that bored tunnels into the stalks modified the chemical composition of stalks and kernels. In stalks, total sugar content (i.e. glucose, fructose, sucrose) was about twice as high in TT as in CTR plants (123 g/kg versus 60 g/kg DM, P<0.01), while aNDF content was higher in CTR stalks (765 versus 702 g/kg DM, P<0.01). Stalk DM degradability after 48 h of incubation was higher in TT than in CTR, both in vitro (0.360 versus 0.298, P<0.01) and in situ (0.370 versus 0.298, P<0.05), while there were no differences in aNDF degradability. Kernels from TT plots contained less DM (615 g/kg versus 651 g/kg, P<0.01) and more CP (84 g/kg versus 78 g/kg DM, P<0.05) than those from CTR plots, while in situ rumen starch disappearance and in vitro gas production were similar. The corn hybrid selected for grain yield (G600) differed from S600 and S700 in its higher (P<0.01) content of aNDF, ADF and lignin(sa) in the stalks, and higher starch (696 g/kg versus 674 and 671 g/kg DM, P<0.01) and CP (87 g/kg versus 77 and 76 g/kg DM, P<0.05) contents in the grain. The G600 hybrid produced stalks with a lower (P<0.01) aNDF rumen degradability than S600 and S700.
125.
Jager, Henriette I.; King, Anthony W. Ecosystems, 2004, 7(8): 841-847
Applied ecological models that are used to understand and manage natural systems often rely on spatial data as input. Spatial uncertainty in these data can propagate into model predictions. Uncertainty analysis, sensitivity analysis, error analysis, error budget analysis, spatial decision analysis, and hypothesis testing using neutral models are all techniques designed to explore the relationship between variation in model inputs and variation in model predictions. Although similar methods can be used to carry them out, these approaches address different questions. They differ in (a) whether the focus is forward or backward (forward to evaluate the magnitude of variation propagated into model predictions, or backward to rank input parameters by their influence); (b) whether the question involves model robustness to large variations in spatial pattern or to small deviations from a reference map; and (c) whether processes that generate input uncertainty (for example, cartographic error) are of interest. In this commentary, we propose a taxonomy of approaches, all of which clarify the relationship between spatial uncertainty and the predictions of ecological models. We describe existing techniques and indicate a few areas where research is needed.
126.
Bennett, J.; Wakefield, J. Biometrics, 2001, 57(3): 803-812
Pharmacokinetic (PK) models describe the relationship between the administered dose and the concentration of drug (and/or metabolite) in the blood as a function of time. Pharmacodynamic (PD) models describe the relationship between the concentration in the blood (or the dose) and the biologic response. Population PK/PD studies aim to determine the sources of variability in the observed concentrations/responses across groups of individuals. In this article, we consider the joint modeling of PK/PD data. The natural approach is to specify a joint model in which the concentration and response data are simultaneously modeled. Unfortunately, this approach may not be optimal if, due to sparsity of concentration data, an overly simple PK model is specified. As an alternative, we propose an errors-in-variables approach in which the observed-concentration data are assumed to be measured with error without reference to a specific PK model. We give an example of an analysis of PK/PD data obtained following administration of an anticoagulant drug. The study was originally carried out in order to make dosage recommendations. The prior for the distribution of the true concentrations, which may incorporate an individual's covariate information, is derived as a predictive distribution from an earlier study. The errors-in-variables approach is compared with the joint modeling approach and more naive methods in which the observed concentrations, or the separately modeled concentrations, are substituted into the response model. Throughout, a Bayesian approach is taken with implementation via Markov chain Monte Carlo methods.
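To illustrate why naively substituting error-prone concentrations into the response model can mislead, here is a toy linear simulation of the attenuation effect together with a simple moment-based (regression-calibration style) correction. It is only a sketch of the general measurement-error problem, not the Bayesian MCMC model described in the abstract, and all parameter values are invented.

```python
# Toy illustration of measurement error in a predictor: naive regression of the
# response on the observed (noisy) concentration attenuates the slope, while a
# simple method-of-moments correction recovers it. Not the paper's PK/PD model.
import numpy as np

rng = np.random.default_rng(2)
n = 2000

true_conc = rng.lognormal(mean=1.0, sigma=0.4, size=n)      # latent concentration
response = 2.0 + 1.5 * true_conc + rng.normal(scale=0.5, size=n)

meas_sd = 0.6                                                # assumed known assay error
obs_conc = true_conc + rng.normal(scale=meas_sd, size=n)     # observed with error

# Naive slope: regress the response on the observed concentration.
naive_slope = np.cov(obs_conc, response)[0, 1] / np.var(obs_conc, ddof=1)

# Regression-calibration style correction: rescale by the reliability ratio
# var(true) / var(observed) = (var(obs) - var(error)) / var(obs).
reliability = (np.var(obs_conc, ddof=1) - meas_sd**2) / np.var(obs_conc, ddof=1)
corrected_slope = naive_slope / reliability

print("true slope      = 1.50")
print(f"naive slope     = {naive_slope:.2f}")   # biased toward zero
print(f"corrected slope = {corrected_slope:.2f}")
```

The errors-in-variables approach in the paper goes further by placing a prior on the true concentrations (derived from an earlier study) and propagating that uncertainty through the response model.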
127.
Modification of sample size in group sequential clinical trials   (Cited: 1; self-citations: 0; citations by others: 1)
Cui, L.; Hung, H. M.; Wang, S. J. Biometrics, 1999, 55(3): 853-857
In group sequential clinical trials, sample size reestimation can be a complicated issue when the change in sample size is allowed to depend on the observed sample path. Our simulation studies show that increasing the sample size based on an interim estimate of the treatment difference can substantially inflate the probability of type I error in most practical situations. A new group sequential test procedure is developed by modifying the weights used in the traditional repeated significance two-sample mean test. The new test preserves the type I error probability at the target level and can provide a substantial gain in power with the increase in sample size. Generalization of the new procedure is discussed.
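The key device – combining stage-wise statistics with the weights fixed at the originally planned information fractions, so that enlarging the second stage after the interim look does not inflate the type I error – can be sketched as follows. The planning numbers and interim values below are invented for illustration, and this is not the authors' code.

```python
# Sketch of a two-stage weighted combination test: the stage weights are fixed
# at the originally planned information fractions, so increasing the stage-2
# sample size after the interim look does not change the null distribution of
# the combined statistic. Planning numbers are invented for illustration.
import math

def weighted_z(z1, z2, planned_n1, planned_n2):
    """Combine stage-wise Z statistics with pre-specified (planned) weights."""
    w1 = planned_n1 / (planned_n1 + planned_n2)
    w2 = planned_n2 / (planned_n1 + planned_n2)
    return math.sqrt(w1) * z1 + math.sqrt(w2) * z2

# Originally planned: 100 patients per group per stage. At the interim the
# observed effect looks small, so the second stage is enlarged - but z2 is
# computed from the stage-2 data alone and combined with the ORIGINAL weights.
z1 = 1.10   # stage-1 Z statistic (hypothetical)
z2 = 1.80   # stage-2 Z statistic from the enlarged second stage (hypothetical)

z_combined = weighted_z(z1, z2, planned_n1=100, planned_n2=100)
print(f"combined Z = {z_combined:.3f}")  # compare against the group sequential boundary
```

Because the weights do not depend on the data-driven sample size change, the combined statistic keeps its planned null distribution, which is what preserves the type I error rate.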
128.
Durban, M.; Hackett, C. A.; Currie, I. D. Biometrics, 1999, 55(3): 699-703
We consider semiparametric models with p regressor terms and q smooth terms. We obtain an explicit expression for the estimate of the regression coefficients given by the back-fitting algorithm. The calculation of the standard errors of these estimates based on this expression is a considerable computational exercise. We present an alternative, approximate method of calculation that is less demanding. With smoothing splines, the method is exact, while with loess, it gives good estimates of standard errors. We assess the adequacy of our approximation and of another approximation with the help of two examples.
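For readers unfamiliar with back-fitting, the sketch below shows the basic alternation between the parametric and smooth parts of a partially linear model. It uses a crude moving-average smoother and simulated data, and it is not the estimator or the standard-error approximation studied in the paper.

```python
# Minimal back-fitting loop for a partially linear model y = x*beta + f(t) + e:
# alternate between (1) least squares for beta on y minus the current smooth and
# (2) smoothing the partial residuals against t. Crude smoother, simulated data.
import numpy as np

rng = np.random.default_rng(3)
n = 400

x = rng.normal(size=n)                      # parametric regressor
t = np.sort(rng.uniform(0, 1, size=n))      # covariate entering smoothly (sorted)
f_true = np.sin(2 * np.pi * t)
y = 1.5 * x + f_true + rng.normal(scale=0.3, size=n)

def smooth(values, window=31):
    """Centered moving average as a stand-in for a spline/loess smoother."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

beta, f_hat = 0.0, np.zeros(n)
for _ in range(50):                          # back-fitting iterations
    beta = np.sum(x * (y - f_hat)) / np.sum(x * x)   # least-squares step for beta
    f_hat = smooth(y - beta * x)                     # smooth the partial residuals
    f_hat -= f_hat.mean()                            # centre for identifiability

print(f"estimated beta = {beta:.3f} (true value 1.5)")
```

The paper's contribution is an explicit closed-form expression for the coefficient estimate this iteration converges to, plus a cheaper approximation to its standard errors.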
129.
The genetic code is not random but instead is organized in such a way that single nucleotide substitutions are more likely to result in changes between similar amino acids. This fidelity, or error minimization, has been proposed to be an adaptation within the genetic code. Many models have been proposed to measure this adaptation within the genetic code. However, we find that none of these consider codon usage differences between species. Furthermore, use of different indices of amino acid physicochemical characteristics leads to different estimations of this adaptation within the code. In this study, we try to establish a more accurate model to address this problem. In our model, a weighting scheme is established for mistranslation biases of the three different codon positions, transition/transversion biases, and codon usage. Different indices of amino acid physicochemical characteristics are also considered. In contrast to previous work, our results show that the natural genetic code is not fully optimized for error minimization. The genetic code, therefore, is not the most optimized one for error minimization, but one that balances flexibility and fidelity for different species.
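A bare-bones version of the error-minimization score discussed here can be computed as the mean squared change in an amino acid property over all single-nucleotide substitutions. The sketch below uses the Kyte-Doolittle hydropathy scale and equal weights throughout, so it deliberately omits the position-specific, transition/transversion and codon-usage weighting that the study introduces; it is an illustration of the general calculation, not the authors' model.

```python
# Sketch of an (unweighted) error-minimization score for the standard genetic
# code: mean squared change in Kyte-Doolittle hydropathy over all single-nucleotide
# substitutions that do not create or destroy a stop codon. The position-,
# mutation- and codon-usage weights described in the abstract are omitted.
from itertools import product

BASES = "TCAG"
# Standard genetic code, codons ordered TTT, TTC, TTA, TTG, TCT, ... ('*' = stop).
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AA)}

# Kyte-Doolittle hydropathy index.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def code_cost(code):
    """Mean squared hydropathy change over all single-nucleotide substitutions."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos, alt in product(range(3), BASES):
            if alt == codon[pos]:
                continue
            mutant = code[codon[:pos] + alt + codon[pos + 1:]]
            if mutant == "*":
                continue
            total += (KD[aa] - KD[mutant]) ** 2
            count += 1
    return total / count

print(f"standard code cost: {code_cost(CODE):.3f}")
# Comparing this cost with the costs of many random reassignments of amino acids
# to codon blocks is the usual way to ask how "optimized" the natural code is.
```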
130.
Peek MS  Leffler AJ  Flint SD  Ryel RJ 《Oecologia》2003,137(2):161-170
Oecologia - A recent meta-analysis of meta-analyses by Møller and Jennions (2002, Oecologia 132:492–500) suggested that ecologists using statistical models are explaining between 2.5%...