911.
To determine the frost-resistance types of different apple cultivars from the flower-bud stage to the young-fruit stage and to establish an accurate method for evaluating late-frost resistance in apple, flowers and fruits of the two main cultivars grown in Ningxia, 'Gala' (嘎啦) and 'Fuji' (富士), were sampled at the flower-bud, full-bloom, fruit-set and young-fruit stages. A field frost-simulation chamber was used to reproduce the natural cooling process at each stage, and Logistic equations were fitted at freezing-injury rates of 20%, 50% and 80% to determine the critical temperatures for light, moderate and severe frost injury. Cold-hardiness indicators, including the supercooling point, freezing point, relative electrical conductivity, soluble sugar and soluble protein, were measured at each phenological stage, and multiple linear regression was used to identify the main factors underlying differences in cold hardiness from flowering to the young-fruit stage. The results showed that: (1) 'Gala' was more cold hardy than 'Fuji' at every phenological stage; (2) within a cultivar, cold hardiness differed among stages, ranking flower-bud stage > full-bloom stage > fruit-set stage > young-fruit stage; (3) for both cultivars, the critical temperatures for light, moderate and severe injury rose as the phenological stages advanced; (4) the freezing-injury rate was very significantly positively correlated with the semi-lethal temperature (0.909**) and very significantly negatively correlated with soluble-protein content (−0.874**). The study indicates that, across the four phenological stages of 'Gala' and 'Fuji', the relative electrical conductivity and soluble-protein content of the floral organs responded strongly to the ovary freezing-injury rate, and the supercooling point and freezing point also showed some response; relative electrical conductivity, soluble-protein content, supercooling point and freezing point can therefore serve as key indicators for evaluating cold hardiness in apple.
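A minimal sketch of the logistic-fit step described above, not the authors' code: freezing-injury rate versus exposure temperature is fitted with a logistic equation, and the critical temperatures at 20%, 50% and 80% injury are read off the fitted curve. The temperature and injury values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, t50):
    """Injury rate (0-1) as a logistic function of exposure temperature t (deg C)."""
    return 1.0 / (1.0 + np.exp(k * (t - t50)))

# Hypothetical frost-chamber data: exposure temperature vs. observed injury rate
temps  = np.array([-1.0, -2.0, -3.0, -4.0, -5.0, -6.0])
injury = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])

(k, t50), _ = curve_fit(logistic, temps, injury, p0=[2.0, -3.5])

def critical_temp(p):
    """Temperature at which the fitted injury rate equals p (inverted logistic)."""
    return t50 + np.log(1.0 / p - 1.0) / k

for p in (0.2, 0.5, 0.8):   # light / moderate / severe injury thresholds
    print(f"T at {p:.0%} injury: {critical_temp(p):.2f} deg C")
```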
912.
Despite the popularity of discriminant analysis of principal components (DAPC) for studying population structure, there has been little discussion of best practice for this method. In this work, I provide guidelines for standardizing the application of DAPC to genotype data sets. An often overlooked fact is that DAPC generates a model describing genetic differences among a set of populations defined by a researcher. Appropriate parameterization of this model is critical for obtaining biologically meaningful results. I show that the number of leading PC axes used as predictors of among-population differences, p_axes, should not exceed the k − 1 biologically informative PC axes that are expected for k effective populations in a genotype data set. This k − 1 criterion for p_axes specification is more appropriate compared to the widely used proportional variance criterion, which often results in a choice of p_axes ≫ k − 1. DAPC parameterized with no more than the leading k − 1 PC axes: (i) is more parsimonious; (ii) captures maximal among-population variation on biologically relevant predictors; (iii) is less sensitive to unintended interpretations of population structure; and (iv) is more generally applicable to independent sample sets. Assessing model fit should be routine practice and aids interpretation of population structure. It is imperative that researchers articulate their study goals, that is, testing a priori expectations vs. studying de novo inferred populations, because this has implications on how their DAPC results should be interpreted. The discussion and practical recommendations in this work provide the molecular ecology community with a roadmap for using DAPC in population genetic investigations.
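A minimal sketch of the DAPC idea discussed above, assuming a PCA-plus-LDA workflow rather than the author's adegenet implementation: reduce a genotype matrix with PCA, then fit the discriminant step on no more than k − 1 leading PC axes for k groups. The genotype matrix and population labels are simulated for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical genotype matrix: 90 individuals x 200 biallelic loci (0/1/2 counts),
# drawn from k = 3 populations with slightly different allele frequencies.
k = 3
freqs = rng.uniform(0.2, 0.8, size=(k, 200))
geno = np.vstack([rng.binomial(2, freqs[i], size=(30, 200)) for i in range(k)])
pops = np.repeat(np.arange(k), 30)

# Parameterize with at most k - 1 leading PC axes (here p_axes = 2).
p_axes = k - 1
pcs = PCA(n_components=p_axes).fit_transform(geno - geno.mean(axis=0))

dapc = LinearDiscriminantAnalysis(n_components=k - 1).fit(pcs, pops)
scores = dapc.transform(pcs)            # discriminant-function scores per individual
print("assignment accuracy:", dapc.score(pcs, pops))
```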
913.
In an observational study, the treatment received and the outcome exhibited may be associated in the absence of an effect caused by the treatment, even after controlling for observed covariates. Two tactics are common: (i) a test for unmeasured bias may be obtained using a secondary outcome for which the effect is known; and (ii) a sensitivity analysis may explore the magnitude of unmeasured bias that would need to be present to explain the observed association as something other than an effect caused by the treatment. Can such a test for unmeasured bias inform the sensitivity analysis? If the test for bias does not discover evidence of unmeasured bias, then ask: Are conclusions therefore insensitive to larger unmeasured biases? Conversely, if the test for bias does find evidence of bias, then ask: What does that imply about sensitivity to biases? This problem is formulated in a new way as a convex quadratically constrained quadratic program and solved on a large scale using interior point methods by a modern solver. That is, a convex quadratic function of N variables is minimized subject to constraints on linear and convex quadratic functions of these variables. The quadratic function that is minimized is a statistic for the primary outcome that is a function of the unknown treatment assignment probabilities. The quadratic function that constrains this minimization is a statistic for the subsidiary outcome that is also a function of these same unknown treatment assignment probabilities. In effect, the first statistic is minimized over a confidence set for the unknown treatment assignment probabilities supplied by the unaffected outcome. This process avoids the mistake of interpreting the failure to reject a hypothesis as support for the truth of that hypothesis. The method is illustrated by a study of the effects of light daily alcohol consumption on high-density lipoprotein (HDL) cholesterol levels. In this study, the method quickly optimizes a nonlinear function of N = 800 variables subject to linear and quadratic constraints. In the example, strong evidence of unmeasured bias is found using the subsidiary outcome, but, perhaps surprisingly, this finding makes the primary comparison insensitive to larger biases.
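A minimal sketch of the convex QCQP form described above, not the paper's actual statistics: a convex quadratic function of N variables is minimized subject to a linear constraint and a convex quadratic constraint, here posed with cvxpy on hypothetical data so a conic interior-point style solver can handle it.

```python
import numpy as np
import cvxpy as cp

n = 20
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n)); P1 = A @ A.T + np.eye(n)   # positive definite
B = rng.standard_normal((n, n)); P2 = B @ B.T + np.eye(n)
q1 = rng.standard_normal(n); q2 = rng.standard_normal(n)

x = cp.Variable(n)
# Convex quadratic objective (stand-in for the primary-outcome statistic)
objective = cp.Minimize(cp.quad_form(x, P1) + q1 @ x)
# Linear constraint plus a convex quadratic constraint
# (stand-in for the confidence set supplied by the subsidiary outcome)
constraints = [cp.sum(x) == 1.0,
               cp.quad_form(x, P2) + q2 @ x <= 10.0]

prob = cp.Problem(objective, constraints)
prob.solve()   # cvxpy reformulates the QCQP for a conic solver
print("optimal value:", prob.value)
```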
914.
The mechanical characterization of skeletal muscles depends strongly on numerous experimental design factors. Nevertheless, significant knowledge gaps remain in the characterization of muscle mechanics, and a large number of experiments would be needed to test the influence of the many relevant factors. In this study, we propose a design-of-experiments (DOE) method to study parameter sensitivity while minimizing the number of tests. A Box-Behnken design was implemented to study the influence of strain rate, preconditioning and preloading conditions on the visco-hyperelastic mechanical parameters of two rat forearm muscles. The results show that strain rate affects the visco-hyperelastic parameters for both muscles. These results are consistent with previous work demonstrating that stiffness and viscoelastic contributions increase with strain rate. Thus, DOE has been shown to be a valid method for determining the effect of experimental conditions on the mechanical behaviour of biological tissues such as skeletal muscle. The method considerably reduces the number of experiments: the presented study, with 3 factors at 3 levels, would have required at least 54 tests per muscle, compared with 14 for the proposed DOE method.
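A minimal sketch of a three-factor Box-Behnken design like the one described above, with hypothetical factor names: each pair of factors is varied at coded levels ±1 while the third is held at its centre level, plus centre runs, giving 12 + 2 = 14 runs instead of a full factorial.

```python
import itertools
import numpy as np

factors = ["strain_rate", "preconditioning", "preload"]   # assumed labels
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1, 1), repeat=2):
        point = [0, 0, 0]          # third factor held at the centre level
        point[i], point[j] = a, b
        runs.append(point)
runs += [[0, 0, 0]] * 2            # centre points
design = np.array(runs)

print(design.shape)                # (14, 3) coded design matrix
print(design)
```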
915.
By a suitable transformation of the pairs of observations obtained in the successive periods of the trial, bioequivalence assessment in a standard comparative bioavailability study reduces to testing for equivalence of two continuous distributions from which unrelated samples are available. Let the two distribution functions be given by F(x) = P[X ≤ x] and G(y) = P[Y ≤ y], with (X, Y) denoting an independent pair of real-valued random variables. An intuitively appealing way of putting the notion of equivalence of F and G into nonparametric terms can be based on the distance of the functional P[X > Y] from the value it takes when F and G coincide. This leads to the problem of testing the null hypothesis H0: P[X > Y] ≤ 1/2 − ε1 or P[X > Y] ≥ 1/2 + ε2 versus H1: 1/2 − ε1 < P[X > Y] < 1/2 + ε2, with sufficiently small ε1, ε2 ∈ (0, 1/2). The testing procedure we derive for (H0, H1), and propose to term the Mann-Whitney test for equivalence, consists of carrying out, in terms of the U-statistic estimator of P[X > Y], the uniformly most powerful level-α test for an interval hypothesis about the mean of a Gaussian distribution with fixed variance. The test is shown to be asymptotically distribution-free with respect to the significance level. In addition, results of an extensive simulation study are presented which suggest that the new test controls the level even with sample sizes as small as 10. For normally distributed data, the loss in power against the optimal parametric procedure is found to be almost as small as in comparisons between the Mann-Whitney and the t-statistic in the conventional one- or two-sided setting, provided the power of the parametric test does not fall short of 80%.
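A rough sketch of the idea behind the Mann-Whitney test for equivalence described above, under simplifying assumptions (symmetric margins ε1 = ε2 = ε, an asymptotic variance estimate for the U-statistic, and a noncentral chi-square critical constant as in interval-hypothesis testing for a Gaussian mean); it illustrates the construction, not the paper's exact derivation.

```python
import numpy as np
from scipy import stats

def mw_equivalence_test(x, y, eps=0.1, alpha=0.05):
    m, n = len(x), len(y)
    # U-statistic estimator of P[X > Y]
    gt = (x[:, None] > y[None, :]).astype(float)
    w = gt.mean()
    # Asymptotic variance estimate from the U-statistic components
    pi_x = gt.mean(axis=1)        # per-X averages over the Y sample
    pi_y = gt.mean(axis=0)        # per-Y averages over the X sample
    se = np.sqrt(pi_x.var(ddof=1) / m + pi_y.var(ddof=1) / n)
    # Critical constant of the interval-hypothesis test for a Gaussian mean:
    # reject non-equivalence when |w - 1/2| / se is small enough.
    crit = np.sqrt(stats.ncx2.ppf(alpha, df=1, nc=(eps / se) ** 2))
    z = abs(w - 0.5) / se
    return w, bool(z < crit)      # True = equivalence established

rng = np.random.default_rng(2)
x, y = rng.normal(0, 1, 40), rng.normal(0, 1, 40)
print(mw_equivalence_test(x, y, eps=0.15))
```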
916.
917.
918.
A well-known problem in classical two-tailed hypothesis testing is that P-values go to zero as the sample size goes to infinity, irrespective of the effect size. This pitfall can make tests on data with very large sample sizes unreliable. In this note, we propose testing for relevant differences to overcome this issue. We illustrate the proposed test on a real data set of about 40 million privately insured patients.
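A minimal sketch of testing for a relevant difference in a two-sample setting, under assumed normal approximations (not the note's actual procedure or data): the null hypothesis is |mu1 − mu2| ≤ delta, so a tiny, practically irrelevant difference no longer drives the P-value to zero at huge sample sizes.

```python
import numpy as np
from scipy import stats

def relevant_difference_test(x, y, delta, alpha=0.05):
    diff = x.mean() - y.mean()
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    # Reject H0: |mu1 - mu2| <= delta only if the estimated difference
    # exceeds delta by more than the usual normal margin (boundary test).
    z = (abs(diff) - delta) / se
    p = stats.norm.sf(z)
    return diff, p, bool(p < alpha)

rng = np.random.default_rng(3)
x = rng.normal(0.00, 1.0, 2_000_000)   # very large samples
y = rng.normal(0.01, 1.0, 2_000_000)   # tiny, practically irrelevant shift
# A classical two-sided test would reject here; the relevance test does not.
print(relevant_difference_test(x, y, delta=0.1))
```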
919.
Multiple diagnostic tests are often used due to limited resources or because they provide complementary information on the epidemiology of a disease under investigation. Existing statistical methods for combining prevalence data from multiple diagnostics ignore the potential overdispersion induced by spatial correlation in the data. To address this issue, we develop a geostatistical framework that allows joint modelling of data from multiple diagnostics, considering two main classes of inferential problems: (a) predicting prevalence for a gold-standard diagnostic using low-cost and potentially biased alternative tests; (b) carrying out joint prediction of prevalence from multiple tests. We apply the proposed framework to two case studies: mapping Loa loa prevalence in Central and West Africa using microscopy and a questionnaire-based test called RAPLOA; and mapping Plasmodium falciparum malaria prevalence in the highlands of Western Kenya using polymerase chain reaction and a rapid diagnostic test. We also develop a Monte Carlo procedure based on the variogram in order to identify parsimonious geostatistical models that are compatible with the data. Our study highlights (a) the importance of accounting for diagnostic-specific residual spatial variation and (b) the benefits accrued from joint geostatistical modelling in delivering more reliable and precise inferences on disease prevalence.
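A minimal sketch of a variogram-based Monte Carlo check in the spirit of the procedure mentioned above, with hypothetical locations and stand-in residuals, simplified binning, and a permutation envelope; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def empirical_variogram(coords, values, bins):
    d = pdist(coords)                                   # pairwise distances
    g = 0.5 * pdist(values[:, None], metric="sqeuclidean")  # semivariances
    idx = np.digitize(d, bins)
    return np.array([g[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(200, 2))   # hypothetical survey locations
resid = rng.standard_normal(200)              # stand-in model residuals
bins = np.linspace(0, 50, 11)

obs = empirical_variogram(coords, resid, bins)
# Monte Carlo envelope: permute residuals over locations to break any
# spatial structure, then recompute the variogram many times.
sims = np.array([empirical_variogram(coords, rng.permutation(resid), bins)
                 for _ in range(199)])
lo, hi = np.nanpercentile(sims, [2.5, 97.5], axis=0)
# Residual spatial correlation is flagged where obs falls outside [lo, hi].
print(np.column_stack([obs, lo, hi]))
```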
920.
Reliable measures of body composition are essential for developing effective policies to tackle obesity. The lack of an acceptable gold standard for measuring fatness has made it difficult to evaluate alternative measures of obesity. We use latent class analysis to characterise existing diagnostics. Using data on US adults, we show that measures based on body mass index and bioelectrical impedance analysis misclassify large numbers of individuals. For example, 45% of obese White women are misclassified as non-obese using body mass index, while over 50% of non-obese White women are misclassified as obese using bioelectrical impedance analysis. In contrast, misclassification rates are low when waist circumference is used to measure obesity. These results have important implications for our understanding of differences in obesity rates across time and groups, as well as posing challenges for the econometric analysis of obesity.
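A compact sketch of the latent class idea used above, on simulated indicators with assumed sensitivities and specificities (not the study's data or estimates): each binary diagnostic is treated as an error-prone indicator of an unobserved obese/non-obese class, and EM recovers the class prevalence and per-test accuracy.

```python
import numpy as np

rng = np.random.default_rng(5)
n, prev = 5000, 0.35
true = rng.random(n) < prev                       # latent obesity status
sens = np.array([0.55, 0.80, 0.90])               # assumed per-test sensitivities
spec = np.array([0.95, 0.70, 0.90])               # assumed per-test specificities
p_pos = np.where(true[:, None], sens, 1 - spec)
X = (rng.random((n, 3)) < p_pos).astype(float)    # observed binary test results

# EM for a 2-class latent class model with conditional independence
pi, a, b = 0.5, np.full(3, 0.7), np.full(3, 0.3)  # P(class1), P(+|c1), P(+|c0)
for _ in range(200):
    l1 = pi * np.prod(a**X * (1 - a)**(1 - X), axis=1)
    l0 = (1 - pi) * np.prod(b**X * (1 - b)**(1 - X), axis=1)
    r = l1 / (l1 + l0)                            # E-step: P(class1 | data)
    pi = r.mean()                                 # M-step updates
    a = (r[:, None] * X).sum(0) / r.sum()
    b = ((1 - r)[:, None] * X).sum(0) / (1 - r).sum()

print("estimated prevalence:", round(pi, 3))
print("estimated P(test + | obese):", a.round(3))
```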