Sort by: 1,912 results in total; search took 15 ms
1.
《Developmental cell》2021,56(22):3082-3099.e5
2.
For the estimation of the population mean in simple random sampling, an efficient regression-type estimator is proposed that is more efficient than the conventional regression estimator, and hence also than the mean-per-unit, ratio, and product estimators and many other estimators proposed by various authors. Some numerical examples are included for illustration.
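The baseline the abstract compares against is the classical linear regression estimator of a population mean, which adjusts the sample mean of y using an auxiliary variable x whose population mean is known. A minimal sketch (the function name and data are illustrative, not from the paper):

```python
def regression_estimate(y_sample, x_sample, X_bar):
    """Classical regression estimator of the population mean of y in
    simple random sampling, using an auxiliary variable x with known
    population mean X_bar: y_lr = y_bar + b * (X_bar - x_bar)."""
    n = len(y_sample)
    y_bar = sum(y_sample) / n
    x_bar = sum(x_sample) / n
    # Least-squares slope b = S_xy / S_xx estimated from the sample.
    sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(x_sample, y_sample))
    sxx = sum((x - x_bar) ** 2 for x in x_sample)
    b = sxy / sxx
    return y_bar + b * (X_bar - x_bar)
```

When y is strongly correlated with x, the adjustment term b * (X_bar - x_bar) removes most of the sampling error in y_bar, which is why the mean-per-unit estimator is dominated.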
3.
On blocking rules for the bootstrap with dependent data   (total citations: 9; self-citations: 0; citations by others: 9)
We address the issue of optimal block choice in applications of the block bootstrap to dependent data. It is shown that the optimal block size depends significantly on context, being of order n^(1/3), n^(1/4) and n^(1/5) in the cases of variance or bias estimation, estimation of a one-sided distribution function, and estimation of a two-sided distribution function, respectively. A clear intuitive explanation of this phenomenon is given, together with outlines of theoretical arguments in specific cases. It is shown that these orders of magnitude of block sizes can be used to produce a simple, practical rule for selecting block size empirically. That technique is explored numerically.
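The n^(1/3) rule for variance estimation can be illustrated with a moving-block bootstrap: overlapping blocks of length b are resampled with replacement and concatenated to preserve short-range dependence. A hedged sketch (the function name and defaults are illustrative, not the authors' code):

```python
import random


def block_bootstrap_var(series, stat, n_boot=500, block_len=None, seed=0):
    """Moving-block bootstrap estimate of the variance of `stat`.
    The default block length follows the n**(1/3) order of magnitude
    appropriate for variance estimation."""
    rng = random.Random(seed)
    n = len(series)
    b = block_len or max(1, round(n ** (1 / 3)))
    # All overlapping blocks of length b.
    blocks = [series[i:i + b] for i in range(n - b + 1)]
    reps = []
    for _ in range(n_boot):
        resample = []
        while len(resample) < n:
            resample.extend(rng.choice(blocks))
        reps.append(stat(resample[:n]))
    mean_rep = sum(reps) / n_boot
    return sum((r - mean_rep) ** 2 for r in reps) / (n_boot - 1)
```

In practice one would replace the default with n^(1/4) or n^(1/5) scaling when the target is a one- or two-sided distribution function, as the abstract indicates.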
6.
ABSTRACT Telemetry data have been widely used to quantify wildlife habitat relationships despite the fact that these data are inherently imprecise. All telemetry data have positional error, and failure to account for that error can lead to incorrect predictions of wildlife resource use. Several techniques have been used to account for positional error in wildlife studies. These techniques have been described in the literature, but their ability to accurately characterize wildlife resource use has never been tested. We evaluated the performance of techniques commonly used for incorporating telemetry error into studies of wildlife resource use. Our evaluation was based on imprecise telemetry data (mean telemetry error = 174 m, SD = 130 m) typical of field-based studies. We tested 5 techniques in 10 virtual environments and in one real-world environment for categorical (i.e., habitat types) and continuous (i.e., distances or elevations) rasters. Technique accuracy varied by patch size for the categorical rasters, with higher accuracy as patch size increased. At the smallest patch size (1 ha), the technique that ignores error performed best on categorical data (0.31 and 0.30 accuracy for virtual and real data, respectively); however, as patch size increased the bivariate-weighted technique performed better (0.56 accuracy at patch sizes >31 ha) and achieved complete accuracy (i.e., 1.00 accuracy) at smaller patch sizes (472 ha and 1,522 ha for virtual and real data, respectively) than any other technique. We quantified the accuracy of the continuous covariates using the mean absolute difference (MAD) in covariate value between true and estimated locations. We found that average MAD varied between 104 m (ignore telemetry error) and 140 m (rescale the covariate data) for our continuous covariate surfaces across virtual and real data sets. Techniques that rescale continuous covariate data or use a zonal mean on values within a telemetry error polygon were significantly less accurate than other techniques. Although the technique that ignored telemetry error performed best on categorical rasters with smaller average patch sizes (i.e., ≤31 ha) and on continuous rasters in our study, accuracy was so low that the utility of using point-based approaches for quantifying resource use is questionable when telemetry data are imprecise, particularly for small-patch habitat relationships.
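The MAD metric used for the continuous rasters is simply the mean absolute difference in covariate value between each true location and its telemetry-estimated counterpart. A minimal sketch on a gridded raster (the function name and grid indexing are illustrative assumptions, not the authors' implementation):

```python
def mad(raster, true_pts, est_pts):
    """Mean absolute difference in covariate value between true and
    estimated animal locations; points are (row, col) grid indices
    into a 2-D raster of covariate values."""
    diffs = [abs(raster[r1][c1] - raster[r2][c2])
             for (r1, c1), (r2, c2) in zip(true_pts, est_pts)]
    return sum(diffs) / len(diffs)
```

A lower MAD means the technique recovers covariate values closer to those at the animal's true positions.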
8.
The uncertainties in the refined parameters for a 1.5-Å X-ray structure of carbon-monoxy (FeII) myoglobin are estimated by combining energy minimization with least-squares refinement against the X-ray data. The energy minimizations, done without reference to the X-ray data, provide perturbed structures which are used to restart conventional X-ray refinement. The resulting refined structures have the same, or better, R-factor and stereochemical parameters as the original X-ray structure, but deviate from it by 0.13 Å rms for the backbone atoms and 0.31 Å rms for the sidechain atoms. Atoms interacting with a disordered sidechain, Arg 45 CD3, are observed to have larger positional uncertainties. The uncertainty in the B-factors, within the isotropic harmonic motion approximation, is estimated to be 15%. The resulting X-ray structures are more consistent with the energy parameters used in simulations.
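The quoted rms deviations between refined structures are coordinate root-mean-square deviations over matched atom sets. A minimal sketch, assuming both structures are already expressed in the same coordinate frame (no superposition step; the function name is illustrative):

```python
import math


def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two matched lists of
    (x, y, z) atomic coordinates, in the same units (e.g. angstroms)."""
    assert len(coords_a) == len(coords_b), "atom sets must be matched"
    sq = sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
             for (xa, ya, za), (xb, yb, zb) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

Evaluating this separately over backbone and sidechain atom subsets gives the kind of 0.13 Å vs. 0.31 Å comparison reported above.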
9.
The absolute volume of biological objects is often estimated stereologically from an exhaustive set of systematic sections. The usual volume estimator V̂ is the sum of the section contents times the distance between sections. For systematic sectioning with a random start, it has recently been shown that V̂ is unbiased when m, the ratio between projected object length and section distance, is an integer number (Cruz-Orive 1985). As this quantity is not an integer in the real world, we have explored the properties of V̂ in the general and realistic situation of non-integer m. The unbiasedness of V̂ under appropriate sampling conditions is demonstrated for an arbitrary compact set in 3 dimensions by a rigorous proof. Exploration of further properties of V̂ for the general triaxial ellipsoid leads to a new class of non-elementary real functions with common formal structure which we denote as np-functions. The relative mean square error (CE²) of V̂ in ellipsoids is an oscillating differentiable np-function, which reduces to the known result CE² = 1/(5m⁴) for integer m. As a biological example, the absolute volumes of 10 left cardiac ventricles and their internal cavities were estimated from systematic sections. Monte Carlo simulation of replicated systematic sectioning is shown to be improved by using the exact non-integer m instead of its integer approximation. In agreement with the geometric model of ellipsoids with some added shape irregularities, the mean empirical CE was proportional to m^(−1.36) and m^(−1.73) in the cardiac ventricle and its cavity, respectively. The considerable variance reduction achieved by systematic sectioning is shown to be a geometric realization of the principle of antithetic variates.
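The section-based (Cavalieri) estimator and its integer-m error formula can be written down directly. A minimal sketch (function names are illustrative, not from the paper):

```python
def cavalieri_volume(section_areas, spacing):
    """Cavalieri estimate of an object's volume from systematic
    sections: V-hat = spacing * sum of the section areas."""
    return spacing * sum(section_areas)


def ce2_integer_m(m):
    """Relative mean square error CE^2 = 1 / (5 * m**4), valid when m
    (projected object length / section spacing) is an integer."""
    return 1.0 / (5 * m ** 4)
```

For non-integer m the abstract shows that CE² instead follows an oscillating np-function, so this closed form is only the integer-m special case.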
10.
Binomial sampling, which estimates the population density of an organism simply from the frequency of its occurrence among sampled quadrats, is a labour-saving technique which is potentially useful for small animals like insects and has actually been applied occasionally to studies of their populations. The present study provides a theoretical basis for this convenient technique, making it statistically reliable and tolerable for consistent use in intensive as well as preliminary population censuses. First, the magnitude of sampling error in relation to sample size is formulated mathematically for the estimate obtained by this indirect method of census, using either of the two popular models relating frequency of occurrence (p) to mean density (m): the negative binomial model, p = 1 − (1 + m/k)^(−k), and the empirical model, p = 1 − exp(−a·m^b). Then, the equations to calculate the sample size and census cost necessary to attain a given desired level of precision are derived for both models. A notable feature of the relationship of necessary sample size (or census cost) to mean density in the frequency method, in contrast to that in the ordinary census, is that it shows a concave curve which tends to rise sharply not only towards lower but also towards higher levels of density. These theoretical results also make it possible to design sequential estimation procedures based on this convenient census technique, which may enable us to obtain, at the least necessary cost, a series of population estimates with the desired precision level. Examples are presented to explain how to apply these programs to actual censuses in the field.
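Both occurrence-frequency models above can be inverted in closed form to recover mean density m from an observed frequency p. A minimal sketch (function names are illustrative; p must lie strictly between 0 and 1):

```python
import math


def density_from_frequency_nb(p, k):
    """Invert the negative binomial model p = 1 - (1 + m/k)**(-k)
    to recover mean density m from occurrence frequency p."""
    return k * ((1 - p) ** (-1.0 / k) - 1)


def density_from_frequency_emp(p, a, b):
    """Invert the empirical model p = 1 - exp(-a * m**b)."""
    return (-math.log(1 - p) / a) ** (1 / b)
```

The sharp rise of the necessary sample size at high density reflects the flatness of both curves as p approaches 1: a tiny error in p then maps to a large error in m.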

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号