1.
Developmental Cell (2021), 56(22): 3082-3099.e5
2.
For the estimation of the population mean in simple random sampling, an efficient regression-type estimator is proposed; it is more efficient than the conventional regression estimator, and hence than the mean-per-unit, ratio, and product estimators and many other estimators proposed by various authors. Some numerical examples are included for illustration.
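A minimal Monte Carlo sketch of the baseline estimators the proposed estimator is compared against (mean per unit, conventional regression, ratio); the population, the sample size, and the use of the sample slope b are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical finite population: auxiliary variable x, study variable y.
N = 5000
x = rng.gamma(shape=4.0, scale=10.0, size=N)
y = 2.5 * x + rng.normal(0.0, 15.0, size=N)    # y correlated with x
X_bar = x.mean()                               # population mean of x (assumed known)
Y_bar = y.mean()                               # target: population mean of y

n = 100       # sample size (assumed)
runs = 5000   # Monte Carlo replications

est_mean, est_reg, est_ratio = [], [], []
for _ in range(runs):
    idx = rng.choice(N, size=n, replace=False)           # SRS without replacement
    xs, ys = x[idx], y[idx]
    b = np.cov(xs, ys)[0, 1] / np.var(xs, ddof=1)        # sample regression slope
    est_mean.append(ys.mean())                           # mean-per-unit estimator
    est_reg.append(ys.mean() + b * (X_bar - xs.mean()))  # regression estimator
    est_ratio.append(ys.mean() / xs.mean() * X_bar)      # ratio estimator

for name, est in [("mean per unit", est_mean),
                  ("regression", est_reg),
                  ("ratio", est_ratio)]:
    mse = np.mean((np.asarray(est) - Y_bar) ** 2)
    print(f"{name:14s} MSE = {mse:8.3f}")
```

With a strongly correlated auxiliary variable, the regression estimator's MSE should come out well below the mean-per-unit MSE, which is the baseline the proposed estimator improves on further.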
3.
Realistic power calculations for large cohort studies and nested case-control studies are essential for successfully answering important and complex research questions in epidemiology and clinical medicine. To this end, we provide a methodical framework for general, realistic power calculations via simulation, which we put into practice by means of an R-based template. We consider staggered recruitment and individual hazard rates, competing risks, interaction effects, and the misclassification of covariates. The study cohort is assembled with respect to given age, gender, and community distributions. Nested case-control analyses with a varying number of controls enable comparisons of power with a full cohort analysis. Time-to-event generation under competing risks, including delayed study-entry times, is realized on the basis of a six-state Markov model. Incidence rates, the prevalence of risk factors, and prefixed hazard ratios allow for the assignment of age-dependent transition rates given in the form of Cox models. These provide the basis for a central simulation algorithm, which is used to generate sample paths of the underlying time-inhomogeneous Markov processes. With the inclusion of frailty terms in the Cox models, the Markov property is deliberately perturbed: an "individual Markov process given frailty" creates unobserved heterogeneity between individuals. Different left-truncation and right-censoring patterns call for the use of Cox models for data analysis. p-values are recorded over repeated simulation runs to yield the desired power estimates. For illustration, we consider both scenarios with a "testing" character and realistic scenarios; this enables validation of a correct implementation of the theoretical concepts, as well as concrete sample-size recommendations against an actual epidemiological background, here given by possible substudy designs within the German National Cohort.
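A heavily stripped-down sketch of the simulation idea: generate competing-risks event times, apply administrative censoring, and record p-values over repeated runs to estimate power. The constant cause-specific hazards, the two-group design, and the Wald test on the log rate ratio are simplifying assumptions standing in for the paper's six-state Markov model and Cox/nested case-control analyses:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def simulate_power(n_per_group=2000, hr=1.5, lam_event=0.002, lam_compete=0.004,
                   followup=10.0, runs=1000, alpha=0.05):
    """Estimated power to detect a cause-specific hazard ratio `hr`
    for the event of interest in the presence of a competing risk."""
    rejections = 0
    for _ in range(runs):
        stats = []
        for group_hr in (1.0, hr):                    # unexposed, exposed
            l1 = lam_event * group_hr                 # cause-specific hazard, event
            l2 = lam_compete                          # competing-risk hazard
            t = rng.exponential(1.0 / (l1 + l2), size=n_per_group)
            cause1 = rng.random(n_per_group) < l1 / (l1 + l2)  # which cause fired
            censored = t > followup                   # administrative censoring
            time_at_risk = np.minimum(t, followup)
            events = np.sum(cause1 & ~censored)       # observed cause-1 events
            stats.append((events, time_at_risk.sum()))
        (d0, T0), (d1, T1) = stats
        if d0 == 0 or d1 == 0:
            continue                                  # degenerate run: no events
        log_hr_hat = np.log((d1 / T1) / (d0 / T0))    # exponential-model MLE
        se = np.sqrt(1.0 / d0 + 1.0 / d1)             # Wald standard error
        p = 2.0 * norm.sf(abs(log_hr_hat) / se)
        rejections += p < alpha
    return rejections / runs

print("estimated power:", simulate_power())
```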
4.
On blocking rules for the bootstrap with dependent data
We address the issue of optimal block choice in applications of the block bootstrap to dependent data. It is shown that optimal block size depends significantly on context, being equal to n^{1/3}, n^{1/4} and n^{1/5} in the cases of variance or bias estimation, estimation of a one-sided distribution function, and estimation of a two-sided distribution function, respectively. A clear intuitive explanation of this phenomenon is given, together with outlines of theoretical arguments in specific cases. It is shown that these orders of magnitude of block sizes can be used to produce a simple, practical rule for selecting block size empirically. That technique is explored numerically.
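A minimal moving-block bootstrap for variance estimation using the n^{1/3} block-size order given above; the AR(1) series and the proportionality constant (taken as 1) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical dependent series: AR(1) with coefficient 0.6.
n = 1000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + eps[t]

block = max(1, round(n ** (1 / 3)))   # O(n^{1/3}) rule for variance estimation
starts = np.arange(n - block + 1)     # all overlapping (moving) block start points
n_blocks = int(np.ceil(n / block))    # blocks needed to rebuild a length-n series

boot_means = []
for _ in range(2000):
    chosen = rng.choice(starts, size=n_blocks, replace=True)
    resampled = np.concatenate([x[s:s + block] for s in chosen])[:n]
    boot_means.append(resampled.mean())

print("block length:", block)
print("bootstrap estimate of var(sample mean):", np.var(boot_means, ddof=1))
```

For a one- or two-sided distribution function the same code applies with the block length changed to the n^{1/4} or n^{1/5} order, per the abstract.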
5.
6.
Telemetry data have been widely used to quantify wildlife-habitat relationships despite the fact that these data are inherently imprecise. All telemetry data have positional error, and failure to account for that error can lead to incorrect predictions of wildlife resource use. Several techniques have been used to account for positional error in wildlife studies. These techniques have been described in the literature, but their ability to accurately characterize wildlife resource use has never been tested. We evaluated the performance of techniques commonly used for incorporating telemetry error into studies of wildlife resource use. Our evaluation was based on imprecise telemetry data (mean telemetry error = 174 m, SD = 130 m) typical of field-based studies. We tested 5 techniques in 10 virtual environments and in one real-world environment, for both categorical (i.e., habitat type) and continuous (i.e., distance or elevation) rasters. Technique accuracy varied with patch size for the categorical rasters, with higher accuracy as patch size increased. At the smallest patch size (1 ha), the technique that ignores error performed best on categorical data (0.31 and 0.30 accuracy for virtual and real data, respectively); as patch size increased, however, the bivariate-weighted technique performed better (0.56 accuracy at patch sizes >31 ha) and achieved complete accuracy (1.00) at smaller patch sizes than any other technique (472 ha and 1,522 ha for virtual and real data, respectively). We quantified the accuracy of the continuous covariates using the mean absolute difference (MAD) in covariate value between true and estimated locations. Average MAD varied between 104 m (ignoring telemetry error) and 140 m (rescaling the covariate data) for our continuous covariate surfaces across virtual and real data sets. Techniques that rescale continuous covariate data or use a zonal mean of values within a telemetry-error polygon were significantly less accurate than other techniques. Although the technique that ignored telemetry error performed best on categorical rasters with smaller average patch sizes (≤31 ha) and on continuous rasters in our study, accuracy was so low that the utility of point-based approaches for quantifying resource use is questionable when telemetry data are imprecise, particularly for small-patch habitat relationships.
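A toy illustration of the bivariate-weighted idea evaluated above: raster cells around the estimated fix are weighted by a circular bivariate-normal kernel whose spread matches the positional error, and the habitat class with the largest total weight wins. The raster, cell size, and kernel truncation are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical categorical habitat raster: 30 m cells, classes 0..3.
CELL_M = 30.0
raster = rng.integers(0, 4, size=(200, 200))

def bivariate_weighted_class(raster, row, col, error_sd_m, cell_m=CELL_M, k=3.0):
    """Most probable habitat class under a circular bivariate-normal
    positional-error model centered on the estimated location."""
    r_cells = int(np.ceil(k * error_sd_m / cell_m))   # search out to k SDs
    rows = np.arange(max(0, row - r_cells), min(raster.shape[0], row + r_cells + 1))
    cols = np.arange(max(0, col - r_cells), min(raster.shape[1], col + r_cells + 1))
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    d2 = ((rr - row) ** 2 + (cc - col) ** 2) * cell_m ** 2   # squared distance, m^2
    w = np.exp(-d2 / (2.0 * error_sd_m ** 2))                # bivariate-normal kernel
    classes = raster[rr, cc]
    scores = {c: w[classes == c].sum() for c in np.unique(classes)}
    return max(scores, key=scores.get)

# Estimated fix at cell (100, 100) with 174 m positional error, as in the abstract.
print("weighted class:", bivariate_weighted_class(raster, 100, 100, error_sd_m=174.0))
```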
7.
8.
The uncertainties in the refined parameters for a 1.5-Å X-ray structure of carbon-monoxy (Fe(II)) myoglobin are estimated by combining energy minimization with least-squares refinement against the X-ray data. The energy minimizations, done without reference to the X-ray data, provide perturbed structures which are used to restart conventional X-ray refinement. The resulting refined structures have the same, or better, R-factor and stereochemical parameters as the original X-ray structure, but deviate from it by 0.13 Å rms for the backbone atoms and 0.31 Å rms for the sidechain atoms. Atoms interacting with a disordered sidechain, Arg 45 CD3, are observed to have larger positional uncertainties. The uncertainty in the B-factors, within the isotropic harmonic motion approximation, is estimated to be 15%. The resulting X-ray structures are more consistent with the energy parameters used in simulations.
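The rms deviations quoted above are ordinary coordinate RMSDs between refined models; a minimal sketch with placeholder coordinates (the structures are assumed to be already superposed, so no alignment step is included):

```python
import numpy as np

def rmsd(coords_a: np.ndarray, coords_b: np.ndarray) -> float:
    """Root-mean-square deviation between two (N, 3) coordinate sets
    expressed in the same frame (no superposition performed here)."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Placeholder data standing in for backbone atoms of two refined structures.
rng = np.random.default_rng(0)
original = rng.normal(size=(1000, 3))
perturbed = original + rng.normal(scale=0.13 / np.sqrt(3), size=(1000, 3))

print(f"backbone rmsd: {rmsd(original, perturbed):.3f} (arbitrary units)")
```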
9.
Heat induces a number of premutational lesions in DNA and RNA (for example, the deamination of cytosine to uracil). These kinds of errors occur in resting as well as replicating polynucleotides. However, an increase in temperature also raises the probability of copying errors in nucleic acids, because of increased thermal noise in the replicative machinery. In most modern genetic systems, the majority of heat-induced lesions are efficiently repaired; it follows that the importance of heat-induced error increases as the effectiveness of repair declines. We show in this paper that the error rate of enzymatic polynucleotide copying is expected to increase monotonically with temperature. We also explore the effects of temperature variations on the early evolution of biological information-transmission mechanisms.
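One way to see why copying error should rise monotonically with temperature is a two-state Boltzmann discrimination model, in which the mismatched base carries a free-energy penalty ΔΔG relative to the correct one; this model and the ΔΔG value are illustrative assumptions, not the paper's derivation:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def misincorporation_prob(temp_k: float, ddg_j_mol: float = 12_000.0) -> float:
    """Boltzmann-weighted probability of choosing the mismatched base
    when it is penalized by ddg_j_mol relative to the correct base."""
    w_wrong = np.exp(-ddg_j_mol / (R * temp_k))
    return w_wrong / (1.0 + w_wrong)

# The penalty is fixed, so exp(-ddG/RT) -- and hence the error
# probability -- increases monotonically with T.
for t in (280.0, 300.0, 320.0, 340.0, 360.0):
    print(f"T = {t:5.1f} K  ->  error probability = {misincorporation_prob(t):.2e}")
```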
10.
The effects of truncating long-range forces on protein dynamics
This paper considers the effects of truncating long-range forces on protein dynamics. We investigate six truncation methods as a function of the cutoff criterion applied to the long-range potentials: (1) a shifted potential; (2) a switching function; (3) simple atom-atom truncation based on distance; (4) simple atom-atom truncation based on a list that is updated periodically (every 25 steps); (5) simple group-group truncation based on distance; and (6) simple group-group truncation based on a list that is updated periodically (every 25 steps). Based on 70 calculations of carboxymyoglobin, we show that the method and distance of long-range cutoff have a dramatic effect on overall protein behavior. Evaluation of the different methods is based on comparing each simulation's rms fluctuations about its average coordinates with its rms deviations both from the average coordinates of a no-cutoff simulation and from the X-ray structure of the protein. Simulations in which long-range forces are truncated by a shifted potential show large rms deviations for cutoff criteria of less than 14 Å, and reasonable deviations and fluctuations at this cutoff distance or larger. Simulations using a switching function are investigated by varying the range over which electrostatic interactions are switched off. Results using a short switching function, which switches off the potential over a narrow range of distances, are poor for all cutoff distances; a switching function acting over a 5-9 Å range gives reasonable results with a distance-dependent dielectric, but not with a constant dielectric. Both the atom-atom and the group-group truncation methods based on distance show large rms deviations and fluctuations for short cutoff distances, while for cutoff distances of 11 Å or greater, reasonable results are achieved. Comparison of these two distance-based truncation methods, however, shows surprisingly larger rms deviations for the group-group truncation, contrary to simulation studies of aqueous ionic solutions. The atom-atom and group-group list-based simulations generally appear to be less stable than the distance-based simulations, and require more frequent velocity scaling or stronger coupling to a heat bath.
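A short sketch of two of the truncation schemes compared above, applied to a bare Coulomb term: an energy-shifted potential with a 14 Å cutoff, and a smooth switch acting over the 5-9 Å range mentioned in the abstract. The specific functional forms (energy shift; smoothstep switch) are common textbook choices, assumed here rather than taken from the paper:

```python
import numpy as np

def coulomb(r, qq=1.0):
    """Unmodified Coulomb energy q_i*q_j/r (arbitrary units)."""
    return qq / r

def shifted_potential(r, r_cut, qq=1.0):
    """Energy-shifted truncation: V(r) - V(r_cut) inside the cutoff, 0 outside,
    so the energy goes continuously to zero at r_cut."""
    v = coulomb(r, qq) - coulomb(r_cut, qq)
    return np.where(r < r_cut, v, 0.0)

def switched_potential(r, r_on, r_off, qq=1.0):
    """Switching-function truncation: V(r) multiplied by a smooth switch S
    that equals 1 at r_on and falls to 0 at r_off."""
    x = np.clip((r_off ** 2 - r ** 2) / (r_off ** 2 - r_on ** 2), 0.0, 1.0)
    s = x * x * (3.0 - 2.0 * x)   # smoothstep: 1 below r_on, 0 above r_off
    return coulomb(r, qq) * s

r = np.linspace(2.0, 16.0, 8)
print("r (Å)   :", np.round(r, 2))
print("shifted :", np.round(shifted_potential(r, r_cut=14.0), 4))
print("switched:", np.round(switched_potential(r, r_on=5.0, r_off=9.0), 4))
```

The narrow-switch problem reported above corresponds to shrinking the r_on-to-r_off window: the switch then introduces steep artificial forces near the cutoff.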