Paid full text: 2557 articles
Free: 148 articles
Domestic free: 72 articles
Total: 2777 articles
A total of 2777 results were found (search time: 15 ms).
11.
Web surveys have replaced face-to-face and computer-assisted telephone interviewing (CATI) as the main mode of data collection in most countries. This trend was reinforced by COVID-19 pandemic-related restrictions. However, this mode still faces significant limitations in obtaining probability-based samples of the general population. For this reason, most web surveys rely on nonprobability survey designs. Whereas probability-based designs remain the gold standard in survey sampling, nonprobability web surveys may still prove useful in some situations. For instance, when small subpopulations are the group under study and probability sampling is unlikely to meet sample size requirements, complementing a small probability sample with a larger nonprobability one may improve the efficiency of the estimates. Nonprobability samples may also be designed as a means of compensating for known biases in probability-based web survey samples by purposely targeting respondent profiles that tend to be underrepresented in these surveys. This is the case in the Survey on the Impact of the COVID-19 Pandemic in Spain (ESPACOV) that motivates this paper. We propose a methodology for combining probability and nonprobability web-based survey samples with the help of machine-learning techniques, and we assess the efficiency of the resulting estimates by comparing them with other strategies that have been used before. Our simulation study and the application of the proposed estimation method to the second wave of the ESPACOV survey allow us to conclude that this is the best option for reducing the biases observed in our data.
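The idea of re-weighting a nonprobability sample against a probability reference can be sketched with a toy post-stratification example. This is a deliberately simplified stand-in for the machine-learning propensity techniques the abstract refers to; the cell variable, data, and function names are invented for illustration:

```python
from collections import Counter

def poststratify_weights(prob_cells, nonprob_cells):
    """Weight non-probability respondents so that their cell (e.g. age-group)
    distribution matches the one observed in the probability sample."""
    target = Counter(c for c, _ in prob_cells)
    observed = Counter(c for c, _ in nonprob_cells)
    n_t, n_o = len(prob_cells), len(nonprob_cells)
    return [(target[c] / n_t) / (observed[c] / n_o) for c, _ in nonprob_cells]

# Toy data: (cell, outcome); young respondents are over-represented online.
prob = [("young", 1), ("old", 0), ("old", 1), ("old", 0)]
nonprob = [("young", 1), ("young", 1), ("young", 0), ("old", 0)]
w = poststratify_weights(prob, nonprob)
estimate = sum(wi * y for wi, (_, y) in zip(w, nonprob)) / sum(w)  # weighted mean outcome
```

Here each over-represented "young" respondent is down-weighted (weight 1/3) and the single "old" respondent is up-weighted (weight 3), so the weighted estimate reflects the probability sample's composition rather than the self-selected one.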
12.
13.
The absolute volume of biological objects is often estimated stereologically from an exhaustive set of systematic sections. The usual volume estimator V̂ is the sum of the section contents times the distance between sections. For systematic sectioning with a random start, it has recently been shown that V̂ is unbiased when m, the ratio between projected object length and section distance, is an integer (Cruz-Orive 1985). As this quantity is not an integer in the real world, we have explored the properties of V̂ in the general and realistic situation m ∉ ℕ. The unbiasedness of V̂ under appropriate sampling conditions is demonstrated for an arbitrary compact set in 3 dimensions by a rigorous proof. Exploration of further properties of V̂ for the general triaxial ellipsoid leads to a new class of non-elementary real functions with common formal structure, which we denote as np-functions. The relative mean square error (CE²) of V̂ in ellipsoids is an oscillating differentiable np-function, which reduces to the known result CE² = 1/(5m⁴) for integer m. As a biological example, the absolute volumes of 10 left cardiac ventricles and their internal cavities were estimated from systematic sections. Monte Carlo simulation of replicated systematic sectioning is shown to be improved by using the non-integer m instead of an integer approximation of m. In agreement with the geometric model of ellipsoids with some added shape irregularities, the mean empirical CE was proportional to m^(−1.36) and m^(−1.73) in the cardiac ventricle and its cavity. The considerable variance reduction by systematic sectioning is shown to be a geometric realization of the principle of antithetic variates.
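The estimator described above (section spacing times the sum of section areas, with a uniform random start) is easy to check numerically. The sketch below does so for a sphere, where section areas are known analytically; the sphere, the spacing d, and the replication count are illustrative choices, not taken from the paper:

```python
import math
import random

def systematic_volume(area, span, d, u):
    """V-hat = d * (sum of section areas), sections at u, u+d, u+2d, ... < span."""
    total, z = 0.0, u
    while z < span:
        total += area(z)
        z += d
    return d * total

r = 1.0
# Section area of a sphere of radius r resting on the plane z = 0.
area = lambda z: math.pi * max(r * r - (z - r) ** 2, 0.0)
d = 0.3  # m = 2r/d is deliberately not an integer here

random.seed(0)
reps = 5000
mean_est = sum(systematic_volume(area, 2 * r, d, random.uniform(0.0, d))
               for _ in range(reps)) / reps
true_v = 4.0 / 3.0 * math.pi * r ** 3
```

Averaged over many uniform random starts, the estimator recovers 4πr³/3 to within a fraction of a percent even though m is not an integer, which is the unbiasedness property the abstract discusses.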
14.
Type 1 insulin-like growth factor receptor (IGF1R) is a membrane-spanning glycoprotein of the insulin receptor family that has been implicated in a variety of cancers. The key questions related to molecular mechanisms governing ligand recognition by IGF1R remain unanswered, partly due to the lack of testable structural models of apo or ligand-bound receptor complexes. Using a homology model of the IGF1R ectodomain IGF1RΔβ, we present the first experimentally consistent all-atom structural models of IGF1/IGF1RΔβ and IGF2/IGF1RΔβ complexes. Our explicit-solvent molecular dynamics (MD) simulation of apo-IGF1RΔβ shows that it displays asymmetric flexibility mechanisms that result in one of two binding pockets accessible to growth factors IGF1 and IGF2, as demonstrated via an MD-assisted Monte Carlo docking procedure. Our MD-generated ensemble of structures of apo and IGF1-bound IGF1RΔβ agrees reasonably well with published small-angle X-ray scattering data. We observe simultaneous contacts of each growth factor with sites 1 and 2 of IGF1R, suggesting cross-linking of receptor subunits. Our models provide direct evidence in favor of suggested electrostatic complementarity between the C-domain (IGF1) and the cysteine-rich domain (IGF1R). Our IGF1/IGF1RΔβ model provides structural bases for the observation that a single IGF1 molecule binds to IGF1RΔβ at low concentrations in small-angle X-ray scattering studies. We also suggest new possible structural bases for differences in the affinities of insulin, IGF1, and IGF2 for their noncognate receptors.
15.
The purpose of this note is to illustrate the feasibility of simulating kinetic systems, such as those commonly encountered in photosynthesis research, using the Monte Carlo (MC) method. In this approach, chemical events are considered at the molecular level, where they occur randomly, and the macroscopic kinetic evolution results from averaging a large number of such events. Their repeated simulation is easily accomplished using digital computing. It is shown that the MC approach is well suited to the capabilities and resources of modern microcomputers. A software package is briefly described and discussed, allowing simple programming of any kinetic model system and its resolution. The execution is reasonably fast and accurate; it is not subject to the instabilities found with the conventional analytical approach.

Abbreviations: MC, Monte Carlo; RN, random number; PSU, photosynthetic unit

Dedicated to Prof. L.N.M. Duysens on the occasion of his retirement.
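As a minimal illustration of molecule-level MC kinetics (not the software package described in the note), a first-order reaction A → B can be simulated with exponentially distributed waiting times and compared against the analytic solution n(t) = n₀·e^(−kt). The rate constant, molecule count, and averaging depth are arbitrary choices:

```python
import math
import random

def mc_decay(n0, k, t_end, rng):
    """Event-by-event MC of A -> B: each event converts one A molecule;
    the waiting time to the next event is exponential with rate k * n."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(k * n)
        if t > t_end:
            break
        n -= 1
    return n

rng = random.Random(1)
n0, k, t = 10000, 0.5, 1.0
survivors = sum(mc_decay(n0, k, t, rng) for _ in range(20)) / 20
analytic = n0 * math.exp(-k * t)  # macroscopic rate-equation solution
```

Averaging a handful of stochastic runs already tracks the macroscopic exponential closely, which is the note's central point: the random molecular events reproduce the deterministic kinetics without integrating differential equations.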
16.
Summary functions such as the empty space function F and the nearest neighbour distance distribution function G are often used as test statistics for point patterns. Van Lieshout and Baddeley recently proposed an alternative statistic, the J-function, which is defined as J = (1 − G)/(1 − F). Theoretical advantages of the J-function over the F- and G-statistics are that it measures the type, strength and range of interaction, and that it can be evaluated explicitly for a larger class of models. In this simulation study we investigate empirically how the power of tests based on J compares to that of tests based on F and G.
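A naive empirical version of these statistics can be written directly from the definitions (the estimators used in the literature additionally apply edge corrections, omitted here). For a completely random pattern, J should stay near 1:

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nn_dist(p, pts):
    """Distance from point p to its nearest neighbour in pts."""
    return min(dist(p, q) for q in pts if q is not p)

def J_hat(pts, r, grid=20):
    """Naive J(r) = (1 - G(r)) / (1 - F(r)) on the unit square, no edge correction."""
    # F: empty-space function, estimated from a regular grid of test points.
    test = [((i + 0.5) / grid, (j + 0.5) / grid)
            for i in range(grid) for j in range(grid)]
    F = sum(min(dist(t, q) for q in pts) <= r for t in test) / len(test)
    # G: nearest-neighbour distance distribution of the pattern itself.
    G = sum(nn_dist(p, pts) <= r for p in pts) / len(pts)
    return (1 - G) / (1 - F)

random.seed(4)
pts = [(random.random(), random.random()) for _ in range(200)]
j = J_hat(pts, 0.02)
```

For a clustered pattern G rises faster than F (J < 1), and for a regular pattern the opposite holds (J > 1), which is the interpretive advantage the abstract mentions.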
17.
In the decade since their invention, spotted microarrays have been undergoing technical advances that have increased the utility, scope and precision of their ability to measure gene expression. At the same time, more researchers are taking advantage of the fundamentally quantitative nature of these tools with refined experimental designs and sophisticated statistical analyses. These new approaches utilise the power of microarrays to estimate differences in gene expression levels, rather than just categorising genes as up- or down-regulated, and allow the comparison of expression data across multiple samples. In this review, some of the technical aspects of spotted microarrays that can affect statistical inference are highlighted, and a discussion is provided of how several methods for estimating gene expression level across multiple samples deal with these challenges. The focus is on a Bayesian analysis method, BAGEL, which is easy to implement and produces easily interpreted results.
18.
Transgenic mice are widely used in biomedical research to study gene expression, developmental biology, and gene therapy models. Bacterial artificial chromosome (BAC) transgenes direct gene expression at physiological levels with the same developmental timing and expression patterns as endogenous genes in transgenic animal models. We generated 707 transgenic founders from 86 BAC transgenes purified by three different methods. Transgenesis efficiency was the same for all BAC DNA purification methods. Polyamine microinjection buffer was essential for successful integration of intact BAC transgenes. There was no correlation between BAC size and transgenic rate, birth rate, or transgenic efficiency. A narrow DNA concentration range generated the best transgenic efficiency. High DNA concentrations reduced birth rates while very low concentrations resulted in higher birth rates and lower transgenic efficiency. Founders with complete BAC integrations were observed in all 47 BACs for which multiple markers were tested. Additional founders with BAC fragment integrations were observed for 65% of these BACs. Expression data was available for 79 BAC transgenes and expression was observed in transgenic founders from 63 BACs (80%). Consistent and reproducible success in BAC transgenesis required the combination of careful DNA purification, the use of polyamine buffer, and sensitive genotyping assays.
19.
Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation for the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) regarding two optimization criteria, the selection gain ΔG_k and the probability P_k of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of production costs of DHs on the optimum allocation. For different budgets, number of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined by using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines in a small number of test locations in the first year and (2) a small number of the selected superior lines in a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P_k of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P_k > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P_k(5%) was similar to that for ΔG_k, the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on values of the optimization criteria. C. Friedrich H. Longin and H. Friedrich Utz contributed equally to this work.
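The trade-off the abstract describes, many lines in few locations versus fewer lines each tested more precisely, can be sketched with a one-stage Monte Carlo simulation. The variance components, budget, and selected fraction below are invented for illustration; the paper's two-stage designs and DH cost model are not reproduced:

```python
import random
import statistics

def selection_gain(n_lines, n_locs, k, sigma_g=1.0, sigma_e=2.0, reps=200, seed=0):
    """One-stage selection: test n_lines in n_locs locations, keep the best k.
    Gain = mean true genotypic value of the selected lines (in sigma_g units)."""
    rng = random.Random(seed)
    gains = []
    for _ in range(reps):
        g = [rng.gauss(0.0, sigma_g) for _ in range(n_lines)]
        # Phenotypic mean across locations: error sd shrinks as sigma_e / sqrt(n_locs).
        p = [gi + rng.gauss(0.0, sigma_e / n_locs ** 0.5) for gi in g]
        best = sorted(range(n_lines), key=lambda i: p[i], reverse=True)[:k]
        gains.append(statistics.mean(g[i] for i in best))
    return statistics.mean(gains)

budget = 1000  # field plot equivalents; n_lines * n_locs must not exceed it
g_many = selection_gain(500, 2, k=5)   # many lines, imprecise testing
g_few = selection_gain(100, 10, k=5)   # fewer lines, precise testing
```

Comparing the two allocations at a fixed budget shows the tension directly: more lines raise the selection intensity, while more locations raise the correlation between phenotype and genotype, and the optimum balances the two.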
20.
The Gibbs ensemble Monte Carlo simulation has been used to calculate vapour-liquid equilibria of a Lennard-Jones (LJ) binary mixture. The mixture studied is the LB-2-1 model, which has been used in our previous calculations on the PVT relation and density-dependent local composition. The P-x-y relation has been established at two different temperatures and used to determine the vapour-liquid coexistence region in PVTx space.
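The pairwise interactions underlying such a simulation are the LJ 12-6 potential, with combining rules for the unlike pair. The sketch below uses the common Lorentz-Berthelot rules as an assumption, and the parameter values are placeholders, not those of the LB-2-1 model:

```python
def lj(r, eps, sig):
    """Lennard-Jones 12-6 potential: u(r) = 4*eps*((sig/r)**12 - (sig/r)**6)."""
    s6 = (sig / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def lorentz_berthelot(eps1, sig1, eps2, sig2):
    """Common combining rules for the unlike (1-2) pair of a binary mixture."""
    return (eps1 * eps2) ** 0.5, 0.5 * (sig1 + sig2)

eps12, sig12 = lorentz_berthelot(1.0, 1.0, 0.5, 1.2)
r_min = 2 ** (1.0 / 6.0) * sig12   # the minimum of u(r) sits at 2^(1/6)*sigma
u_min = lj(r_min, eps12, sig12)    # and equals -eps there
```

In a Gibbs ensemble run, these pair energies feed the Metropolis acceptance tests for particle displacements, volume exchanges, and particle transfers between the two coexisting boxes.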