11.
Sensitive biological measures of river ecosystem quality are needed to assess, maintain or restore the ecological condition of water bodies. Since our understanding of these complex systems is imperfect, decision-making requires recognizing uncertainty. In this study, a new predictive multi-metric index based on fish functional traits (IPR+) was developed to assess French rivers. Information on fish assemblage structure, local environment and human-induced disturbances at 1654 French river sites was compiled. A Bayesian framework was used to predict theoretical metric values in the absence of human pressure and to estimate the uncertainty associated with these predictions. The uncertainty associated with the index score quantifies the confidence in the evaluation of a site's ecological condition.

Among the 228 potential metrics tested, only 11 were retained for the index computation. The final index is independent of natural variability and sensitive to human-induced disturbances. In particular, it responds both to the accumulation of different types of degradation and to specific degradations such as hydrological perturbations. Predictive uncertainty is globally lower for IPR+ than for the underlying metrics.

This new methodology seems appropriate for developing bio-indication tools that account for uncertainty in the definition of reference conditions, and it could be extended to other biological groups and areas. Our results support the use of multi-metric indices to assess rivers and strengthen the idea that examining uncertainty can greatly improve the assessment power of bio-indicators.
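The abstract does not spell out how the 11 metrics and their Bayesian reference predictions are combined into the index score. Purely as an illustration of the general approach — standardising each observed metric against its predicted reference value and predictive uncertainty, then aggregating and carrying that uncertainty forward — a minimal sketch might look like the following. The aggregation rule, function names and toy data are assumptions, not the published IPR+ method.

```python
import numpy as np

def metric_scores(observed, predicted_mean, predicted_sd):
    """Standardise each observed metric against its reference prediction.

    All arguments are arrays of shape (n_metrics,). The result is the
    deviation in units of predictive standard deviation, so values near 0
    mean 'as expected in the absence of human pressure'.
    """
    return (observed - predicted_mean) / predicted_sd

def index_score(posterior_pred_draws, observed):
    """Aggregate the metrics into one index and propagate uncertainty.

    posterior_pred_draws: array (n_draws, n_metrics) of reference-condition
    predictions sampled from the Bayesian model (hypothetical input).
    Returns the index mean and a 95% interval over the posterior draws.
    """
    sd = posterior_pred_draws.std(axis=0)
    draws = [np.mean(np.abs(metric_scores(observed, d, sd)))
             for d in posterior_pred_draws]
    return np.mean(draws), np.percentile(draws, [2.5, 97.5])

# toy example: 11 metrics, 1000 posterior draws of their reference values
rng = np.random.default_rng(0)
pred = rng.normal(loc=1.0, scale=0.2, size=(1000, 11))
obs = rng.normal(loc=0.7, scale=0.1, size=11)
mean, ci = index_score(pred, obs)
print(f"index = {mean:.2f}, 95% interval = {ci.round(2)}")
```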
12.
Nitrogen stable isotope analysis of macroalgae has become a popular method for monitoring nitrogen pollution in aquatic ecosystems. Based on changes in their δ15N, macroalgae have been successfully used as biological traps to intercept nitrogen inputs. Because different nitrogen sources differ in their isotopic signature, this technique provides useful information on the origin of pollutants and their extent in the water body. However, isotopic fractionation potentially resulting from microbial nitrogen processing, and indirect isotopic variations due to the effects of physicochemical conditions on algal nutrient uptake and metabolism, may alter anthropogenic N isotopic values during transport and assimilation. This in turn can affect the isotopic signature observed in the algal tissue, inducing isotopic variations unrelated to the origin of the assimilated nitrogen and representing a “background noise” in isotope-based water pollution studies.

In this study, we focused on three neighbouring coastal lakes (Caprolace, Fogliano and Sabaudia) located south of Rome (Italy). The lakes differ in anthropogenic pressure (i.e. urbanization, cultivated crops, livestock grazing) and in potential “background noise” levels (i.e. nutrient concentration, pH, microbial concentration). Our aim was to assess nitrogen isotopic variations in fragments of Ulva lactuca specimens after 48 h of submersion, in order to identify and locate the origins of the nitrogen pollutants affecting each lake. δ15N values were obtained for replicated U. lactuca specimens, previously collected from a benchmark unpolluted site and spatially distributed to cover the entire surface of each lake. To reduce the environmental background noise in the isotopic observations, a Bayesian hierarchical model relating isotopic variation to environmental covariates and random spatial effects was used to describe and understand the distribution of isotopic signals in each lake.

Our procedure (i) removed background noise and confounding effects from the observed isotopic signals; (ii) detected “hidden” pollution sources that would not be identified without accounting for the confounding effect of environmental background noise; and (iii) produced maps of the three lakes giving a clear representation of the isotopic signal variation even where background noise was high. The maps were useful for locating nitrogen pollution sources, identifying the origin of the dissolved nitrogen and quantifying the extent of the pollutants, showing localized organic pollution impacting Sabaudia and Fogliano, but not Caprolace. This method provided a clear characterization of both intra- and inter-lake anthropogenic pressure gradients, representing a powerful approach to ecological indication and nitrogen pollution management in complex systems such as transitional water bodies.
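The study's actual model is a Bayesian hierarchical model with spatial random effects; as a much simpler illustration of the underlying idea — removing the covariate-driven “background noise” from the δ15N signal and inspecting what is left — one could regress observed values on environmental covariates and examine the residuals. The sketch below is a simplified, non-Bayesian, non-spatial analogue with simulated data; all variable names and effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical data set: one row per Ulva fragment deployed in a lake
n = 120
ph = rng.normal(8.1, 0.3, n)                 # environmental covariates
din = rng.lognormal(0.0, 0.5, n)             # dissolved inorganic nitrogen
near_source = rng.random(n) < 0.3            # fragments inside a pollution plume
d15n = (6.0 + 0.8 * ph + 0.5 * np.log(din)
        + 4.0 * near_source + rng.normal(0, 0.4, n))

# "background noise" model: ordinary least squares on the covariates only
X = np.column_stack([np.ones(n), ph, np.log(din)])
beta, *_ = np.linalg.lstsq(X, d15n, rcond=None)

# residuals = isotopic variation not explained by the environment;
# consistently elevated residuals point to anthropogenic nitrogen inputs
residuals = d15n - X @ beta
print("mean residual, plume fragments:", round(residuals[near_source].mean(), 2))
print("mean residual, other fragments:", round(residuals[~near_source].mean(), 2))
```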
13.
R. P. Novitzki, Plant Ecology, 1995, 118(1-2): 171-184
The U.S. Environmental Protection Agency (EPA) initiated the Environmental Monitoring and Assessment Program (EMAP) in 1988. The wetland component (EMAP-Wetlands) is designed to provide quantitative assessments of the current status and long-term trends in the ecological condition of wetland resources. EMAP-Wetlands will develop a wetland monitoring network and will identify and evaluate indicators that describe and quantify wetland condition. The EMAP-Wetlands network will represent a probability sample of the total wetland resource. The EMAP sample is based on a triangular grid of approximately 12,600 sample points in the conterminous U.S. The triangular grid adequately samples wetland resources that are common and uniformly distributed in a region, such as the prairie pothole wetlands of the Midwest. However, the design is flexible and allows the base grid density to be increased to adequately sample wetland resources, such as the coastal wetlands of the Gulf of Mexico, that are distributed linearly along the coast. The Gulf sample network required a 49-fold increase in base grid density. EMAP-Wetlands aggregates the U.S. Fish and Wildlife Service's (FWS) 56 National Wetland Inventory (NWI) categories (Cowardin et al. 1979) into 12 functionally similar groups (Leibowitz et al. 1991). Both the EMAP sample design and the aggregated wetland classes are suitable for global inventory and assessment of wetlands.

The research described in this report has been funded by the U.S. Environmental Protection Agency. This document has been prepared at the EPA Environmental Research Laboratory in Corvallis, OR, through contract No. 68-C8-0006 to Man Tech Environmental Technology, Inc. This paper has been subjected to the Agency's peer and administrative review and approved for publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use.
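EMAP's real sampling grid and its 49-fold densification follow the program's published design; the sketch below only illustrates the geometric idea of a triangular (offset-row) point grid whose spacing can be tightened over a target strip such as a coastline. The coordinates, spacings and the 7-fold spacing reduction (giving roughly 49 times the point density per unit area) are illustrative assumptions.

```python
import numpy as np

def triangular_grid(xmin, xmax, ymin, ymax, spacing):
    """Points on a triangular lattice: alternate rows offset by half a
    spacing, with row separation spacing * sqrt(3)/2."""
    dy = spacing * np.sqrt(3) / 2
    pts = []
    for i, y in enumerate(np.arange(ymin, ymax, dy)):
        offset = 0.0 if i % 2 == 0 else spacing / 2
        for x in np.arange(xmin + offset, xmax, spacing):
            pts.append((x, y))
    return np.array(pts)

# base grid over a square study region (arbitrary units)
base = triangular_grid(0, 100, 0, 100, spacing=10.0)

# intensified grid over a narrow coastal strip: 7x finer spacing,
# i.e. about 49x more points per unit area
coastal = triangular_grid(0, 100, 0, 10, spacing=10.0 / 7)

print(len(base), "base points;", len(coastal), "points in the densified strip")
```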
14.
In a typical comparative clinical trial the randomization scheme is fixed at the beginning of the study and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as ‘outcome-adaptive randomization.’ In this type of trial, the likelihood of a patient being enrolled in a particular arm of the study increases or decreases as preliminary information becomes available suggesting that a treatment may be superior or inferior. While the design merits of outcome-adaptive trials have been debated, little attention has been paid to the significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design, which could lead to perceptions of injustice with negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome-adaptive trials.
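The article's focus is ethical rather than computational, but to make concrete what "the likelihood of enrolment shifts as outcome data accrue" means, here is a minimal sketch of one common outcome-adaptive scheme: Bayesian allocation with Beta posteriors, akin to Thompson sampling. It is illustrative only and is not a design described or endorsed in the article; the response rates and trial size are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
true_response = {"A": 0.30, "B": 0.50}      # unknown in a real trial
successes = {"A": 0, "B": 0}
failures = {"A": 0, "B": 0}

assignments = []
for patient in range(200):
    # sample a response rate for each arm from its Beta(1+s, 1+f) posterior
    draw = {arm: rng.beta(1 + successes[arm], 1 + failures[arm])
            for arm in ("A", "B")}
    arm = max(draw, key=draw.get)           # allocate to the arm that looks better
    assignments.append(arm)

    # observe the (simulated) outcome and update that arm's posterior
    if rng.random() < true_response[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1

print("patients per arm:", {a: assignments.count(a) for a in ("A", "B")})
# as evidence favouring B accumulates, allocation drifts away from 1:1
```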
15.
Conidiobolus thromboides is an entomophthoralean fungus with potential as a biological control agent of aphids. However, its application in biological control is limited by its formulation requirements. The objective of this study was to develop and optimise a novel air-extrusion method to embed C. thromboides hyphae at high density in alginate pellets. An orthogonal experimental design was used to investigate selected combinations of parameters known to affect hyphal density within pellets. The diameter of the pellets produced, and the calculated density of hyphae within them, ranged from 0.18 ± 0.09 to 3.17 ± 0.06 mm and from 0.02 to 350.56 mg/mm³, respectively. These data were used to predict the parameter combination delivering the greatest density of C. thromboides hyphae per pellet: 1% sodium alginate, a 1:2 ratio of hyphae to sodium alginate, an orifice diameter of 0.232 mm and an air pressure of 0.05 MPa. Pellets made under these predicted optimal conditions produced a mean total of 4.3 ± 0.6 × 10⁵ conidia per pellet at 100% relative humidity, significantly more than the mean total number of conidia produced from infected aphid cadavers of comparable size (9.35 ± 0.85 × 10⁴; p < 0.001). In conclusion, air-extrusion embedding appears to be a promising method for formulating in vitro-produced hyphae of C. thromboides for use in biological control.
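The published optimisation used an orthogonal experimental design; a minimal sketch of how the "best" level of each factor is typically read off such a design — the mean response per factor level, Taguchi-style — is shown below. The L9(3⁴) array is standard, but the level labels and response values are invented and do not reproduce the study's data or its reported optimum.

```python
import numpy as np

# L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each (coded 0, 1, 2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
factors = ["alginate %", "hyphae:alginate ratio", "orifice diameter", "air pressure"]

# hypothetical measured response for each run (e.g. hyphal density, mg/mm^3)
response = np.array([12.0, 35.1, 8.4, 51.2, 20.3, 14.8, 60.5, 18.9, 25.6])

# mean response at each level of each factor; the predicted optimum takes
# the best level of every factor, even if that combination was never run
for j, name in enumerate(factors):
    level_means = [response[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(level_means))
    print(f"{name}: level means = {np.round(level_means, 1)}, best level = {best}")
```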
16.
Accounting for historical demographic features, such as the strength and timing of gene flow and divergence times between closely related lineages, is vital for many inferences in evolutionary biology. Approximate Bayesian computation (ABC) is one method commonly used to estimate demographic parameters. However, the DNA sequences used as input for this method, often microsatellites or RADseq loci, usually represent a small fraction of the genome. Whole genome sequencing (WGS) data, on the other hand, have been used less often with ABC, and questions remain about the potential benefit of, and how best to implement, this type of data; we used pseudo-observed data sets to explore such questions. Specifically, we addressed the potential improvements in parameter estimation accuracy associated with WGS data in multiple contexts; namely, we quantified the effects of (a) more data, (b) haplotype-based summary statistics, and (c) locus length. Compared with a hypothetical RADseq data set with 2.5 Mbp of data, using a 1 Gbp data set consisting of 100 Kbp sequences led to substantial gains in the accuracy of parameter estimates, mostly due to the haplotype statistics and the increased amount of data. We also quantified the effects of including (a) locus-specific recombination rates and (b) background selection information in ABC analyses. Importantly, assuming uniform recombination or ignoring background selection had a negative effect on accuracy in many cases. Software and results from this method validation study should be useful for future demographic history analyses.
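As a reminder of the core ABC machinery the study builds on, here is a minimal rejection-sampling sketch estimating a single demographic-style parameter from summary statistics. The toy generator (exponential "per-locus diversity" values) stands in for a coalescent simulator, and the prior, tolerance and summaries are all illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_summaries(theta, n_loci):
    """Toy stand-in for a population-genetic simulator: returns the mean
    and variance of per-locus diversity generated under parameter theta."""
    data = rng.exponential(scale=theta, size=n_loci)
    return np.array([data.mean(), data.var()])

# "observed" summaries, generated here with a true parameter of 2.0
observed = simulate_summaries(2.0, n_loci=500)

# ABC rejection: draw theta from the prior, keep draws whose simulated
# summaries fall within a (loose) relative tolerance of the observed ones
prior_draws = rng.uniform(0.1, 10.0, size=20_000)
accepted = []
for theta in prior_draws:
    sim = simulate_summaries(theta, n_loci=200)
    if np.linalg.norm((sim - observed) / observed) < 0.2:
        accepted.append(theta)

accepted = np.array(accepted)
print(f"accepted {len(accepted)} draws; posterior mean ≈ {accepted.mean():.2f}, "
      f"95% interval {np.percentile(accepted, [2.5, 97.5]).round(2)}")
```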
17.
A number of methods for constructing partially balanced incomplete block designs with nested rows and columns are developed, and new balanced incomplete block designs with nested rows and columns are obtained as a by-product.
18.
A computer algorithm, CLIX, is presented that searches a crystallographic database of small molecules for candidates with both the steric and the chemical likelihood of binding a protein of known three-dimensional structure. The algorithm is a significant advance over previous strategies, which consider solely steric or solely chemical requirements for binding. The algorithm is shown to be capable of predicting the correct binding geometry of sialic acid to a mutant influenza-virus hemagglutinin and of proposing a number of potential new ligands to this protein.
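The CLIX algorithm itself is defined in the original paper; purely to illustrate the idea of combining a steric filter with a chemical-complementarity score when screening small molecules against a binding site, a toy sketch might look like this. The distance cut-offs, scoring rule and all coordinates are invented and are not CLIX's actual criteria.

```python
import numpy as np

CLASH_DIST = 2.5     # Å below which a ligand atom clashes with the protein
CONTACT_DIST = 3.5   # Å within which a polar-polar pair counts as favourable

def screen(protein_xyz, protein_polar, candidates):
    """Rank candidate molecules placed in a binding site.

    protein_xyz: (N, 3) site atom coordinates; protein_polar: (N,) bool mask.
    candidates: list of (name, atom_xyz, atom_polar) tuples in the site frame.
    Keeps sterically allowed candidates and scores chemical complementarity.
    """
    hits = []
    for name, xyz, polar in candidates:
        d = np.linalg.norm(xyz[:, None, :] - protein_xyz[None, :, :], axis=-1)
        if (d < CLASH_DIST).any():                 # steric rejection
            continue
        contacts = (d < CONTACT_DIST) & polar[:, None] & protein_polar[None, :]
        hits.append((name, int(contacts.sum())))   # chemical score
    return sorted(hits, key=lambda h: -h[1])

# toy binding site and two invented candidate placements
site = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 3.0, 0.0]])
site_polar = np.array([True, False, True])
cands = [
    ("mol_A", np.array([[1.5, 1.5, 2.0], [4.5, 1.0, 2.5]]),
     np.array([True, False])),
    ("mol_B", np.array([[0.5, 0.5, 0.5]]), np.array([True])),  # clashes
]
print(screen(site, site_polar, cands))
```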
19.
20.
Until recently, the most common parametric approaches for studying the combined effects of several genetic polymorphisms located within a gene or a small genomic region have been logistic regressions at the genotype level and haplotype analyses at the haplotype level. An alternative modeling approach, based on the case/control principle, is to regard exposures (e.g., genetic data such as those derived from single nucleotide polymorphisms, SNPs) as random and disease status as fixed, and to use a marginal multivariate model that accounts for inter-relationships between exposures. One such model is the multivariate Dale model, which is based on multiple logistic regressions. Applied in a case/control setting, the model therefore leads to straightforward interpretations similar to those drawn in a classical logistic modeling framework.
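The multivariate Dale model itself is specified in the cited literature; as a minimal point of reference, the sketch below shows the classical genotype-level approach the abstract mentions — a logistic regression of case/control status on several SNPs — using statsmodels on simulated data. It does not implement the Dale model's marginal multivariate structure; genotypes, effect sizes and sample size are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# simulated genotypes for 3 SNPs coded 0/1/2 (minor-allele counts), 500 subjects
n = 500
snps = rng.binomial(2, 0.3, size=(n, 3))

# disease risk depends on the first SNP only in this toy example
logit = -1.0 + 0.8 * snps[:, 0]
disease = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# classical genotype-level analysis: logistic regression of status on the SNPs
X = sm.add_constant(snps)
fit = sm.Logit(disease, X).fit(disp=0)
print("log-odds ratios:", fit.params.round(3))
print("p-values:       ", fit.pvalues.round(3))
```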