161.
The rapid expansion of methods for measuring biological data, ranging from DNA sequence variation to mRNA expression and protein abundance, presents the opportunity to use multiple types of information jointly in the study of human health and disease. Organisms are complex systems that integrate inputs at myriad levels to arrive at an observable phenotype. It is therefore essential that questions concerning the etiology of phenotypes as complex as common human diseases take the systemic nature of biology into account, and integrate the information provided by each data type in a manner analogous to the operation of the body itself. While limited in scope, the initial forays into the joint analysis of multiple data types have yielded interesting results that would not have been reached had only one type of data been considered. These early successes, along with the aforementioned theoretical appeal of data integration, provide impetus for the development of methods for the parallel, high-throughput analysis of multiple data types. The idea that the integrated analysis of multiple data types will improve the identification of biomarkers of clinical endpoints, such as disease susceptibility, is presented as a working hypothesis.
162.
Abstract

A series of lipophilic ester derivatives (2a–g) of (S)-1-(pent-4-enoyl)-4-(hydroxymethyl)-azetidin-2-one has been synthesised in three steps from (S)-4-(benzyloxycarbonyl)-azetidin-2-one and evaluated as novel, reversible β-lactamic inhibitors of the endocannabinoid-degrading enzymes human fatty acid amide hydrolase (hFAAH) and monoacylglycerol lipase (hMAGL). The compounds showed IC50 values in the micromolar range and selectivity for hFAAH over hMAGL. The unexpected 1000-fold decrease in activity of 2a compared with the known regioisomeric structure 1a (i.e. lipophilic chains placed on the N1 and C3 positions of the β-lactam core) could be explained on the basis of docking studies into a revisited model of the hFAAH active site, considering one or two water molecules in interaction with the catalytic triad.
163.
Summary

Pheromones can be used as attractants for the opposite sex in many environments; however, little is known about the search strategies employed in responding to pheromones in the marine environment. The spawning behavior of males of the polychaete Nereis succinea is known to be triggered at close range by a high concentration (>~10⁻⁷ M) of pheromone, cysteine glutathione disulfide (CSSG), released by females. Since CSSG also causes acceleration of swimming and increased turning, in addition to eliciting ejaculation, we proposed the hypothesis that these behaviors elicited by low concentrations of pheromone can be used by males to find females. The current study develops a computer simulation model of male and female N. succinea behavior for testing whether male responses to low concentrations of CSSG can facilitate finding females. Video recording of female swimming behavior in the field showed spontaneous loops, spirals, and circles that have been incorporated into the model. The scientific workflow paradigm within which the computer model has been developed also incorporates a data provenance system to enable systematic replay and testing of responses to individual parameters. Output of the model shows complex turning behavior leading to successful mating encounters at concentrations as low as 3×10⁻⁹ M CSSG. Behavior resembling the output of the model was recorded in field observations. Application of the model in the future will be used to determine what pheromone concentrations produce significant increases in the probability of mating encounters.
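The concentration-dependent kinesis described above (faster swimming and more frequent turning at higher CSSG levels) can be sketched as a simple agent simulation. This is a minimal illustration of the mechanism, not the authors' published model: the stationary point-source plume, the speed/turn parameters, and the encounter radius are all assumptions chosen for the sketch.

```python
import math
import random

def cssg_concentration(x, y, source_strength=1e-6):
    # Idealized radial plume around a stationary female at the origin
    # (the published model instead uses recorded female swimming paths).
    r = math.hypot(x, y)
    return source_strength / max(r, 0.1) ** 2

def simulate_male(steps=5000, threshold=3e-9, seed=0):
    """Concentration-dependent kinesis: above `threshold` the male swims
    faster and turns more sharply, which tends to keep it near the source.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    x, y = 20.0, 20.0                      # start outside the response zone
    heading = rng.uniform(0.0, 2.0 * math.pi)
    for _ in range(steps):
        if cssg_concentration(x, y) > threshold:
            speed, turn_sd = 0.08, 0.9     # aroused: fast, strongly turning
        else:
            speed, turn_sd = 0.04, 0.2     # searching: slow, nearly straight
        heading += rng.gauss(0.0, turn_sd)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if math.hypot(x, y) < 0.2:         # arbitrary mating-encounter radius
            return True
    return False
```

Sweeping `threshold` over a range of concentrations in such a sketch is the kind of experiment the article's provenance-tracked workflow is designed to replay systematically.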
164.
165.
An algorithm of automatic classification is proposed and applied to a large collection of perennial ryegrass wild populations from France. The method is based on ascendant hierarchical clustering using the Euclidean distance computed from the principal components extracted from the variance-covariance matrix of 28 agronomic traits. A contiguity constraint is imposed: only those pairs of populations which are defined as contiguous are grouped together into a cluster. The definition of contiguity is based on a geostatistical parameter: the range of the variogram, i.e. the largest distance above which the variance between pairs of populations no longer increases. This method yields clusters that are generally more compact than those obtained without the constraint. In most cases the contours of these clusters fit well with known ecogeographic regions, notably those with homogeneous macroclimatic conditions. This suggests that selective factors exert a major influence on the genetic differentiation of ryegrass populations for quantitatively inherited adaptive traits. It is proposed that such a method could provide useful genetic and ecogeographic bases for sampling a core collection in widespread wild species such as forage grasses.
Institut National de la Recherche Agronomique
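The contiguity constraint can be illustrated with a small average-linkage agglomerative clustering in which two clusters may merge only if at least one member pair lies within the variogram range. This is a sketch of the constraint mechanism, not the authors' exact algorithm; the "mergeable if any member pair is contiguous" rule and all numeric values are assumptions.

```python
import math

def contiguous(p, q, coords, range_km):
    # Two populations are "contiguous" when their geographic separation
    # is at most the variogram range (hypothetical value here).
    return math.dist(coords[p], coords[q]) <= range_km

def constrained_clustering(traits, coords, range_km, n_clusters):
    """Average-linkage agglomerative clustering on trait vectors that
    merges only geographically contiguous clusters."""
    clusters = [[i] for i in range(len(traits))]
    while len(clusters) > n_clusters:
        best, pair = None, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # A pair of clusters is mergeable only if some member
                # pair satisfies the contiguity constraint.
                if not any(contiguous(p, q, coords, range_km)
                           for p in clusters[a] for q in clusters[b]):
                    continue
                d = sum(math.dist(traits[p], traits[q])
                        for p in clusters[a] for q in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best:
                    best, pair = d, (a, b)
        if pair is None:          # no contiguous pair left to merge
            break
        a, b = pair
        clusters[a] += clusters.pop(b)
    return clusters
```

In the article the trait vectors would be principal-component scores rather than raw traits, and the range would be estimated from the variogram of the data.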
166.
Summary
Case–cohort sampling is a commonly used and efficient method for studying large cohorts. Most existing methods of analysis for case–cohort data have concerned the analysis of univariate failure time data. However, clustered failure time data are commonly encountered in public health studies. For example, patients treated at the same center are unlikely to be independent. In this article, we consider methods based on estimating equations for case–cohort designs for clustered failure time data. We assume a marginal hazards model, with a common baseline hazard and common regression coefficient across clusters. The proposed estimators of the regression parameter and cumulative baseline hazard are shown to be consistent and asymptotically normal, and consistent estimators of the asymptotic covariance matrices are derived. The regression parameter estimator is easily computed using any standard Cox regression software that allows for offset terms. The proposed estimators are investigated in simulation studies, and demonstrated empirically to have increased efficiency relative to some existing methods. The proposed methods are applied to a study of mortality among Canadian dialysis patients.
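The weighting idea behind case-cohort estimating equations can be illustrated with Barlow-style inverse-sampling-probability weights: every case contributes with weight 1, while non-case subcohort members stand in for the unsampled cohort and are up-weighted accordingly. This is a generic case-cohort device for intuition, not the article's exact estimator.

```python
def case_cohort_weights(in_subcohort, is_case, sampling_fraction):
    """Barlow-type weights for a case-cohort sample.

    Cases get weight 1; non-case subcohort members get the inverse of
    the subcohort sampling fraction; subjects outside both groups were
    never sampled and contribute nothing. Illustrative sketch only."""
    weights = []
    for sub, case in zip(in_subcohort, is_case):
        if case:
            weights.append(1.0)
        elif sub:
            weights.append(1.0 / sampling_fraction)
        else:
            weights.append(0.0)   # not sampled: excluded from estimation
    return weights
```

Such weights (or functions of them supplied as offsets) are what make the analysis feasible in standard Cox regression software, as the abstract notes.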
167.
Data availability and data quality are still critical factors for successful LCA work. The SETAC-Europe LCA Working Group ‘Data Availability and Data Quality’ has therefore focused on ongoing developments toward a common data exchange format, public databases and accepted quality measures, to find science-based solutions that can be widely accepted. A necessary prerequisite for the free flow and exchange of life cycle inventory (LCI) data and the comparability of LCIs is the consistent definition, nomenclature, and use of inventory parameters. This is the main subject of the subgroup ‘Recommended List of Exchanges’, which presents its results and findings here:
•  Rigid parameter lists for LCIs are not practical; especially, compulsory lists of measurements for all inventories are counterproductive. Instead, practitioners should be obliged to give the rationale for their scientific choice of selected and omitted parameters. The standardized (not: mandatory!) parameter list established by the subgroup can help to facilitate this.
•  The standardized nomenclature of LCI parameters and the standardized list of measurement bases (units) for these parameters need not be applied internally (e.g. in LCA software), but should be adhered to in external communications (data for publication and exchange). Deviations need to be clearly stated.
•  Sum parameters may or may not overlap - misinterpretations in either direction introduce a bias of unknown significance in the subsequent life cycle impact assessments (LCIA). The only person who can discriminate unambiguously is the practitioner who measures or calculates such values. Therefore, a clear statement of independence or overlap is necessary for every sum parameter reported.
•  Sum parameters should be only used when the group of emissions as such is measured. Individually measured emission parameters should not be hidden in group or sum parameters.
•  Problematic substances (such as carcinogens, ozone-depleting agents and the like) may never be obscured in group emissions (together with less harmful substances or with substances of different environmental impact), but must be determined and reported individually, as mentioned in paragraph 3.3 of this article.
•  Mass and energy balances should be carried out on a unit process level. Mass balances should be done on the level of the entire mass flow in a process as well as on the level of individual chemical elements.
•  Whenever possible, practitioners should try to fill data gaps with their knowledge of analogous processes, environmental expert judgements, mass balance calculations, worst case assumptions or similar estimation procedures.
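The nomenclature recommendations above lend themselves to a mechanical check at export time. The sketch below validates exchange names and units against a recommended list before data are published or exchanged, reporting deviations so they can be "clearly stated" as the subgroup requires. The list shown is a hypothetical, heavily abbreviated stand-in; the real recommended list of exchanges is far larger and its entries may differ.

```python
# Hypothetical, abbreviated stand-in for the subgroup's recommended
# list of exchanges (parameter name -> standardized unit).
RECOMMENDED_EXCHANGES = {
    "Carbon dioxide, fossil": "kg",
    "Methane": "kg",
    "Energy, gross calorific value": "MJ",
}

def check_inventory(exchanges):
    """Return the deviations from the standardized nomenclature that an
    LCI data set must state explicitly in external communications."""
    deviations = []
    for name, unit in exchanges:
        if name not in RECOMMENDED_EXCHANGES:
            deviations.append(f"non-standard parameter name: {name!r}")
        elif unit != RECOMMENDED_EXCHANGES[name]:
            deviations.append(
                f"{name!r}: unit {unit!r} instead of "
                f"{RECOMMENDED_EXCHANGES[name]!r}")
    return deviations
```

Note that, per the recommendations, an empty deviation list is required only for external exchange; internal software is free to use its own conventions.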
168.
Qualitative validation consists in showing that a model is able to mimic available observed data. In population level biological models, the available data frequently represent a group status, such as pool testing, rather than the individual statuses. They are aggregated. Our objective was to explore an approach for qualitative validation of a model with aggregated data and to apply it to validate a stochastic model simulating the bovine viral-diarrhoea virus (BVDV) spread within a dairy cattle herd. Repeated measures of the level of BVDV-specific antibodies in the bulk-tank milk (total milk production of a herd) were used to summarise the BVDV herd status. First, a domain of validation was defined to ensure a comparison restricted to dynamics of pathogen spread well identified among observed aggregated data (new herd infection with a wide BVDV spread). For simulations, scenarios were defined and simulation outputs at the individual animal level were aggregated at the herd level using an aggregation function. Comparison was done only for observed data and simulated aggregated outputs that were in the domain of validation. The validity of our BVDV model was not rejected. Drawbacks and ways of improvement of the approach are discussed.
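The aggregation step, turning simulated individual-animal outputs into a herd-level quantity comparable with a bulk-tank measurement, can be sketched as follows. The milk-yield-weighted mean and the record fields are illustrative assumptions, not the published aggregation function.

```python
def bulk_tank_antibody_level(cows):
    """Aggregate per-cow simulation outputs to a herd-level value: a
    milk-yield-weighted mean of antibody levels over lactating cows,
    mimicking what a bulk-tank milk sample would measure.

    Each cow is a dict with (hypothetical) keys 'lactating',
    'milk_kg', and 'antibody'. Illustrative sketch only."""
    total_milk = sum(c["milk_kg"] for c in cows if c["lactating"])
    if total_milk == 0:
        return 0.0          # empty tank: nothing to measure
    return sum(c["antibody"] * c["milk_kg"]
               for c in cows if c["lactating"]) / total_milk
```

Only simulated trajectories whose aggregated output falls inside the domain of validation would then be compared with the observed bulk-tank series.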
169.
Highlights
  •  Multiplex epitope mapping/antigenic determinant identification in the gas phase.
  •  Intact transition and controlled dissociation of immune complexes by MS.
  •  Simultaneous identification and amino acid sequence determination of epitopes.
  •  Simplified in-solution sample handling because of ion manipulation and filtering by MS.
170.
EPA Method 1615 was developed with the goal of providing a standard method for measuring enteroviruses and noroviruses in environmental and drinking waters. The standardized sampling component of the method concentrates viruses that may be present in water by passing a minimum specified volume of water through an electropositive cartridge filter. The minimum specified volumes for surface and finished/ground water are 300 L and 1,500 L, respectively. A major limitation of the method is the tendency for the filters to clog before the sample volume requirement is met. Studies using two different, but equivalent, cartridge filter options showed that filter clogging was a problem for 10% of samples with one filter type, compared with 6% for the other. Clogging tends to increase with turbidity, but cannot be predicted from turbidity measurements alone. From a cost standpoint one of the filter options is preferable to the other, but the water quality and experience with the water system to be sampled should be taken into consideration when selecting filters.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号