111.
The analytical scale of most mass-spectrometry-based targeted proteomics assays is usually limited by assay performance and instrument utilization. A recently introduced method, triggered by offset, multiplexed, accurate mass, high resolution, and absolute quantitation (TOMAHAQ), combines peptide and sample multiplexing to simultaneously improve analytical scale and quantitative performance. In the present work, critical technical requirements and data analysis considerations for successful implementation of the TOMAHAQ technique are discussed, based on a study of 185 target peptides across more than 200 clinical plasma samples. Importantly, significant interference is observed to originate from the TMTzero reporter ion used for the synthetic trigger peptides. This interference is unexpected because only TMT10plex reporter ions from the target peptides should be observed under typical TOMAHAQ conditions. To unlock the promise of the technique for high-throughput quantification, a post-acquisition data correction strategy is proposed here to deconvolute the reporter ion superposition and recover reliable data.
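A minimal sketch of the kind of post-acquisition correction described above, assuming the TMTzero contamination in each TMT10plex channel scales linearly with the trigger-peptide signal. The function name, bleed fractions, and numbers are hypothetical illustrations, not the correction procedure from the study.

```python
import numpy as np

def correct_reporter_ions(reporter_intensities, trigger_intensity, bleed_fractions):
    """Subtract an estimated TMTzero contribution from each TMT10plex channel.

    reporter_intensities : (n_channels,) observed reporter ion intensities
    trigger_intensity    : scalar signal of the synthetic trigger peptide
    bleed_fractions      : (n_channels,) assumed fraction of the trigger signal
                           leaking into each channel (empirically estimated)
    """
    estimated_interference = trigger_intensity * np.asarray(bleed_fractions)
    corrected = np.asarray(reporter_intensities) - estimated_interference
    return np.clip(corrected, 0.0, None)  # intensities cannot go negative

# Example with made-up numbers
observed = np.array([1200., 980., 1500., 1100., 900., 1300., 1250., 1000., 950., 1400.])
corrected = correct_reporter_ions(observed, trigger_intensity=5e4,
                                  bleed_fractions=np.full(10, 0.002))
print(corrected)
```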
112.
Microbes play important roles in human health and disease. The interaction between microbes and their hosts is reciprocal but remains largely underexplored. Current computational resources lack manually and consistently curated data connecting metagenomic data to pathogenic microbes, microbial core genes, and disease phenotypes. We developed the MicroPhenoDB database by manually curating and consistently integrating microbe-disease association data. MicroPhenoDB provides 5677 non-redundant associations between 1781 microbes and 542 human disease phenotypes across more than 22 human body sites. It also provides 696,934 relationships between 27,277 unique clade-specific core genes and 685 microbes. Disease phenotypes are classified and described using the Experimental Factor Ontology (EFO). A refined score model was developed to prioritize the associations based on evidential metrics. The sequence search option in MicroPhenoDB enables rapid identification of pathogenic microbes present in samples without running the usual metagenomic data processing and assembly. MicroPhenoDB offers data browsing, searching, and visualization through user-friendly web interfaces and web service application programming interfaces. It is the first database platform to detail the relationships between pathogenic microbes, core genes, and disease phenotypes, and it will accelerate metagenomic data analysis and assist studies in decoding microbes related to human diseases. MicroPhenoDB is available at http://www.liwzlab.cn/microphenodb and http://lilab2.sysu.edu.cn/microphenodb.
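As a purely illustrative sketch of prioritizing associations by evidential metrics, the snippet below combines assumed evidence-type weights into a single score; the weighting scheme and values are made up, and the refined score model actually used by MicroPhenoDB is not reproduced here.

```python
# Hypothetical evidence-weighted scoring of microbe-disease associations.
EVIDENCE_WEIGHTS = {            # assumed weights per evidence type (illustrative only)
    "experimental": 1.0,
    "clinical_cohort": 0.8,
    "literature_mining": 0.4,
}

def association_score(evidence_counts):
    """Combine counts of supporting evidence into a single priority score."""
    return sum(EVIDENCE_WEIGHTS.get(kind, 0.0) * n
               for kind, n in evidence_counts.items())

associations = [
    {"microbe": "Helicobacter pylori", "disease": "gastric ulcer",
     "evidence": {"experimental": 3, "literature_mining": 12}},
    {"microbe": "Fusobacterium nucleatum", "disease": "colorectal cancer",
     "evidence": {"clinical_cohort": 2, "literature_mining": 5}},
]

ranked = sorted(associations, key=lambda a: association_score(a["evidence"]), reverse=True)
for a in ranked:
    print(a["microbe"], "->", a["disease"], round(association_score(a["evidence"]), 2))
```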
117.
China's rapid economic development and its reliance on overconsumption of natural resources have led to serious environmental pollution. Environmental taxation is seen as an effective economic tool to help mitigate air pollution. To assess the effects of different environmental taxation policy scenarios, we propose a frontier-based environmentally extended input–output optimization model with explicit emission abatement sectors that reflect the inputs and benefits of abatement. Frontier analysis ensures that policy scenarios are assessed against the same technical-efficiency benchmark, while input–output analysis captures the wide range of economic transactions among the sectors of an economy. Four scenarios are considered in this study: increasing the tax rates on SO2, NOx, and soot and dust separately, and increasing all three tax rates simultaneously. Our estimation results show that raising the SO2, NOx, and soot and dust tax rates simultaneously would yield the largest emission reductions, with the SO2 tax rate contributing the most. Raising the soot and dust tax rate is the most environmentally friendly strategy because it delivers the largest welfare gain through avoided health costs. The combination of frontier analysis and input–output analysis gives policy makers a comprehensive, sector-level approach to assessing the costs and benefits of environmental taxation.
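For context, the sketch below shows the standard environmentally extended input–output calculation (Leontief demand model with emission intensities) that this kind of model builds on. The three-sector coefficients, final demand, and emission intensities are made-up numbers, and the paper's frontier analysis, optimization, and explicit abatement sectors are not modeled here.

```python
import numpy as np

A = np.array([[0.20, 0.10, 0.05],   # technical coefficients: inter-sector inputs per unit output (assumed)
              [0.15, 0.25, 0.10],
              [0.05, 0.10, 0.15]])
y = np.array([100.0, 200.0, 150.0])  # final demand per sector (assumed)
f = np.array([0.8, 0.3, 0.1])        # SO2 emission intensity per unit output (assumed)

# Leontief inverse gives the total output required to satisfy final demand
x = np.linalg.solve(np.eye(3) - A, y)

sector_emissions = f * x             # emissions attributable to each sector
print("Total output by sector:", np.round(x, 1))
print("SO2 emissions by sector:", np.round(sector_emissions, 1))
print("Total SO2 emissions:", round(sector_emissions.sum(), 1))
```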
118.
Life cycle assessment (LCA) and environmentally extended input–output analysis (EEIOA) are two techniques commonly used to assess the environmental impacts of an activity or product. Their strengths and weaknesses are complementary, and they are therefore regularly combined into hybrid LCAs. A number of hybrid LCA approaches exist, and they lead to different results. One of the differences is the method used to ensure that the mixed LCA and EEIOA data do not overlap, referred to as correction for double counting. This aspect of hybrid LCA is often ignored in reports of hybrid assessments, and no comprehensive study of it has been carried out. This article strives to list, compare, and analyze the existing methods for the correction of double counting. We first harmonize the definitions of the existing correction methods and express them in a common notation, before introducing a streamlined variant. We then compare their respective assumptions and limitations. We discuss the loss of specific information about the studied activity or product and the loss of a coherent financial representation caused by some of the correction methods. This analysis clarifies which techniques are most applicable to different tasks, from hybridizing individual LCA processes to integrating complete databases. We conclude by giving recommendations for future hybrid analyses.
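As an illustration of one common style of double-counting correction, the sketch below zeroes out input–output purchases already covered by the process-level inventory before adding upstream impacts. The vectors and the binary-mask approach are simplified assumptions and do not reproduce any specific correction method compared in the article.

```python
import numpy as np

io_requirements = np.array([0.04, 0.12, 0.30])   # $ of each IO sector input per functional unit (assumed)
covered_by_lca  = np.array([1,    0,    1   ])   # 1 = flow already captured in the process inventory

# Keep only the flows missing from the process-based LCA data, avoiding double counting
corrected = io_requirements * (1 - covered_by_lca)

io_impact_intensity = np.array([2.1, 0.9, 1.5])  # kg CO2e per $ of sector output (assumed)
upstream_addition = corrected @ io_impact_intensity

process_impact = 5.0                             # kg CO2e from the process-based LCA (assumed)
print("Hybrid result:", round(process_impact + upstream_addition, 2), "kg CO2e")
```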