81.
Codon optimization improves the expression activity of aiiaB546 in Pichia pastoris
N-acyl homoserine lactonases are hydrolytic enzymes that specifically degrade N-acyl homoserine lactone (AHL) signal molecules. By hydrolyzing AHLs into acyl homoserines they inactivate the signal, blocking the quorum-sensing pathway of pathogenic bacteria and stripping them of their pathogenicity; these enzymes are widely distributed among many microorganisms [1,2]. In recent years, N-acyl homoserine lactonase has become a focus of research on controlling bacterial diseases in aquaculture, serving as a tool enzyme for a novel antibacterial approach, the quorum-quenching strategy [3-5].
82.
Differences in understory plant diversity among Juglans mandshurica plantations of different stand ages
Taking Juglans mandshurica plantations of different stand ages in the Maoershan region of Heilongjiang Province as the study object, and accounting for the influence of allelochemicals, we examined differences in plant diversity among stands of different ages. The results showed that with increasing stand age, the understory shrub richness index (IMa), diversity index (Isw), and Pielou evenness index (J) all increased, whereas for understory herbs the two indices other than evenness decreased with stand age, and the number of herb species gradually declined from 14 to 10. In the 16-year-old stand, species with high importance values included Rubus crataegifolius (Rosaceae), Erigeron annuus and dandelion (Asteraceae), and Potentilla centigrana (Rosaceae). In the 23-year-old stand, the shrubs with high importance values were Ulmus davidiana var. japonica (Ulmaceae) and Syringa reticulata var. amurensis (Oleaceae), and the dominant herbs were Diarrhena mandshurica (Poaceae) and ferns. In the 51-year-old stand, the shrub with the highest importance value was Syringa reticulata var. amurensis, and the dominant herbs were Equisetum hyemale (Equisetaceae) and Brachybotrys paridiformis (Boraginaceae). Plant diversity in the J. mandshurica stands was little affected by juglone, while soil available phosphorus and available potassium strongly influenced the shrub diversity indices; shrubs and herbs clearly differed in the pH range they tolerate; and other soil properties, such as bulk density, water content, organic matter, and total nitrogen, had opposite effects on shrub versus herb diversity indices.
84.
The storage of red blood cells (RBCs) in a refrigerated state allows a shelf life of a few weeks, whereas RBCs frozen in 40% glycerol have a shelf life of 10 years. Despite the clear logistical advantages of frozen blood, it is not widely used in transfusion medicine. One of the main reasons is that existing post‐thaw washing methods to remove glycerol are prohibitively time consuming, requiring about an hour to remove glycerol from a single unit of blood. In this study, we have investigated the potential for more rapid removal of glycerol. Using published biophysical data for human RBCs, we mathematically optimized a three‐step deglycerolization process, yielding a procedure that was less than 32 s long. This procedure was found to yield 70% hemolysis, a value that was much higher than expected. Consequently, we systematically evaluated three‐step deglycerolization procedures, varying the solution composition and equilibration time in each step. Our best results consisted of less than 20% hemolysis for a deglycerolization time of 3 min, and it is expected that even further improvements could be made with a more thorough optimization and more reliable biophysical data. Our results demonstrate the potential for significantly reducing the deglycerolization time compared with existing methods. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 29:609–620, 2013
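The study's optimization rests on biophysical membrane-transport models that are not reproduced here, but the dilution arithmetic behind multi-step washing is easy to illustrate. The sketch below is illustrative only (the function name and mixing ratios are invented, and a real protocol must also account for osmotic water flux and cell-volume limits); it shows why splitting deglycerolization into several steps reduces the concentration jump the cells experience at each step while reaching the same final glycerol level:

```python
def serial_dilution(c0, wash_ratios):
    """Extracellular glycerol concentration (% w/v) after successive wash
    steps, each mixing 1 volume of suspension with r volumes of
    glycerol-free wash solution (ideal, instantaneous mixing assumed)."""
    c = c0
    for r in wash_ratios:
        c = c / (1 + r)
    return c

# A single aggressive wash and three gentler 1:3 washes reach the same
# end point, but the per-step concentration drop differs greatly:
one_step = serial_dilution(40.0, [63.0])            # 40% -> 0.625% in one jump
three_step = serial_dilution(40.0, [3.0, 3.0, 3.0])  # 40% -> 10% -> 2.5% -> 0.625%
```

The same end concentration can thus be reached with much smaller per-step osmotic gradients, which is the qualitative motivation for multi-step deglycerolization.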
86.
Transfer of Natural Micro Structures to Bionic Lightweight Design Proposals
The abstraction of complex biological lightweight structure features into a producible technical component is a fundamental step within the transfer of design principles from nature to technical lightweight solutions. A major obstacle for the transfer of natural lightweight structures to technical solutions is their peculiar geometry. Since natural lightweight structures possess irregularities and often have extremely complex forms due to elaborate growth processes, it is usually necessary to simplify their design principles. This step of simplification/abstraction has been used in different biomimetic methods, but so far it has an arbitrary component, i.e. it crucially depends on the competence of the person who executes the abstraction. This paper describes a new method for the abstraction and specialization of natural micro structures for technical lightweight components. The new method generates stable lightweight design principles by using topology optimization within a design space of preselected biological archetypes such as diatoms or radiolarians. The resulting solutions are adapted to the technical load cases and production processes, can be created in a large variety, and may be further optimized, e.g. by using parametric optimization.
88.
In many research disciplines, hypothesis tests are applied to evaluate whether findings are statistically significant or could be explained by chance. The Wilcoxon–Mann–Whitney (WMW) test is among the most popular hypothesis tests in medicine and life science to analyze if two groups of samples are equally distributed. This nonparametric statistical homogeneity test is commonly applied in molecular diagnosis. Generally, the solution of the WMW test takes a high combinatorial effort for large sample cohorts containing a significant number of ties. Hence, the P value is frequently approximated by a normal distribution. We developed EDISON-WMW, a new approach to calculate the exact permutation of the two-tailed unpaired WMW test without any corrections required and allowing for ties. The method relies on dynamic programming to solve the combinatorial problem of the WMW test efficiently. Beyond a straightforward implementation of the algorithm, we present different optimization strategies and a parallel solution. Using our program, the exact P value for large cohorts containing more than 1000 samples with ties can be calculated within minutes. We demonstrate the performance of this novel approach on randomly generated data, benchmark it against 13 other commonly applied approaches, and moreover evaluate molecular biomarkers for lung carcinoma and chronic obstructive pulmonary disease (COPD). We found that approximated P values were generally higher than the exact solution provided by EDISON-WMW. Importantly, the algorithm can also be applied to high-throughput omics datasets, where hundreds or thousands of features are included. To provide easy access to the multi-threaded version of EDISON-WMW, a web-based solution of our algorithm is freely available at http://www.ccb.uni-saarland.de/software/wtest/.
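The core idea of computing an exact WMW P value by dynamic programming can be sketched for the simpler tie-free case (EDISON-WMW itself also handles ties, and its actual implementation is not reproduced here; all function names below are invented for illustration). The DP counts, for every possible rank sum s, how many size-n1 subsets of the pooled ranks achieve it, which gives the exact null distribution of the rank-sum statistic:

```python
from math import comb

def exact_rank_sum_counts(n1, n2):
    """counts[s] = number of size-n1 subsets of ranks {1..n1+n2} with
    rank sum s. Classic 0/1 subset-sum DP; ties are not handled."""
    N = n1 + n2
    max_sum = sum(range(N - n1 + 1, N + 1))       # sum of the n1 largest ranks
    dp = [[0] * (max_sum + 1) for _ in range(n1 + 1)]
    dp[0][0] = 1
    for r in range(1, N + 1):                     # consider rank r
        for j in range(min(r, n1), 0, -1):        # descending j: use r at most once
            for s in range(r, max_sum + 1):
                dp[j][s] += dp[j - 1][s - r]
    return dp[n1]

def exact_wmw_pvalue(x, y):
    """Two-sided exact P value of the rank-sum statistic.
    Assumes all values in x + y are distinct (no ties)."""
    n1, n2 = len(x), len(y)
    pooled = sorted(x + y)
    w = sum(pooled.index(v) + 1 for v in x)       # rank sum of sample x
    counts = exact_rank_sum_counts(n1, n2)
    mean = n1 * (n1 + n2 + 1) / 2                 # null mean of the rank sum
    dev = abs(w - mean)
    extreme = sum(c for s, c in enumerate(counts) if abs(s - mean) >= dev)
    return extreme / comb(n1 + n2, n1)
```

For x = [1, 2] and y = [3, 4] this yields 1/3, the smallest two-sided exact P value attainable with two samples of size two, matching the usual exact WMW tables.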
89.
Multi‐column capture processes show several advantages compared to batch capture. It is, however, not evident exactly how many columns one should use. To investigate this issue, twin‐column CaptureSMB, 3‐ and 4‐column periodic counter‐current chromatography (PCC), and single‐column batch capture are numerically optimized and compared in terms of process performance for capturing a monoclonal antibody using protein A chromatography. Optimization is carried out with respect to productivity and capacity utilization (amount of product loaded per cycle compared to the maximum amount possible), while keeping yield and purity constant. For a wide range of process parameters, all three multi‐column processes show similar maximum capacity utilization and perform significantly better than batch. When maximizing productivity, the CaptureSMB process shows optimal performance, except at high feed titers, where batch chromatography can reach higher productivity values than the multi‐column processes due to the complete decoupling of the loading and elution steps, albeit at a large cost in terms of capacity utilization. In terms of trade‐off, i.e. how much the capacity utilization decreases with increasing productivity, CaptureSMB is optimal for low and high feed titers, whereas the 3‐column process is optimal in an intermediate region. Using these findings, the most suitable process can be chosen for different production scenarios.
90.
Extreme learning machine (ELM) is a novel and fast learning method to train single-layer feed-forward networks. However, due to the demand for a large number of hidden neurons, the prediction speed of ELM is not fast enough. An evolution-based ELM with differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative and exploitative power and reach better performance. The rationale and implementation of the DECRO algorithm are discussed in detail. DE, CRO, and DECRO are each applied to ELM training. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtains better performance for training ELM than both DE and CRO.
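The baseline being improved here is the ELM itself: hidden-layer weights are drawn at random and never trained, and only the output weights are solved in closed form by least squares. A minimal single-output sketch is below (function names and hyperparameters are invented for illustration; DECRO's evolutionary tuning of the hidden layer is not shown):

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=None):
    """Minimal ELM: random tanh hidden layer, output weights via
    the Moore-Penrose pseudoinverse (closed-form least squares)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # only these weights are fitted
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a 1-D toy regression problem:
X = np.linspace(0.0, 3.0, 100).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = train_elm(X, y, n_hidden=50, seed=0)
mse = float(np.mean((predict_elm(X, W, b, beta) - y) ** 2))
```

Because the only trained parameters come from one pseudoinverse solve, training is very fast; the cost, as the abstract notes, is that many hidden neurons may be needed, which is what evolutionary variants like DE-ELM and DECRO-ELM aim to mitigate.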