231.
Endoplasmic reticulum-associated protein degradation (ERAD) is a stringent quality-control mechanism through which misfolded, unassembled, and some native proteins are targeted for degradation to maintain cellular and organelle homeostasis. Several in vitro and in vivo ERAD studies have provided mechanistic insights into ERAD pathway activation and its downstream events; however, most have investigated ERAD substrates and the diseases that arise when their degradation is disrupted. In this review, we present all reported human single-gene disorders caused by genetic variation in genes that encode ERAD components rather than their substrates. Additionally, after an extensive literature survey, we present the genetically manipulated higher-eukaryotic cell and mammalian animal models that lack specific components acting at the various stages of the ERAD pathway.
232.
How the complexity of food webs depends on environmental variables is a long-standing ecological question. It is unclear, though, how food-chain length should vary with adaptive evolution of the constituent species. Here we model the evolution of species colonisation rates and its consequences for occupancies and food-chain length in metacommunities. When colonisation rates can evolve, longer food chains can persist. Extinction, perturbation and habitat loss all affect evolutionarily stable colonisation rates, but the strength of the competition-colonisation trade-off plays the major role: weaker trade-offs yield longer chains. Although such eco-evolutionary dynamics partly alleviate the spatial constraint on food-chain length, they are no magic bullet: the highest, most vulnerable trophic levels are also those that benefit least from evolution. We provide qualitative predictions regarding how trait evolution affects the response of communities to disturbance and habitat loss. This highlights the importance of eco-evolutionary dynamics at the metacommunity level in determining food-chain length.
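The spatial constraint on food-chain length that the abstract discusses can be illustrated with a minimal Holt-type patch-occupancy sketch (not the authors' model: the parameter values, the linear-chain assumption, and the function names below are invented for illustration). Each trophic level can only occupy patches where its prey is present, so its equilibrium occupancy is its prey's occupancy minus an extinction/colonisation ratio:

```python
import numpy as np

def equilibrium_occupancies(c, m, h=1.0):
    """Equilibrium patch occupancies for a linear food chain in which each
    trophic level colonises only patches containing its prey
    (Holt-type model: dp_i/dt = c_i p_i (p_{i-1} - p_i) - m_i p_i,
    giving p_i* = p_{i-1}* - m_i/c_i).  h is the fraction of habitable
    patches available to the basal level."""
    p = []
    prey = h
    for ci, mi in zip(c, m):
        pi = prey - mi / ci
        p.append(max(pi, 0.0))   # a level with non-positive occupancy is absent
        prey = p[-1]
    return p

def chain_length(c, m, h=1.0):
    """Number of trophic levels with positive equilibrium occupancy."""
    return sum(1 for pi in equilibrium_occupancies(c, m, h) if pi > 0)
```

Raising the colonisation rates `c` (as evolution does in the paper's model) lets more trophic levels persist, since each level subtracts a smaller `m/c` from its prey's occupancy.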
233.
Horizontal biosedimentary gradients across the Sado estuary, W. Portugal
The Sado estuary, the second largest in Portugal, comprises an outer estuary inside the entrance channel and an inner estuary, on the inward side of which the tidal mudflats begin. The outer-estuary subtidal area covers approximately 70 km² and presents a series of longitudinal intertidal sandbanks separating a northern and a southern channel. A benthic survey of the outer estuary was undertaken in June 1986, in which superficial sediments and macrofauna were sampled at 133 locations. The environmental variables measured in the superficial sediments were temperature, granulometric structure, silt, sand and gravel content, and total organic matter content. The primary biological variables studied for the macrofauna were species composition, abundance and biomass, the latter calculated on wet, dry and ash-free dry weight. The granulometry and organic content of the superficial sediments agreed with the transient and residual current velocity fields simulated in a 2-D hydrodynamic model previously developed for the outer estuary. The superficial sediments of the northern channel showed higher silt and total organic matter content, while the model also suggested lower transient and residual velocities, water flow and shear stress in this channel. The distribution patterns of the subtidal macrofauna separated into two main groups of species, one comprising taxa settled essentially near the estuarine mouth and the other farther inward. The primary biological variables also showed consistent patterns, comparable to those of other Portuguese estuaries. The major subtidal benthic biotopes were obtained through classification analysis and related to the prevailing hydrophysical and sedimentary conditions in the outer estuary.
234.
A new method has been developed to compute the probability that each amino acid in a protein sequence is in a particular secondary structural element. Each of these probabilities is computed using the entire sequence and a set of predefined structural class models. This set of structural classes is patterned after Jane Richardson's taxonomy for the domains of globular proteins. For each structural class considered, a mathematical model is constructed to represent constraints on the pattern of secondary structural elements characteristic of that class. These are stochastic models having discrete state spaces (referred to as hidden Markov models by researchers in signal processing and automatic speech recognition). Each model is a mathematical generator of amino acid sequences; the sequence under consideration is modeled as having been generated by one model in the set of candidates. The probability that each model generated the given sequence is computed using a filtering algorithm. The protein is then classified as belonging to the structural class having the most probable model. The secondary structure of the sequence is then analyzed using a "smoothing" algorithm that is optimal for that structural class model. For each residue position in the sequence, the smoother computes the probability that the residue is contained within each of the defined secondary structural elements of the model. This method has two important advantages: (1) the probability of each residue being in each of the modeled secondary structural elements is computed using the totality of the amino acid sequence, and (2) these probabilities are consistent with prior knowledge of realizable domain folds as encoded in each model. As an example of the method's utility, we present its application to flavodoxin, a prototypical alpha/beta protein having a central beta-sheet, and to thioredoxin, which belongs to a similar structural class but shares no significant sequence similarity.
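The classification step described here, computing the probability that each candidate class model generated the sequence with a filtering (forward) algorithm, can be sketched with a toy discrete HMM. The two-state models, two-symbol alphabet and all parameter values below are invented for illustration, not the paper's protein models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under a discrete HMM,
    via the scaled forward (filtering) recursion.
    pi: (S,) initial state probs; A: (S,S) transition matrix;
    B: (S,K) emission matrix; obs: symbol indices in 0..K-1."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                 # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # predict, then weight by emission
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def classify(obs, models):
    """Return the name of the class model most likely to have generated obs."""
    scores = {name: forward_loglik(obs, *m) for name, m in models.items()}
    return max(scores, key=scores.get)
```

With two mirror-image toy models, a sequence dominated by symbol 0 is assigned to the model whose states preferentially emit symbol 0, mirroring how the paper assigns a sequence to its most probable structural class.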
235.
A comparative analysis of methods for estimating stand biomass in the tropical montane rainforest of Hainan Island
Li Yide. Acta Ecologica Sinica, 1993, 13(4): 313-320
Through a comparative analysis of methods for estimating stand biomass in the tropical montane rainforest of Hainan Island, this paper shows that the volume-conversion method is unsuitable there: its estimates are generally 20%-40% higher than those obtained by the clear-cutting (harvest) method. Biomass regression models built from field measurements give good estimates for primary-forest stands; except for branch and leaf biomass, the regression estimates of trunk, bark and total above-ground biomass differ from the clear-cutting results by generally within ±10%, which is within the allowable error range. For regenerating stands of the tropical montane rainforest, however, the regression models perform poorly, and dedicated estimation models should be built. The mean-tree method has the advantage of a small workload, with errors below 16%, but attention must be paid to the species diversity and intensity of sampling, so it should be used with caution in practice. The paper also examines the plot area required to measure the biomass of primary tropical montane rainforest, proposes the concept of a biomass-area curve, and determines a minimum survey area of at least 2500 m².
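The biomass regression approach compared above can be illustrated with a generic log-linear allometric fit. The model form W = a·(D²H)^b, the function names, and the toy data are assumptions for illustration; the paper's actual regression models are not given in the abstract:

```python
import numpy as np

def fit_allometric(d2h, biomass):
    """Fit the allometric model  W = a * (D^2 H)^b  by ordinary least
    squares on log-transformed data (a common form for stand biomass
    regressions).  d2h: stem diameter^2 * height; returns (a, b)."""
    x = np.log(np.asarray(d2h, dtype=float))
    y = np.log(np.asarray(biomass, dtype=float))
    b, log_a = np.polyfit(x, y, 1)       # slope b, intercept log(a)
    return np.exp(log_a), b

def relative_error(predicted, observed):
    """Relative error (%) of an estimate against a reference value,
    e.g. a clear-cut (harvest) measurement."""
    return 100.0 * (predicted - observed) / observed
```

`relative_error` mirrors the paper's comparison criterion: a method is acceptable when its estimates fall within ±10% of the clear-cutting reference.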
236.
When there is a predictive biomarker, enrichment can focus the clinical trial on a benefiting subpopulation. We describe a two-stage enrichment design, in which the first stage is designed to efficiently estimate a threshold and the second stage is a “phase III-like” trial on the enriched population. The goal of this paper is to explore design issues: sample size in Stages 1 and 2, and re-estimation of the Stage 2 sample size following Stage 1. By treating these as separate trials, we can gain insight into how the predictive nature of the biomarker specifically impacts the sample size. We also show that failure to adequately estimate the threshold can have disastrous consequences in the second stage. While any bivariate model could be used, we assume a continuous outcome and continuous biomarker, described by a bivariate normal model. The correlation coefficient between the outcome and biomarker is the key to understanding the behavior of the design, both for predictive and prognostic biomarkers. Through a series of simulations we illustrate the impact of model misspecification, consequences of poor threshold estimation, and requisite sample sizes that depend on the predictive nature of the biomarker. Such insight should be helpful in understanding and designing enrichment trials.
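The Stage 1 task, estimating the biomarker threshold from a continuous biomarker and a correlated continuous outcome, can be sketched by simulation. This is an illustrative change-point sketch under assumed parameters (true threshold 0, treatment benefit 1.0, correlation `rho`), not the paper's design or estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stage1(n, rho, effect=1.0, rng=rng):
    """Simulate a Stage-1 cohort: biomarker X ~ N(0,1); outcome Y
    correlated with X (correlation rho), with an extra treatment benefit
    only for subjects above the true threshold 0 (a predictive biomarker)."""
    x = rng.standard_normal(n)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    y += effect * (x > 0)            # benefit confined to the X > 0 subgroup
    return x, y

def estimate_threshold(x, y, grid):
    """Grid-search change-point estimate: for each candidate cut c, fit
    y ~ 1 + x + 1{x > c} by least squares and keep the c with smallest RSS."""
    best_c, best_rss = None, np.inf
    for c in grid:
        X = np.column_stack([np.ones_like(x), x, (x > c).astype(float)])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = rss[0] if len(rss) else np.sum((y - X @ beta) ** 2)
        if rss < best_rss:
            best_c, best_rss = c, rss
    return best_c
```

With a reasonable Stage 1 sample size the estimated cut lands near the true threshold; shrinking `n` or `effect` degrades the estimate, which is the failure mode the paper warns carries over into Stage 2.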
237.
Leveraging information in aggregate data from external sources to improve estimation efficiency and prediction accuracy with smaller scale studies has drawn a great deal of attention in recent years. Yet, conventional methods often either ignore uncertainty in the external information or fail to account for the heterogeneity between internal and external studies. This article proposes an empirical likelihood-based framework to improve the estimation of the semiparametric transformation models by incorporating information about the t-year subgroup survival probability from external sources. The proposed estimation procedure incorporates an additional likelihood component to account for uncertainty in the external information and employs a density ratio model to characterize population heterogeneity. We establish the consistency and asymptotic normality of the proposed estimator and show that it is more efficient than the conventional pseudopartial likelihood estimator without combining information. Simulation studies show that the proposed estimator yields little bias and outperforms the conventional approach even in the presence of information uncertainty and heterogeneity. The proposed methodologies are illustrated with an analysis of a pancreatic cancer study.
238.
Two-part joint models for a longitudinal semicontinuous biomarker and a terminal event have recently been introduced, based on frequentist estimation. The biomarker distribution is decomposed into a probability of positive value and the expected value among positive values. Shared random effects can represent the association structure between the biomarker and the terminal event. The computational burden increases compared to standard joint models with a single regression model for the biomarker. In this context, the frequentist estimation implemented in the R package frailtypack can be challenging for complex models (i.e., a large number of parameters and a high dimension of the random effects). As an alternative, we propose a Bayesian estimation of two-part joint models based on the Integrated Nested Laplace Approximation (INLA) algorithm to alleviate the computational burden and fit more complex models. Our simulation studies confirm that INLA provides accurate approximations of posterior estimates and reduces the computation time and variability of estimates compared to frailtypack in the situations considered. We contrast the Bayesian and frequentist approaches in the analysis of two randomized cancer clinical trials (GERCOR and PRIME studies), where INLA shows reduced variability for the association between the biomarker and the risk of event. Moreover, the Bayesian approach was able to characterize subgroups of patients associated with different responses to treatment in the PRIME study. Our study suggests that the Bayesian approach using the INLA algorithm makes it possible to fit complex joint models that might be of interest in a wide range of clinical applications.
239.
With big data becoming widely available in healthcare, machine learning algorithms such as random forest (RF), which ignores time-to-event information, and random survival forest (RSF), which handles right-censored data, are used for individual risk prediction as alternatives to the Cox proportional hazards (Cox-PH) model. We aimed to systematically compare RF and RSF with Cox-PH. RSF with three split criteria [log-rank (RSF-LR), log-rank score (RSF-LRS), maximally selected rank statistics (RSF-MSR)], RF, Cox-PH, and Cox-PH with splines (Cox-S) were evaluated through a simulation study based on real data. One hundred eighty scenarios were investigated, assuming different associations between the predictors and the outcome (linear/linear and interactions/nonlinear/nonlinear and interactions), training sample sizes (500/1000/5000), censoring rates (50%/75%/93%), hazard functions (increasing/decreasing/constant), and numbers of predictors (seven, or 15 including noise variables). Performance was evaluated with the time-dependent area under the curve and the integrated Brier score. In all scenarios, RF had the worst performance. In scenarios with a low number of events (≤70), Cox-PH was at least noninferior to RSF, whereas under the linearity assumption it outperformed RSF. In the presence of interactions, RSF performed better than Cox-PH as the number of events increased, whereas Cox-S reached at least similar performance to RSF under nonlinear effects. RSF-LRS performed slightly worse than RSF-LR and RSF-MSR when noise variables and interaction effects were included. When applied to real data, models incorporating survival time performed better. Although RSF algorithms are a promising alternative to conventional Cox-PH as data complexity increases, they require a higher number of events for training. In time-to-event analysis, algorithms that consider survival time should be used.
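A common discrimination metric for comparing such risk-prediction models on right-censored data (alongside the time-dependent AUC and integrated Brier score used above) is Harrell's concordance index. A minimal sketch, not the paper's evaluation code:

```python
def concordance_index(time, event, risk):
    """Harrell's C-index for right-censored data.  A pair (i, j) is usable
    when the subject with the shorter follow-up time had an observed event;
    the pair is concordant when that subject also has the higher predicted
    risk.  Ties in predicted risk count as 0.5."""
    n = len(time)
    concordant = usable = 0.0
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / usable
```

A C-index of 1.0 means risk ordering perfectly matches event ordering, 0.5 is chance level; censored subjects contribute only as the later member of a pair.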
240.
We study bias-reduced estimators of exponentially transformed parameters in generalized linear models (GLMs) and show how they can be used to obtain bias-reduced conditional (or unconditional) odds ratios in matched case-control studies. Two options are considered and compared: the explicit approach and the implicit approach. The implicit approach is based on the modified score function, where bias-reduced estimates are obtained by using iterative procedures to solve the modified score equations. The explicit approach is shown to be a one-step approximation of this iterative procedure. To apply these approaches to the conditional analysis of matched case-control studies, with potentially unmatched confounding and with several exposures, we utilize the relation between the conditional likelihood and the likelihood of the unconditional logit binomial GLM for matched pairs, and the Cox partial likelihood for matched sets with appropriately set-up data. The properties of the estimators are evaluated using a large Monte Carlo simulation study, and an illustration with a real dataset is presented. Researchers reporting results on the exponentiated scale should use bias-reduced estimators, since otherwise the effects can be under- or overestimated, and the magnitude of the bias is especially large in studies with smaller sample sizes.
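The implicit approach, iterating until a modified score is zero, can be sketched for the logistic GLM using Firth's well-known bias-reduction adjustment, in which the score is corrected by hat-matrix leverages. This is a generic Firth sketch for an unconditional logistic model, not the paper's matched-set estimators:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Bias-reduced logistic regression via Firth's modified score
    (the 'implicit approach': iterate to a root of the modified score).
    X: (n, p) design matrix including an intercept column; y: 0/1 outcomes.
    The modified score is  X' (y - pi + h (1/2 - pi)),  where h holds the
    leverages of the weighted hat matrix."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = pi * (1.0 - pi)                      # IRLS weights
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # leverages h_i = w_i * x_i' (X'WX)^{-1} x_i
        h = W * np.einsum("ij,jk,ik->i", X, XtWX_inv, X)
        score = X.T @ (y - pi + h * (0.5 - pi))  # Firth-modified score
        step = XtWX_inv @ score                  # Fisher-scoring step
        beta = beta + step
        if np.max(np.abs(step)) < tol:
            break
    return beta
```

One useful property (beyond bias reduction on the exponentiated scale): unlike the MLE, the Firth estimate stays finite under separated data, as in the tiny perfectly separated example below.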