72.
In this paper, a Bayesian method of inference is developed for the zero-modified Poisson (ZMP) regression model. This model is flexible for analyzing count data and requires no prior information about inflation or deflation of zeros in the sample. A general class of prior densities based on an information matrix is considered for the model parameters. A sensitivity study based on the Kullback–Leibler divergence is performed to detect influential cases that could change the results. Simulation studies illustrate the performance of the developed methodology. Two real datasets on leptospirosis notifications in Bahia State, Brazil, are analyzed with the proposed ZMP methodology.
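As a rough illustration of the likelihood such a regression is built on, the sketch below codes the zero-modified Poisson log-probability with a log link on the Poisson rate and fits a toy dataset by maximum a posteriori under a weak normal prior. The prior, the data, and the variable names (beta, p, prior_sd) are illustrative assumptions; the paper's information-matrix prior class and full Bayesian analysis are not reproduced here.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

def zmp_logpmf(y, lam, p):
    """log P(Y=y) under the zero-modified Poisson:
    P(0) = p + (1-p)exp(-lam); P(y) = (1-p) Poisson(y; lam) for y >= 1.
    Negative p (within its bound) corresponds to zero deflation."""
    logp0 = np.log(p + (1.0 - p) * np.exp(-lam))
    logpy = np.log1p(-p) - lam + y * np.log(lam) - gammaln(y + 1.0)
    return np.where(y == 0, logp0, logpy)

def neg_log_posterior(theta, X, y, prior_sd=10.0):
    beta, p = theta[:-1], theta[-1]
    lam = np.exp(X @ beta)                                # log link for the Poisson rate
    if p >= 1.0 or np.any(p + (1.0 - p) * np.exp(-lam) <= 0.0):
        return np.inf                                     # outside the ZMP parameter space
    loglik = zmp_logpmf(y, lam, p).sum()
    logprior = -0.5 * np.sum(beta ** 2) / prior_sd ** 2   # weak normal prior (illustrative)
    return -(loglik + logprior)

# Toy data with extra zeros, then a MAP fit as a stand-in for a full posterior analysis.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
y[rng.random(200) < 0.2] = 0                              # inflate zeros
fit = minimize(neg_log_posterior, x0=np.array([0.0, 0.0, 0.1]),
               args=(X, y), method="Nelder-Mead")
print(fit.x)                                              # [beta0, beta1, p]
```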
73.
One of the main goals of spatial epidemiology is to study the geographical pattern of disease risk. For this purpose, the convolution model, composed of correlated and uncorrelated components, is often used. However, one of the two components may be predominant in some regions. To investigate the predominance of the correlated (CH) or uncorrelated heterogeneity (UH) component in multiscale data, we propose four spatial mixture multiscale models that mix spatially varying probability weights of CH and UH. The first model assumes no linkage between the different scales, so independent mixture convolution models are used at each scale. The second model introduces linkage between the finer and coarser scales through a shared uncorrelated component of the mixture convolution model. The third model is similar to the second, but the linkage between scales is introduced through the correlated component. Finally, the fourth model accommodates a scale effect by sharing both CH and UH simultaneously. Applying these models to real and simulated data, we found that the fourth model performs best, followed by the second.
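The sketch below illustrates the basic mixture convolution idea on a toy line graph: each area's log relative risk mixes an intrinsic CAR (correlated) effect and an independent (uncorrelated) effect through a spatially varying weight. The adjacency matrix, weights, and variances are assumed for illustration; none of the four multiscale variants or their shared components are implemented here.

```python
import numpy as np

def icar_sample(W, tau=1.0, rng=None):
    """One sum-to-zero draw from an intrinsic CAR prior via the generalized
    inverse of its precision -- fine for a tiny illustrative lattice only."""
    rng = rng if rng is not None else np.random.default_rng()
    Q = tau * (np.diag(W.sum(axis=1)) - W)          # ICAR precision (singular)
    u = rng.multivariate_normal(np.zeros(len(W)), np.linalg.pinv(Q))
    return u - u.mean()

rng = np.random.default_rng(1)
n = 4                                               # four areas on a line graph
W = np.diag(np.ones(n - 1), 1); W = W + W.T         # adjacency matrix
u = icar_sample(W, tau=2.0, rng=rng)                # correlated heterogeneity (CH)
v = rng.normal(scale=0.5, size=n)                   # uncorrelated heterogeneity (UH)
p = rng.beta(2.0, 2.0, size=n)                      # spatially varying mixture weights
log_risk = 0.1 + p * u + (1.0 - p) * v              # mixture convolution predictor
print(np.exp(log_risk))                             # area-level relative risks
```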
74.
A method is proposed for identifying clusters of individuals that show similar patterns when observed repeatedly. We consider linear mixed models, which are widely used for modeling longitudinal data. In contrast to the classical assumption of a normal distribution for the random effects, a finite mixture of normal distributions is assumed. Typically, the number of mixture components is unknown and has to be chosen, ideally by data-driven tools. For this purpose, an EM-algorithm-based approach is considered that uses a penalized normal mixture as the random-effects distribution. The penalty term shrinks the pairwise distances between cluster centers, based on the group lasso and the fused lasso. The effect is that individuals with similar time trends are merged into the same cluster. The strength of regularization is determined by a single penalization parameter, and a new model choice criterion is proposed for selecting its optimal value.
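To make the fusion idea concrete, the sketch below minimizes a toy penalized objective in which a grouped L2 penalty on all pairwise differences of cluster centers pulls nearby centers together as the tuning parameter grows. The fit term is a stand-in for the mixed-model M-step, and the penalty form and names are assumptions, not the authors' exact criterion.

```python
import numpy as np
from scipy.optimize import minimize

def penalized_objective(mu_flat, centers0, lam, K, q):
    """Toy M-step surrogate: squared distance to unpenalized centre estimates
    plus a grouped L2 (fusion) penalty on all pairwise centre differences."""
    mu = mu_flat.reshape(K, q)
    fit = np.sum((mu - centers0) ** 2)
    fusion = sum(np.linalg.norm(mu[k] - mu[l])
                 for k in range(K) for l in range(k + 1, K))
    return fit + lam * fusion

K, q = 3, 2                                          # three clusters, two-dimensional centres
centers0 = np.array([[0.0, 0.0], [0.2, 0.1], [2.0, 2.0]])
for lam in (0.0, 1.0, 10.0):
    res = minimize(penalized_objective, centers0.ravel(),
                   args=(centers0, lam, K, q), method="Nelder-Mead")
    print(f"lambda={lam}:", res.x.reshape(K, q).round(2).tolist())
    # as lam grows, the two nearby centres fuse into a single cluster centre
```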
76.
Numerous Bayesian methods for phenotype prediction and genomic breeding value estimation based on multilocus association models have been proposed. Computationally, these methods have relied either on Markov chain Monte Carlo or on faster maximum a posteriori estimation. The demand for more accurate and more efficient estimation has led to the rapid emergence of workable methods, unfortunately at the expense of well-defined principles for Bayesian model building. In this article we go back to basics and build a Bayesian multilocus association model for quantitative and binary traits with a carefully defined hierarchical parameterization of Student's t and Laplace priors. In this treatment we consider alternative model structures that use indicator variables and polygenic terms. We make the most of the conjugate analysis enabled by the hierarchical formulation of the prior densities by deriving the fully conditional posterior densities of the parameters and using the resulting known distributions to build fast generalized expectation-maximization estimation algorithms.
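As a hedged illustration of the conjugate hierarchy behind a Laplace prior (a normal whose variance is exponentially distributed), the sketch below runs a crude coordinate-wise generalized EM pass on toy genotype data. The updates follow the standard scale-mixture conditionals, but the indicator variables, polygenic terms, and the authors' full algorithm are not reproduced.

```python
import numpy as np

def gem_laplace(X, y, lam=1.0, n_iter=200):
    """Crude coordinate-wise generalized EM for beta_j | s_j ~ N(0, s_j),
    s_j ~ Exp(lam^2 / 2), i.e. a Laplace marginal prior on each effect."""
    n, p = X.shape
    beta, sigma2 = np.zeros(p), np.var(y)
    xtx = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        # E-step: expected inverse local variance given beta_j (inverse-Gaussian
        # conditional), E[1/s_j] = lam / |beta_j|
        inv_s = lam / np.maximum(np.abs(beta), 1e-8)
        # M-step: conditional-posterior mode of each effect, one coordinate at a time
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = (X[:, j] @ r) / (xtx[j] + sigma2 * inv_s[j])
        sigma2 = np.sum((y - X @ beta) ** 2) / n      # residual variance update
    return beta, sigma2

rng = np.random.default_rng(2)
X = rng.choice([0.0, 1.0, 2.0], size=(100, 50))       # toy SNP genotype codes
b_true = np.zeros(50); b_true[:3] = [1.0, -0.8, 0.5]  # three causal loci
y = X @ b_true + rng.normal(scale=0.5, size=100)
beta_hat, _ = gem_laplace(X - X.mean(axis=0), y - y.mean())
print(beta_hat[:5].round(2))                          # leading effects shrink toward the truth
```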
77.
Random effects models are widely used in population pharmacokinetics and dose-finding studies. However, when more than one observation is taken per patient, the presence of correlated observations (due to shared random effects and possibly residual serial correlation) usually makes the explicit determination of optimal designs difficult. In this article, we introduce a class of multiplicative algorithms that can handle correlated data and thus allow numerical calculation of optimal experimental designs in such situations. In particular, we demonstrate its application to a concrete example of a crossover dose-finding trial, as well as to a typical population pharmacokinetics example. Additionally, we derive a lower bound for the efficiency of any given design in this context, which allows us to monitor the progress of the algorithm on the one hand, and to assess the efficiency of a given design without knowing the optimal one on the other. Finally, we extend the methodology so that it can determine optimal designs when there are requirements on the minimal number of treatments for several (in some cases all) experimental conditions.
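For the uncorrelated special case, the classical multiplicative weight update for D-optimal approximate designs, together with the equivalence-theorem lower bound on D-efficiency that can be used to monitor progress, can be sketched as below. The candidate grid and quadratic model are assumptions; the paper's extension to correlated observations and minimal-replication constraints is not shown.

```python
import numpy as np

def d_optimal_multiplicative(F, n_iter=2000, tol=1e-4):
    """Multiplicative weight updates for a D-optimal approximate design on a
    fixed candidate grid; F has one row f(x_i) per candidate point."""
    n, p = F.shape
    w = np.full(n, 1.0 / n)                            # start from the uniform design
    eff_bound = 0.0
    for _ in range(n_iter):
        M = F.T @ (w[:, None] * F)                     # information matrix of the design
        d = np.einsum("ij,jk,ik->i", F, np.linalg.inv(M), F)   # sensitivities d(x_i)
        w = w * d / p                                  # update; weights stay on the simplex
        eff_bound = p / d.max()                        # lower bound on the D-efficiency
        if eff_bound > 1.0 - tol:
            break
    return w, eff_bound

x = np.linspace(-1.0, 1.0, 41)                         # candidate doses on [-1, 1]
F = np.column_stack([np.ones_like(x), x, x ** 2])      # quadratic response model
w, eff = d_optimal_multiplicative(F)
print(x[w > 0.01], w[w > 0.01].round(3), round(eff, 4))
```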
78.
We performed a synthetic analysis of the Harvard Forest net ecosystem exchange of CO2 (NEE) time series and a simple ecosystem carbon flux model, the simplified Photosynthesis and Evapo-Transpiration model (SIPNET). SIPNET runs at a half-daily time step and has two vegetation carbon pools, a single aggregated soil carbon pool, and a simple soil moisture sub-model. We used a stochastic Bayesian parameter estimation technique that provided posterior distributions of the model parameters, conditioned on the observed fluxes and the model equations. In this analysis, we estimated the values of all quantities that govern model behavior, including both rate constants and initial conditions for the carbon pools. The purpose of this analysis was not to calibrate the model to predict future fluxes, but rather to understand how much information about process controls can be derived directly from the NEE observations. A wavelet decomposition enabled us to assess model performance at multiple time scales, from diurnal to decadal. The model parameters are most highly constrained by eddy flux data at daily to seasonal time scales, suggesting that this approach is not useful for calculating annual integrals. However, the ability of the model to fit both the diurnal and seasonal variability patterns in the data simultaneously, using a single parameter set, indicates the effectiveness of this parameter estimation method. Our results quantify the extent to which the eddy covariance data contain information about the ecosystem process parameters represented in the model, and suggest several next steps in model development and observations for improved synthesis of models with flux observations.
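Below is a minimal sketch of the kind of stochastic Bayesian estimation described: a random-walk Metropolis sampler conditioned on simulated half-daily fluxes from a toy light- and temperature-driven NEE model that stands in for SIPNET. Parameter names, priors, and proposal scales are all illustrative assumptions.

```python
import numpy as np

def toy_nee(params, light, temp):
    """Toy half-daily NEE: saturating light response for uptake, Q10 respiration."""
    a_max, q10, r_base = params
    gpp = a_max * light / (light + 200.0)
    resp = r_base * q10 ** ((temp - 10.0) / 10.0)
    return resp - gpp                                  # positive NEE = net release

def log_post(params, light, temp, obs, sd=0.5):
    if np.any(np.asarray(params) <= 0.0):
        return -np.inf                                 # flat positivity prior (illustrative)
    resid = obs - toy_nee(params, light, temp)
    return -0.5 * np.sum(resid ** 2) / sd ** 2

rng = np.random.default_rng(3)
t = np.arange(120)                                     # 60 days of half-daily steps
light = np.where(t % 2 == 0, 800.0, 0.0)               # day / night half-days
temp = 15.0 + 8.0 * np.sin(2.0 * np.pi * t / 120.0)
obs = toy_nee([10.0, 2.0, 3.0], light, temp) + rng.normal(0.0, 0.5, t.size)

theta = np.array([5.0, 1.5, 1.0])                      # a_max, Q10, base respiration
lp, chain = log_post(theta, light, temp, obs), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.2, 0.05, 0.05])  # random-walk proposal
    lp_prop = log_post(prop, light, temp, obs)
    if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print(np.mean(chain[2500:], axis=0).round(2))          # posterior means after burn-in
```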
79.
Due to concerns about data quality, McKechnie, Coe, Gerson, and Wolf (2016) questioned the conclusions of our study (Khaliq et al., 2015) published in this journal. Here, we argue that most of the questioned data points are in fact useful for macrophysiological analyses, mostly because the vast majority of data are explicitly reported in the peer-reviewed physiological literature. Furthermore, we show that our conclusions remain largely robust irrespective of the data inclusion criterion. While we think that constructive debates about the adequate use of primary data in meta-studies as well as more transparency in data inclusion criteria are indeed useful, we also emphasize that data suitability should be evaluated in the light of the scope and scale of the study in which they are used. We hope that this discussion will not discourage the exchange between disciplines such as biogeography and physiology, as this integration is needed to address some of the most urgent scientific challenges.
80.
Kong D, Gentz R, Zhang J. Cytotechnology 1998, 26(3): 227–236
A general approach is described for implementing a networked, multi-unit, computer-integrated control system. The use of data acquisition hardware and graphical programming tools avoids tedious low-level programming while preserving power and flexibility. One application of the control system, the control of a mammalian cell perfusion culture based on the concentration of a key nutrient, glucose, is demonstrated. The control system offers a customized user interface for all process control parameters and allows the flexibility for continued improvement and the implementation of new, tailored functions. Temperature, pH, dissolved oxygen, and glucose levels were accurately controlled.
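Below is a minimal sketch of the kind of nutrient-based feedback loop described: the measured glucose concentration is compared with a set point and the perfusion rate is adjusted by a proportional-integral rule in a toy simulation. The hardware and network layers, the graphical-programming implementation, and the authors' actual control law are not shown; all constants are assumptions.

```python
import numpy as np

def simulate_perfusion(hours=100.0, dt=0.1, setpoint=2.0, kp=0.5, ki=0.05):
    """Toy perfusion culture: glucose balance dG/dt = D*(G_feed - G) - uptake,
    with the dilution (perfusion) rate D set by a PI controller on glucose."""
    glucose, feed_glc = 2.0, 6.0                       # g/L in reactor and in fresh medium
    cells, integral = 1.0, 0.0                         # relative biomass, PI integral term
    history = []
    for step in range(int(hours / dt)):
        cells *= 1.0 + 0.02 * dt                       # slow growth raises glucose demand
        uptake = 0.15 * cells * glucose / (0.5 + glucose)   # Monod-type consumption
        error = setpoint - glucose
        integral += error * dt
        rate = float(np.clip(0.2 + kp * error + ki * integral, 0.0, 2.0))  # perfusion rate (1/h)
        glucose += dt * (rate * (feed_glc - glucose) - uptake)
        history.append((step * dt, glucose, rate))
    return np.array(history)

log = simulate_perfusion()
print(log[-1].round(3))                                # time (h), glucose near set point, rate
```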