1,161 query results in total (search time: 31 ms); showing items 941–950.
941.
The method of generalized least squares (GLS) is used to assess the variance function for isothermal titration calorimetry (ITC) data collected for the 1:1 complexation of Ba²⁺ with 18-crown-6 ether. In the GLS method, the least squares (LS) residuals from the data fit are themselves fitted to a variance function, with iterative adjustment of the weighting function in the data analysis to produce consistency. The data are treated in a pooled fashion, providing 321 fitted residuals from 35 data sets in the final analysis. Heteroscedasticity (nonconstant variance) is clearly indicated. Data error terms proportional to qᵢ and qᵢ/v are well defined statistically, where qᵢ is the heat from the ith injection of titrant and v is the injected volume. The statistical significance of the variance function parameters is confirmed through Monte Carlo calculations that mimic the actual data set. For the data in question, which fall mostly in the range qᵢ = 100–2000 µcal, the contributions to the data variance from the terms in qᵢ² typically exceed the background constant term for qᵢ > 300 µcal and v < 10 µL. Conversely, this means that in reactions with qᵢ much less than this, heteroscedasticity is not a significant problem. Accordingly, in such cases the standard unweighted fitting procedures provide reliable results for the key parameters, K and ΔH°, and their statistical errors. These results also support an important earlier finding: in most ITC work on 1:1 binding processes, the optimal number of injections is 7–10, which is a factor of 3 smaller than the current norm. For high-q reactions, where weighting is needed for optimal LS analysis, tips are given for using the weighting option in the commercial software commonly employed to process ITC data.
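A minimal sketch of the iterative GLS reweighting loop described above, written against a toy 1:1 binding curve and an assumed variance function of the form s0² + (a·qᵢ)² + (b·qᵢ/v)². The model, the variance-function form, and the simulated numbers are illustrative assumptions, not the paper's isotherm or fitted parameters.

```python
# Illustrative GLS iteration: fit the model with current weights, regress the
# squared residuals on an assumed variance function, update the weights, repeat.
import numpy as np
from scipy.optimize import curve_fit

def binding_heat(x, K, dH):
    """Toy 1:1 binding heat curve; stands in for the real ITC isotherm."""
    return dH * K * x / (1.0 + K * x)

def variance_fn(q, v, s0, a, b):
    """Assumed variance function: constant term plus terms in q and q/v."""
    return s0**2 + (a * q)**2 + (b * q / v)**2

def gls_fit(x, q, v, n_iter=5):
    sigma = np.ones_like(q)                                  # start unweighted (OLS)
    for _ in range(n_iter):
        popt, _ = curve_fit(binding_heat, x, q, p0=[30.0, 700.0], sigma=sigma)
        resid = q - binding_heat(x, *popt)
        vpar, _ = curve_fit(lambda qq, s0, a, b: variance_fn(qq, v, s0, a, b),
                            q, resid**2, p0=[resid.std(), 0.01, 0.01])
        sigma = np.sqrt(variance_fn(q, v, *vpar))            # new weights for the next pass
    return popt, vpar

# Simulated demo data: molar ratio x, heats q (µcal), injection volumes v (µL).
rng = np.random.default_rng(0)
x = np.linspace(0.05, 2.0, 35)
v = np.full_like(x, 10.0)
q = binding_heat(x, 50.0, 800.0) * (1.0 + 0.03 * rng.standard_normal(x.size))
(K_hat, dH_hat), var_params = gls_fit(x, q, v)
```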
942.
Rivest LP, Daigle G. Biometrics 2004, 60(1): 100–107.
The robust design is a method for implementing a mark-recapture experiment featuring a nested sampling structure. The first level consists of primary sampling sessions; the population experiences mortality and immigration between primary sessions, so open population models apply at this level. The second level of sampling consists of a short mark-recapture study within each primary session; closed population models are used at this stage to estimate the animal abundance at each primary session. This article suggests a loglinear technique to fit the robust design. Loglinear models for the analysis of mark-recapture data from closed and open populations are first reviewed. These two types of models are then combined to analyze the data from a robust design. The proposed loglinear approach to the robust design allows incorporating parameters for heterogeneity in the capture probabilities of the units within each primary session. Temporary emigration out of the study area can also be accounted for in the loglinear framework. The analysis is relatively simple; it relies on a large Poisson regression with the vector of frequencies of the capture histories as the dependent variable. An example concerned with the estimation of abundance and survival of the red-backed vole in an area of southeastern Québec is presented.
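As an illustration of the loglinear idea in its simplest closed-population form, the sketch below fits a Poisson regression to capture-history frequencies for a single session with three capture occasions (main effects only) and recovers an abundance estimate by predicting the count of the unobservable all-zero history. The frequencies are made-up numbers; the paper's full robust-design model combines several such sessions with open-population terms.

```python
# Loglinear capture-recapture for one closed session: Poisson regression on
# the frequencies of the observable capture histories, then predict the
# frequency of the unobservable history (0,0,0) to estimate abundance.
from itertools import product
import numpy as np
import statsmodels.api as sm

histories = [h for h in product([0, 1], repeat=3) if any(h)]   # 7 observable histories
counts = np.array([14.0, 12.0, 7.0, 18.0, 9.0, 11.0, 5.0])     # illustrative frequencies
X = sm.add_constant(np.array(histories, dtype=float))          # intercept + occasion effects

fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

x_unseen = np.array([[1.0, 0.0, 0.0, 0.0]])                    # design row for history (0,0,0)
n_unseen = fit.predict(x_unseen)[0]                            # expected number never captured
N_hat = counts.sum() + n_unseen                                # abundance estimate for the session
```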
943.
944.
The positive and negative predictive values are standard ways of quantifying predictive accuracy when both the outcome and the prognostic factor are binary. Methods for comparing the predictive values of two or more binary factors have been discussed previously (Leisenring et al., 2000, Biometrics 56, 345-351). We propose extending the standard definitions of the predictive values to accommodate prognostic factors that are measured on a continuous scale and suggest a corresponding graphical method to summarize predictive accuracy. Drawing on the work of Leisenring et al., we make use of a marginal regression framework and discuss methods for estimating these predictive value functions and their differences within this framework. The methods presented in this paper have the potential to be useful in a number of areas, including the design of clinical trials and health policy analysis.
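A small sketch of the underlying idea: with a continuous marker Y, the positive and negative predictive values become functions of a threshold c, PPV(c) = P(D = 1 | Y ≥ c) and NPV(c) = P(D = 0 | Y < c), which can be plotted against c. The plain empirical version below uses simulated data and is not the marginal-regression estimator developed in the paper.

```python
# Empirical predictive value curves for a continuous prognostic factor.
import numpy as np

rng = np.random.default_rng(1)
d = rng.binomial(1, 0.3, size=500)            # binary outcome (1 = event)
y = rng.normal(loc=1.0 * d, scale=1.0)        # continuous marker, higher when D = 1

thresholds = np.quantile(y, np.linspace(0.05, 0.95, 19))
ppv = np.array([d[y >= c].mean() for c in thresholds])       # P(D=1 | Y >= c)
npv = np.array([1.0 - d[y < c].mean() for c in thresholds])  # P(D=0 | Y < c)
```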
945.
Onufriev A, Bashford D, Case DA. Proteins 2004, 55(2): 383–394.
Implicit solvation models provide, for many applications, a reasonably accurate and computationally effective way to describe the electrostatics of aqueous solvation. Here, a popular analytical Generalized Born (GB) solvation model is modified to improve its accuracy in calculating the solvent polarization part of free energy changes in large-scale conformational transitions, such as protein folding. In contrast to an earlier GB model (implemented in the AMBER-6 program), the improved version does not overstabilize the native structures relative to the finite-difference Poisson-Boltzmann continuum treatment. In addition to improving the energy balance between folded and unfolded conformers, the algorithm (available in the AMBER-7 and NAB molecular modeling packages) is shown to perform well in more than 50 ns of native-state molecular dynamics (MD) simulations of thioredoxin, protein-A, and ubiquitin, as well as in a simulation of Barnase/Barstar complex formation. For thioredoxin, various combinations of input parameters have been explored, such as the underlying gas-phase force fields and the atomic radii. The best performance is achieved with a previously proposed modification to the torsional potential in the Amber ff99 force field, which yields stable native trajectories for all of the tested proteins, with backbone root-mean-square deviations from the native structures being approximately 1.5 Å after 6 ns of simulation time. The structure of the Barnase/Barstar complex is regenerated, starting from an unbound state, to within 1.9 Å relative to the crystal structure of the complex.
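For reference, the sketch below evaluates the standard pairwise Generalized Born polarization energy (the Still-type expression). The effective Born radii, whose calculation is what GB variants such as the one described here refine, are treated as given inputs; the coordinates, charges, and radii in the usage line are arbitrary illustrative values.

```python
# Pairwise Generalized Born polarization energy (Still-type formula):
# dG = -0.5 * (1/eps_in - 1/eps_out) * sum_{i,j} q_i q_j / f_GB(r_ij),
# with f_GB = sqrt(r_ij^2 + R_i R_j * exp(-r_ij^2 / (4 R_i R_j))).
import numpy as np

def gb_polarization_energy(coords, charges, born_radii, eps_in=1.0, eps_out=78.5):
    coords = np.asarray(coords, dtype=float)        # Å
    q = np.asarray(charges, dtype=float)            # elementary charges
    R = np.asarray(born_radii, dtype=float)         # effective Born radii, Å
    diff = coords[:, None, :] - coords[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    RR = np.outer(R, R)
    f_gb = np.sqrt(r2 + RR * np.exp(-r2 / (4.0 * RR)))
    prefac = -0.5 * 332.06 * (1.0 / eps_in - 1.0 / eps_out)   # kcal·Å/(mol·e²)
    return prefac * (np.outer(q, q) / f_gb).sum()             # kcal/mol, self terms included

energy = gb_polarization_energy(coords=[[0.0, 0.0, 0.0], [0.0, 0.0, 3.8]],
                                charges=[0.5, -0.5], born_radii=[1.5, 1.8])
```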
946.
We have characterized a novel monoclonal antibody, Tau-66, raised against recombinant human tau. Immunohistochemistry using Tau-66 reveals a somatic-neuronal stain in the superior temporal gyrus (STG) that is more intense in Alzheimer's disease (AD) brain than in normal brain. In hippocampus, Tau-66 yields a pattern similar to STG, except that neurofibrillary lesions are preferentially stained if present. In mild AD cases, Tau-66 stains plaques lacking obvious dystrophic neurites (termed herein 'diffuse reticulated plaques') in STG and the hippocampus. Enzyme-linked immunosorbent assay (ELISA) analysis reveals that Tau-66 is specific for tau, as there is no cross-reactivity with MAP2, tubulin, Aβ(1–40), or Aβ(1–42), although Tau-66 fails to react with tau or any other polypeptide on western blots. The epitope of Tau-66, as assessed by ELISA testing of tau deletion mutants, appears discontinuous, requiring residues 155–244 and 305–314. Tau-66 reactivity exhibits buffer and temperature sensitivity in an ELISA format and is readily abolished by SDS treatment. Taken together, these lines of evidence indicate that the Tau-66 epitope is conformation-dependent, perhaps involving a close interaction of the proline-rich and the third microtubule-binding regions. This is the first indication that tau can undergo this novel folding event and that this conformation of tau is involved in AD pathology.
947.
Whether the aim is to diagnose individuals or estimate prevalence, many epidemiological studies have demonstrated the successful use of tests on pooled sera. These tests detect whether at least one sample in the pool is positive. Although originally designed to reduce diagnostic costs, testing pools also lowers false positive and negative rates in low prevalence settings and yields more precise prevalence estimates. Current methods are aimed at estimating the average population risk from diagnostic tests on pools. In this article, we extend the original class of risk estimators to adjust for covariates recorded on individual pool members. Maximum likelihood theory provides a flexible estimation method that handles different covariate values in the pool, different pool sizes, and errors in test results. In special cases, software for generalized linear models can be used. Pool design has a strong impact on precision and cost efficiency, with covariate-homogeneous pools carrying the largest amount of information. We perform joint pool and sample size calculations using information from individual contributors to the pool and show that a good design can severely reduce cost and yet increase precision. The methods are illustrated using data from a Kenyan surveillance study of HIV. Compared to individual testing, age-homogeneous, optimal-sized pools of average size seven reduce cost to 44% of the original price with virtually no loss in precision.
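A compact sketch of the covariate-adjusted pooled-testing likelihood: assuming a perfect assay, a pool is positive exactly when at least one member is positive, so P(pool positive) = 1 - ∏ⱼ(1 - pⱼ), with member-level risks pⱼ = expit(xⱼᵀβ). The simulated design (200 pools of size seven, an intercept plus an age covariate) and the omission of test error are illustrative simplifications of the paper's more general framework.

```python
# Maximum likelihood for individual-level risk coefficients from pool-level
# test results, assuming a perfect assay (no false positives or negatives).
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def neg_log_lik(beta, pools, results):
    nll = 0.0
    for X_k, z_k in zip(pools, results):
        p = expit(X_k @ beta)                          # member-level risks
        prob_pos = 1.0 - np.prod(1.0 - p)              # pool positive iff any member positive
        prob_pos = np.clip(prob_pos, 1e-12, 1.0 - 1e-12)
        nll -= z_k * np.log(prob_pos) + (1 - z_k) * np.log(1.0 - prob_pos)
    return nll

# Simulated data: 200 pools of size 7, covariates = intercept and age.
rng = np.random.default_rng(2)
beta_true = np.array([-3.0, 0.05])
pools, results = [], []
for _ in range(200):
    X_k = np.column_stack([np.ones(7), rng.uniform(15.0, 50.0, 7)])
    y_k = rng.binomial(1, expit(X_k @ beta_true))      # latent individual statuses
    pools.append(X_k)
    results.append(int(y_k.any()))                     # observed pool result

beta_hat = minimize(neg_log_lik, x0=np.zeros(2), args=(pools, results)).x
```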
948.
Coull BA, Agresti A. Biometrics 2000, 56(1): 73–80.
The multivariate binomial logit-normal distribution is a mixture distribution for which (i) conditional on a set of success probabilities and sample size indices, the vector of counts consists of independent binomial variates, and (ii) the vector of logits of the success probabilities has a multivariate normal distribution. We use this distribution to model multivariate binomial-type responses using a vector of random effects. The vector of logits of the parameters has a mean that is a linear function of explanatory variables and has an unspecified or partly specified covariance matrix. The model generalizes, and provides greater flexibility than, the univariate model that uses a normal random effect to account for positive correlations in clustered data. The multivariate model is useful when different elements of the response vector refer to different characteristics, each of which may naturally have its own random effect. It is also useful for repeated binary measurement of a single response when there is a nonexchangeable association structure, such as one often expects with longitudinal data or when negative association exists for at least one pair of responses. We apply the model to an influenza study with repeated responses in which some pairs are negatively associated and to a developmental toxicity study with continuation-ratio logits applied to an ordinal response with clustered observations.
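A short sketch of the mixture structure itself: cluster-level logits are drawn from a multivariate normal (whose off-diagonal covariances may be negative, allowing negative association), and the counts are conditionally independent binomials. The dimensions, means, and covariance matrix below are illustrative.

```python
# Simulating from a multivariate binomial logit-normal distribution.
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(3)
mu = np.array([-0.5, 0.0, 0.5])                  # mean logits for three responses
Sigma = np.array([[1.0,  0.4, -0.3],
                  [0.4,  1.0,  0.2],
                  [-0.3, 0.2,  1.0]])            # covariance of the random effects
n_trials = np.array([10, 10, 10])                # binomial sample-size indices

logits = rng.multivariate_normal(mu, Sigma, size=1000)   # one logit vector per cluster
counts = rng.binomial(n_trials, expit(logits))           # conditionally independent binomials
```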
949.
Dallas MJ, Rao PV. Biometrics 2000, 56(1): 154–159.
We introduce two test procedures for comparing two survival distributions on the basis of randomly right-censored data consisting of both paired and unpaired observations. Our procedures are based on generalizations of a pooled rank test statistic previously proposed for uncensored data. One generalization adapts the Prentice-Wilcoxon score, while the other adapts the Akritas score. The use of these particular scoring systems in pooled rank tests with randomly right-censored paired data has been advocated by several researchers. Our test procedures utilize the permutation distributions of the test statistics based on a novel manner of permuting the scores. Permutation versions of tests for right-censored paired data and for two independent right-censored samples that use the proposed scoring systems are obtained as special cases of our test procedures. Simulation results show that our test procedures have high power for detecting scale and location shifts in exponential and log-logistic distributions for the survival times. We also demonstrate the advantages of our test procedures in terms of utilizing randomly occurring unpaired observations that are discarded in test procedures for paired data. The tests are applied to skin graft data previously reported elsewhere.
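The sketch below illustrates the permutation logic in a deliberately simplified form: plain ranks of uncensored observations are used instead of the censored-data Prentice-Wilcoxon or Akritas scores, while the permutation scheme (swapping the two members within each pair, relabelling the unpaired observations) mirrors how paired and unpaired data are combined. Function names and the data in the usage line are illustrative.

```python
# Simplified pooled-rank permutation test for paired plus unpaired samples.
import numpy as np
from scipy.stats import rankdata

def pooled_rank_stat(x_pairs, y_pairs, x_only, y_only):
    pooled = np.concatenate([x_pairs, y_pairs, x_only, y_only])
    scores = rankdata(pooled)                                   # plain ranks as scores
    n1, n2 = len(x_pairs), len(x_only)
    x_scores = np.concatenate([scores[:n1], scores[2 * n1:2 * n1 + n2]])
    y_scores = np.concatenate([scores[n1:2 * n1], scores[2 * n1 + n2:]])
    return x_scores.sum() - y_scores.sum()

def permutation_test(x_pairs, y_pairs, x_only, y_only, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    observed = pooled_rank_stat(x_pairs, y_pairs, x_only, y_only)
    count = 0
    for _ in range(n_perm):
        flip = rng.integers(0, 2, len(x_pairs)).astype(bool)    # swap within each pair
        xp = np.where(flip, y_pairs, x_pairs)
        yp = np.where(flip, x_pairs, y_pairs)
        unpaired = rng.permutation(np.concatenate([x_only, y_only]))  # relabel unpaired
        xo, yo = unpaired[:len(x_only)], unpaired[len(x_only):]
        if abs(pooled_rank_stat(xp, yp, xo, yo)) >= abs(observed):
            count += 1
    return count / n_perm

p_value = permutation_test(x_pairs=np.array([3.1, 4.0, 2.2, 5.1]),
                           y_pairs=np.array([2.5, 3.7, 1.9, 4.6]),
                           x_only=np.array([5.0, 4.4]),
                           y_only=np.array([3.3, 2.8, 4.1]))
```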
950.
Wang CY, Wang N, Wang S. Biometrics 2000, 56(2): 487–495.
We consider regression analysis when covariate variables are the underlying regression coefficients of another linear mixed model. A naive approach is to use each subject's repeated measurements, which are assumed to follow a linear mixed model, and obtain subject-specific estimated coefficients to replace the covariate variables. However, directly replacing the unobserved covariates in the primary regression by these estimated coefficients may result in a significantly biased estimator. The aforementioned problem can be viewed as a generalization of the classical additive error model where repeated measures are considered as replicates. To correct for these biases, we investigate a pseudo-expected estimating equation (EEE) estimator, a regression calibration (RC) estimator, and a refined version of the RC estimator. For linear regression, the first two estimators are identical under certain conditions. However, when the primary regression model is a nonlinear model, the RC estimator is usually biased. We thus consider a refined regression calibration estimator whose performance is close to that of the pseudo-EEE estimator but does not require numerical integration. The RC estimator is also extended to the proportional hazards regression model. In addition to the distribution theory, we evaluate the methods through simulation studies. The methods are applied to analyze a real dataset from a child growth study.
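A minimal sketch of the contrast between naive substitution and regression calibration in the classical additive-error setting that the paper generalizes: the true covariate is observed only through replicate measurements, and RC replaces the subject mean of the replicates by its best linear predictor of the true value before fitting the outcome model. All numbers are simulated for illustration.

```python
# Naive substitution vs. regression calibration for a mismeasured covariate.
import numpy as np

rng = np.random.default_rng(4)
n, reps = 500, 3
x = rng.normal(0.0, 1.0, n)                        # true (unobserved) covariate
w = x[:, None] + rng.normal(0.0, 0.8, (n, reps))   # replicate measurements W = X + error
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)        # outcome model with slope 2

w_bar = w.mean(axis=1)
sigma2_e = w.var(axis=1, ddof=1).mean()            # measurement-error variance
sigma2_x = w_bar.var(ddof=1) - sigma2_e / reps     # variance of the true covariate
lam = sigma2_x / (sigma2_x + sigma2_e / reps)      # reliability (attenuation) ratio
x_rc = w_bar.mean() + lam * (w_bar - w_bar.mean()) # calibrated covariate E[X | W-bar]

beta_naive = np.polyfit(w_bar, y, 1)[0]            # attenuated slope (biased toward 0)
beta_rc = np.polyfit(x_rc, y, 1)[0]                # approximately unbiased slope
```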