61.
Leveraging information in aggregate data from external sources to improve estimation efficiency and prediction accuracy in smaller-scale studies has drawn a great deal of attention in recent years. Yet conventional methods often either ignore uncertainty in the external information or fail to account for heterogeneity between the internal and external studies. This article proposes an empirical likelihood-based framework that improves estimation of semiparametric transformation models by incorporating information about the t-year subgroup survival probability from external sources. The proposed estimation procedure incorporates an additional likelihood component to account for uncertainty in the external information and employs a density ratio model to characterize population heterogeneity. We establish the consistency and asymptotic normality of the proposed estimator and show that it is more efficient than the conventional pseudo-partial likelihood estimator that does not combine information. Simulation studies show that the proposed estimator exhibits little bias and outperforms the conventional approach even in the presence of information uncertainty and heterogeneity. The proposed methodology is illustrated with an analysis of a pancreatic cancer study.
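The idea of treating uncertain external information as an extra likelihood component can be sketched in a deliberately simplified setting: an exponential survival model stands in for the paper's semiparametric transformation model, the external and internal populations are assumed homogeneous (so no density ratio adjustment is needed), and an external t-year survival probability with a standard error enters as a Gaussian likelihood term. All data and parameter values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Internal study: exponential survival times (rate 0.2), censored at 5 years
n, lam_true = 200, 0.2
t = rng.exponential(1 / lam_true, n)
c = np.full(n, 5.0)
obs = np.minimum(t, c)
delta = (t <= c).astype(float)

# External source: 3-year survival probability with its standard error
t0, p_ext, se_ext = 3.0, np.exp(-lam_true * 3.0), 0.02

def logpost(lam):
    loglik = np.sum(delta * np.log(lam) - lam * obs)  # exponential log-likelihood
    # Extra likelihood component: external survival estimate with uncertainty
    penalty = -(np.exp(-lam * t0) - p_ext) ** 2 / (2 * se_ext ** 2)
    return loglik + penalty

grid = np.linspace(0.01, 1.0, 2000)
lam_internal = delta.sum() / obs.sum()                      # MLE, no external info
lam_combined = grid[np.argmax([logpost(l) for l in grid])]  # combined estimate
print(lam_internal, lam_combined)
```

The combined estimate is pulled toward the rate implied by the external survival probability, with the strength of the pull governed by the external standard error.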
62.
Kaitlyn Cook, Wenbin Lu, Rui Wang. Biometrics, 2023, 79(3): 1670-1685
The Botswana Combination Prevention Project was a cluster-randomized HIV prevention trial whose follow-up period coincided with Botswana's national adoption of a universal test-and-treat strategy for HIV management. Of interest is whether, and to what extent, this change in policy modified the preventative effects of the study intervention. To address such questions, we adopt a stratified proportional hazards model for clustered interval-censored data with time-dependent covariates and develop a composite expectation-maximization algorithm that facilitates estimation of model parameters without placing parametric assumptions on either the baseline hazard functions or the within-cluster dependence structure. We show that the resulting estimators for the regression parameters are consistent and asymptotically normal. We also propose, and provide theoretical justification for, the use of the profile composite likelihood function to construct a robust sandwich estimator for the variance. We characterize the finite-sample performance and robustness of these estimators through extensive simulation studies. Finally, we apply this stratified proportional hazards model to a re-analysis of the Botswana Combination Prevention Project, with the national adoption of a universal test-and-treat strategy now modeled as a time-dependent covariate.
63.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to effectively borrow information from historical data while maintaining a reasonable type I error and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, this approach proactively controls the behavior of information borrowing and type I errors by incorporating the well-known concept of a clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between historical data and trial data. The elastic function is constructed to satisfy a set of prespecified criteria such that the resulting prior strongly borrows information when historical and trial data are congruent, but refrains from borrowing when they are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent, that is, it asymptotically controls the type I error at the nominal value regardless of whether the historical data are congruent with the trial data. Our simulation study evaluating finite-sample characteristics confirms that, compared to existing methods, the elastic prior has better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.
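A toy version of this borrowing mechanism, assuming a normal endpoint with known unit variance and an illustrative logistic form for the elastic function (the paper instead calibrates the elastic function to prespecified criteria); all data are synthetic:

```python
import numpy as np

def elastic_weight(T, a=4.0, b=1.5):
    """Monotone decreasing elastic function of the congruence statistic T
    (illustrative logistic form, not the paper's calibrated construction)."""
    return 1.0 / (1.0 + np.exp(a * (T - b)))

def borrow_posterior_mean(y_trial, y_hist, sigma=1.0):
    nt, nh = len(y_trial), len(y_hist)
    # Congruence measure: standardized difference of historical and trial means
    T = abs(y_hist.mean() - y_trial.mean()) / (sigma * np.sqrt(1 / nt + 1 / nh))
    w = elastic_weight(T)
    n_eff = w * nh  # effective historical sample size actually borrowed
    return (nt * y_trial.mean() + n_eff * y_hist.mean()) / (nt + n_eff), w

rng = np.random.default_rng(1)
trial = rng.normal(0.0, 1.0, 50)
congruent_hist = rng.normal(0.0, 1.0, 200)    # same mean as trial
incongruent_hist = rng.normal(2.0, 1.0, 200)  # clearly shifted mean
m1, w1 = borrow_posterior_mean(trial, congruent_hist)
m2, w2 = borrow_posterior_mean(trial, incongruent_hist)
print(w1, w2)  # strong borrowing vs. near-zero borrowing
```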
64.
We study bias-reduced estimators of exponentially transformed parameters in generalized linear models (GLMs) and show how they can be used to obtain bias-reduced conditional (or unconditional) odds ratios in matched case-control studies. Two options are considered and compared: the explicit approach and the implicit approach. The implicit approach is based on the modified score function, where bias-reduced estimates are obtained by iteratively solving the modified score equations. The explicit approach is shown to be a one-step approximation of this iterative procedure. To apply these approaches to the conditional analysis of matched case-control studies, with potentially unmatched confounding and with several exposures, we exploit the relation between the conditional likelihood and the likelihood of the unconditional logit binomial GLM for matched pairs, and the Cox partial likelihood for matched sets with appropriately set-up data. The properties of the estimators are evaluated in a large Monte Carlo simulation study, and an illustration with a real dataset is presented. Researchers reporting results on the exponentiated scale should use bias-reduced estimators, since otherwise effects can be under- or overestimated, and the magnitude of the bias is especially large in studies with smaller sample sizes.
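The implicit approach can be sketched for ordinary (unconditional) logistic regression, where the modified score is the usual score plus a Firth-type hat-matrix correction; the conditional matched analysis builds on the same iteration via the likelihood relations described above. The data below are a synthetic completely separated example, for which ordinary maximum likelihood diverges but the bias-reduced estimate stays finite.

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Bias-reduced logistic regression: iterate the modified score equations
    (the 'implicit approach') until the modified score is zero."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1 / (1 + np.exp(-(X @ beta)))
        W = mu * (1 - mu)
        XW = X * W[:, None]
        info = X.T @ XW                  # Fisher information
        info_inv = np.linalg.inv(info)
        # Hat-matrix diagonal: h_i = w_i * x_i' I^{-1} x_i
        h = np.einsum('ij,jk,ik->i', XW, info_inv, X)
        # Modified score adds h_i * (1/2 - mu_i) to the usual residual
        score = X.T @ (y - mu + h * (0.5 - mu))
        step = info_inv @ score
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Complete separation: all controls at x=0, all cases at x=1
X = np.column_stack([np.ones(8), [0, 0, 0, 0, 1, 1, 1, 1]])
y = np.array([0.0, 0, 0, 0, 1, 1, 1, 1])
beta_hat = firth_logistic(X, y)
print(beta_hat)
```

For this saturated design the modified score solution corresponds to adding 1/2 to each cell of the 2x2 table, so the fitted log odds ratio is finite (about 4.39) instead of infinite.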
65.
Analysts often estimate treatment effects in observational studies using propensity score matching techniques. When there are missing covariate values, analysts can multiply impute the missing data to create m completed data sets, estimate propensity scores on each completed data set, and use these to estimate treatment effects. However, relatively little attention has been paid to developing imputation models that also handle missing treatment indicators, perhaps because of the risk of generating implausible imputations. Yet simply ignoring the missing treatment values, akin to a complete-case analysis, can also cause problems when estimating treatment effects. We propose a latent class model to multiply impute missing treatment indicators. We illustrate its performance through simulations and with data from a study on determinants of children's cognitive development. This approach yields treatment effect estimates closer to the true treatment effect than conventional imputation procedures or a complete-case analysis.
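To see why conventional imputation of missing treatment indicators can fall short, here is a minimal sketch that imputes the indicator from a simple logistic model of treatment given a covariate only (not the paper's latent class model) and uses regression adjustment in place of propensity score matching; all data are synthetic. Because the imputation model ignores the outcome, the pooled estimate is attenuated toward zero relative to the true effect of 1.0, which is the kind of bias a richer imputation model aims to reduce.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-x)))    # treatment assignment depends on x
y = 1.0 * t + 0.5 * x + rng.normal(size=n)   # true treatment effect = 1.0
miss = rng.random(n) < 0.2                   # ~20% missing treatment indicators

def logit_fit(X, z, iters=25):
    """Plain Newton-Raphson logistic regression (the imputation model)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-(X @ b)))
        W = mu * (1 - mu)
        b += np.linalg.solve(X.T @ (X * W[:, None]), X.T @ (z - mu))
    return b

def ate(t_vec):
    """Regression-adjusted effect: coefficient on t in y ~ 1 + t + x."""
    X = np.column_stack([np.ones(n), t_vec, x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Fit the imputation model P(t = 1 | x) on complete cases only
Xd = np.column_stack([np.ones(n), x])
b = logit_fit(Xd[~miss], t[~miss])
p = 1 / (1 + np.exp(-(Xd @ b)))

# Multiply impute the missing indicators and pool the m point estimates
m = 20
estimates = []
for _ in range(m):
    t_imp = t.astype(float).copy()
    t_imp[miss] = rng.binomial(1, p[miss])
    estimates.append(ate(t_imp))
pooled = float(np.mean(estimates))
print(pooled)  # noticeably below the true effect of 1.0
```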
66.
The gold standard for investigating the efficacy of a new therapy is a (pragmatic) randomized controlled trial (RCT). This approach is costly, time-consuming, and not always practicable. At the same time, huge quantities of patient-level control-condition data from (former) RCTs or real-world data (RWD), already available in analyzable format, are neglected. Alternative study designs are therefore desirable. The design presented here consists of setting up a prediction model for determining treatment effects under the control condition for future patients. When a new treatment is to be tested against a control treatment, a single-arm trial of the new therapy is conducted. The treatment effect is then evaluated by comparing the outcomes of the single-arm trial against the predicted outcomes under the control condition. While this design has obvious advantages over classical RCTs (increased efficiency, lower cost, alleviating participants' fear of being assigned to the control treatment), it has several sources of bias. Our aim is to investigate whether and how such a design, the prediction design, may be used to provide information on treatment effects by leveraging external data sources. For this purpose, we investigated under what assumptions linear prediction models can predict the counterfactual of patients precisely enough to construct a test and an appropriate sample size formula for evaluating the average treatment effect in the population of a new study. A user-friendly R Shiny application (available at: https://web.imbi.uni-heidelberg.de/PredictionDesignR/ ) facilitates the application of the proposed methods, and a real-world example illustrates them.
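A minimal numerical sketch of the prediction design, assuming a correctly specified linear prediction model and fully synthetic data: fit the model on historical controls, predict each single-arm patient's counterfactual control outcome, and test whether the observed-minus-predicted differences have mean zero.

```python
import numpy as np

rng = np.random.default_rng(3)

# Historical control data: outcome depends linearly on a baseline covariate
n_hist = 500
x_h = rng.normal(size=n_hist)
y_h = 2.0 + 1.5 * x_h + rng.normal(size=n_hist)

# Fit the linear prediction model on the historical controls
A = np.column_stack([np.ones(n_hist), x_h])
coef, *_ = np.linalg.lstsq(A, y_h, rcond=None)

# Single-arm trial of the new therapy: true treatment effect = 1.0
n_trial = 100
x_t = rng.normal(size=n_trial)
y_t = 2.0 + 1.5 * x_t + 1.0 + rng.normal(size=n_trial)

# Compare observed outcomes with predicted counterfactuals under control
pred = np.column_stack([np.ones(n_trial), x_t]) @ coef
diff = y_t - pred
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_trial))
print(diff.mean(), t_stat)
```

In this idealized setting the mean difference recovers the treatment effect; the bias sources discussed in the abstract arise precisely when the prediction model or the historical population deviates from these assumptions.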
67.
The turnover measurement of proteins and proteoforms has been greatly facilitated by workflows coupling metabolic labeling with mass spectrometry (MS), including dynamic stable isotope labeling by amino acids in cell culture (dynamic SILAC) or pulsed SILAC (pSILAC). Very recent studies, including ours, have integrated the measurement of post-translational modifications (PTMs) at the proteome level (i.e., phosphoproteomics) with pSILAC experiments in steady-state systems, exploring the link between PTMs and turnover at the proteome scale. An open question in the field is how exactly to interpret these complex datasets from a biological perspective. Here, we present a novel pSILAC phosphoproteomic dataset obtained during a dynamic process of cell starvation using data-independent acquisition MS (DIA-MS). To provide an unbiased, hypothesis-free analysis framework, we developed a strategy to interrogate how phosphorylation dynamically impacts protein turnover across the time-series data. With this strategy, we discovered a complex relationship between phosphorylation and protein turnover that was previously underexplored. Our results further revealed a link between phosphorylation stoichiometry and the turnover of phosphorylated peptidoforms. Moreover, our results suggest that phosphoproteomic turnover diversity cannot directly explain the abundance regulation of phosphorylation during cell starvation, underscoring the importance of future studies addressing PTM-site-resolved protein turnover.
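Turnover rates in pSILAC experiments are commonly summarized with a one-compartment model in which the labeled fraction follows f(t) = 1 - exp(-k t); a minimal sketch with synthetic values (the peptidoform-level analysis described in the text is far richer):

```python
import numpy as np

rng = np.random.default_rng(4)

# Pulse-labeling time course for one peptidoform: heavy-label fraction over time
k_true = 0.3                              # turnover rate constant (1/h)
times = np.array([1.0, 2.0, 4.0, 8.0])    # hours after the label pulse
frac = 1 - np.exp(-k_true * times)        # labeled fraction, f(t) = 1 - exp(-k t)
frac = np.clip(frac + rng.normal(0, 0.005, times.size), 1e-4, 1 - 1e-4)

# Linearize: -log(1 - f(t)) = k * t, then least squares through the origin
yv = -np.log(1 - frac)
k_hat = float(times @ yv / (times @ times))
half_life = np.log(2) / k_hat             # protein half-life in hours
print(k_hat, half_life)
```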
68.
Research data management (RDM) requires standards, policies, and guidelines. Findable, accessible, interoperable, and reusable (FAIR) data management is critical for sustainable research, so collaborative approaches for managing FAIR-structured data are becoming increasingly important for long-term, sustainable RDM. However, they are applied rather hesitantly in bioengineering. One reason may be the interdisciplinary character of the research field. In addition, bioengineering, as an application of biological principles and process-engineering tools, often has to meet differing criteria. RDM is further complicated by the fact that researchers from different scientific institutions must each meet the criteria of their home institution, which can lead to additional conflicts. Centrally provided general repositories implementing a collaborative approach that enables data storage from the outset are therefore needed. In a biotechnology research network with over 20 tandem projects, it was demonstrated how FAIR RDM can be implemented through such a collaborative approach and the use of a common data structure. The network also highlighted the importance of structure within a repository for keeping research data available throughout the entire data lifecycle.
69.
DNA microarray technology permits the study of biological systems and processes on a genome-wide scale. Arrays based on cDNA clones, oligonucleotides and genomic clones have been developed for investigations of gene expression, genetic analysis and genomic changes associated with disease. Over the past 3-4 years, microarrays have become more widely available to the research community. This has occurred through increased commercial availability of custom and generic arrays and the development of robotic equipment that has enabled array printing and analysis facilities to be established in academic research institutions. This brief review examines the public and commercial resources, the microarray fabrication and data capture and analysis equipment currently available to the user.
70.
The passive membrane properties of the tangential cells in the fly lobula plate (CH, HS, and VS cells, Fig. 1) were determined by combining compartmental modeling and current injection experiments. As a prerequisite, we built a digital base of the cells by 3D-reconstructing individual tangential cells from cobalt-stained material, including both CH cells (VCH and DCH), all three HS cells (HSN, HSE, and HSS) and most members of the VS cell family (Figs. 2, 3). In a first series of experiments, hyperpolarizing and depolarizing currents were injected to determine steady-state I-V curves (Fig. 4). At potentials more negative than resting, a linear relationship holds, whereas at potentials more positive than resting, an outward rectification is observed. Therefore, in all subsequent experiments, when a sinusoidal current of variable frequency was injected, a negative DC current was superimposed to keep the neurons in a hyperpolarized state. The resulting amplitude and phase spectra revealed an average steady-state input resistance of 4 to 5 MΩ and a cut-off frequency between 40 and 80 Hz (Fig. 5). To determine the passive membrane parameters R_m (specific membrane resistance), R_i (specific internal resistivity), and C_m (specific membrane capacitance), the experiments were repeated in computer simulations on compartmental models of the cells (Fig. 6). Good fits between experimental and simulation data were obtained for the following values: R_m = 2.5 kΩcm², R_i = 60 Ωcm, and C_m = 1.5 µF/cm² for CH cells; R_m = 2.0 kΩcm², R_i = 40 Ωcm, and C_m = 0.9 µF/cm² for HS cells; R_m = 2.0 kΩcm², R_i = 40 Ωcm, and C_m = 0.8 µF/cm² for VS cells. An error analysis of the fitting procedure revealed an area of confidence in the R_m-R_i plane within which the R_m-R_i value pairs are still compatible with the experimental data, given the statistical fluctuations inherent in the experiments (Figs. 7, 8).
We also investigated whether characteristic differences exist between members of the same cell class and how much the exact placement of the electrode (within ±100 µm along the axon) influences the result of the simulation (Fig. 9). The membrane parameters were further examined by injection of a hyperpolarizing current pulse (Fig. 10). The resulting compartmental models (Fig. 11), based on the passive membrane parameters determined in this way, form the basis of forthcoming studies on dendritic integration and signal propagation in the fly tangential cells (Haag et al., 1997; Haag and Borst, 1997).
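As a sanity check on the fitted parameters, the membrane time constant and the cutoff frequency of a single isopotential compartment follow directly from R_m and C_m; for spatially extended neurons like the tangential cells, this single-compartment estimate only approximates the measured 40-80 Hz range.

```python
import math

# Fitted passive parameters for HS cells (from the values quoted above)
R_m = 2.0e3   # specific membrane resistance, Ohm * cm^2
C_m = 0.9e-6  # specific membrane capacitance, F / cm^2

tau = R_m * C_m                # membrane time constant, s
f_c = 1 / (2 * math.pi * tau)  # isopotential cutoff frequency, Hz
print(tau * 1e3, f_c)          # ~1.8 ms, ~88 Hz
```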