61.
Kaitlyn Cook, Wenbin Lu, Rui Wang. Biometrics, 2023, 79(3): 1670-1685
The Botswana Combination Prevention Project was a cluster-randomized HIV prevention trial whose follow-up period coincided with Botswana's national adoption of a universal test-and-treat strategy for HIV management. Of interest is whether, and to what extent, this change in policy modified the preventative effects of the study intervention. To address such questions, we adopt a stratified proportional hazards model for clustered interval-censored data with time-dependent covariates and develop a composite expectation-maximization algorithm that facilitates estimation of the model parameters without placing parametric assumptions on either the baseline hazard functions or the within-cluster dependence structure. We show that the resulting estimators for the regression parameters are consistent and asymptotically normal. We also propose, and provide theoretical justification for, the use of the profile composite likelihood function to construct a robust sandwich estimator for the variance. We characterize the finite-sample performance and robustness of these estimators through extensive simulation studies. Finally, we apply this stratified proportional hazards model to a re-analysis of the Botswana Combination Prevention Project, with the national adoption of a universal test-and-treat strategy now modeled as a time-dependent covariate.
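For reference, the model class described here can be written in a generic form (notation is illustrative and not necessarily the authors' exact specification): for individual i in cluster k and stratum s, with time-dependent covariate vector Z_{ki}(t),

```latex
\lambda_{s}\bigl(t \mid Z_{ki}(t)\bigr) \;=\; \lambda_{0s}(t)\,\exp\!\bigl\{\beta^{\top} Z_{ki}(t)\bigr\},
```

where the stratum-specific baseline hazards \lambda_{0s}(t) and the within-cluster dependence structure are left unspecified, and \beta is estimated from the clustered interval-censored observations via the composite EM algorithm.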
62.
Use of historical data and real-world evidence holds great potential to improve the efficiency of clinical trials. One major challenge is to borrow information from historical data effectively while maintaining reasonable type I error control and minimal bias. We propose the elastic prior approach to address this challenge. Unlike existing approaches, it proactively controls the behavior of information borrowing and the type I error by incorporating the well-known concept of a clinically significant difference through an elastic function, defined as a monotonic function of a congruence measure between the historical data and the trial data. The elastic function is constructed to satisfy a set of prespecified criteria so that the resulting prior borrows information strongly when the historical and trial data are congruent, but refrains from borrowing when they are incongruent. The elastic prior approach has the desirable property of being information-borrowing consistent; that is, it asymptotically controls the type I error at the nominal value regardless of whether the historical data are congruent with the trial data. Our simulation study of finite-sample characteristics confirms that, compared with existing methods, the elastic prior provides better type I error control and yields competitive or higher power. The proposed approach is applicable to binary, continuous, and survival endpoints.
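A minimal sketch of the elastic-function idea, assuming a logistic-type monotone function of a standardized congruence measure; the function forms, tuning constants, and congruence measure below are illustrative placeholders rather than the authors' exact construction:

```python
import numpy as np

def congruence(hist_mean, hist_se, trial_mean, trial_se):
    # Standardized distance between the historical and current estimates;
    # small values indicate congruent data, large values incongruent data.
    return abs(hist_mean - trial_mean) / np.sqrt(hist_se**2 + trial_se**2)

def elastic_weight(T, a=1.0, b=5.0):
    # Monotone decreasing elastic function: close to 1 when T is small
    # (strong borrowing), close to 0 when T is large (no borrowing).
    # a and b would be calibrated against prespecified criteria such as
    # a clinically significant difference.
    return 1.0 / (1.0 + np.exp(a * (T - b)))

# Example: scale the effective amount of historical information by the weight.
n_hist = 200
T = congruence(hist_mean=0.52, hist_se=0.03, trial_mean=0.49, trial_se=0.05)
n_borrowed = elastic_weight(T) * n_hist  # effective prior sample size
```

In the paper the elastic function modulates the prior itself; the sketch only illustrates how borrowing strength can be made a smooth, monotone function of congruence.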
63.
We study bias-reduced estimators of exponentially transformed parameters in generalized linear models (GLMs) and show how they can be used to obtain bias-reduced conditional (or unconditional) odds ratios in matched case-control studies. Two options are considered and compared: the explicit approach and the implicit approach. The implicit approach is based on the modified score function, where bias-reduced estimates are obtained by solving the modified score equations iteratively. The explicit approach is shown to be a one-step approximation of this iterative procedure. To apply these approaches to the conditional analysis of matched case-control studies, with potentially unmatched confounding and with several exposures, we exploit the relation between the conditional likelihood and the likelihood of the unconditional logit binomial GLM for matched pairs, and the Cox partial likelihood for matched sets with appropriately set-up data. The properties of the estimators are evaluated in a large Monte Carlo simulation study, and an illustration on a real dataset is given. Researchers reporting results on the exponentiated scale should use bias-reduced estimators, since otherwise the effects can be under- or overestimated, and the magnitude of the bias is especially large in studies with smaller sample sizes.
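One common formalization of the two options, in the spirit of Firth-type bias reduction (the paper's exact adjustment terms may differ), is:

```latex
% Implicit approach: solve the modified score equations iteratively,
U^{*}(\beta) \;=\; U(\beta) + A(\beta) \;=\; 0,
% where, for full exponential-family models such as the logit binomial GLM,
A_{r}(\beta) \;=\; \tfrac{1}{2}\,\operatorname{tr}\!\left\{ I(\beta)^{-1}\,\frac{\partial I(\beta)}{\partial \beta_{r}} \right\}.
% Explicit approach: a single Newton-type step from the maximum likelihood
% estimate \hat{\beta},
\tilde{\beta} \;=\; \hat{\beta} + I(\hat{\beta})^{-1} A(\hat{\beta}),
% with bias-reduced odds ratios then reported as \exp(\tilde{\beta}).
```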
64.
Analysts often estimate treatment effects in observational studies using propensity score matching techniques. When there are missing covariate values, analysts can multiply impute the missing data to create m completed data sets, estimate propensity scores on each completed data set, and use these to estimate treatment effects. However, relatively little attention has been paid to developing imputation models for the additional problem of missing treatment indicators, perhaps because of the risk of generating implausible imputations. Yet simply ignoring the missing treatment values, akin to a complete-case analysis, can also lead to problems when estimating treatment effects. We propose a latent class model to multiply impute missing treatment indicators. We illustrate its performance through simulations and with data from a study on determinants of children's cognitive development. This approach yields treatment effect estimates closer to the true treatment effect than conventional imputation procedures or a complete-case analysis.
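As a rough illustration of the multiple-imputation workflow (not the authors' latent class imputation model), the sketch below imputes missing treatment indicators with a simple stand-in logistic model, re-estimates propensity scores on each completed data set, matches on the score, and averages the m completed-data effect estimates; all names are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def impute_treatment(X, t_obs, missing):
    # Stand-in imputation model for the treatment indicator given covariates.
    # (The paper proposes a latent class model here; a logistic model is used
    # only to show where the imputation step plugs in.)
    fit = LogisticRegression(max_iter=1000).fit(X[~missing], t_obs[~missing])
    p = fit.predict_proba(X[missing])[:, 1]
    t = t_obs.copy()
    t[missing] = rng.binomial(1, p)
    return t

def matched_effect(X, t, y):
    # Propensity scores followed by 1:1 nearest-neighbour matching (with replacement).
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    treated = np.flatnonzero(t == 1)
    control = np.flatnonzero(t == 0)
    nearest = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]
    return float(np.mean(y[treated] - y[nearest]))

def mi_treatment_effect(X, t_obs, missing, y, m=20):
    # Average the m completed-data estimates (the point-estimate part of Rubin's rules).
    return float(np.mean([matched_effect(X, impute_treatment(X, t_obs, missing), y)
                          for _ in range(m)]))
```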
65.
The gold standard for investigating the efficacy of a new therapy is a (pragmatic) randomized controlled trial (RCT). This approach is costly, time-consuming, and not always practicable. At the same time, huge quantities of available patient-level control-condition data in analyzable format, from (former) RCTs or real-world data (RWD), are neglected. Alternative study designs are therefore desirable. The design presented here consists of setting up a prediction model for the outcome under the control condition for future patients. When a new treatment is to be tested against a control treatment, a single-arm trial of the new therapy is conducted, and the treatment effect is evaluated by comparing the outcomes of the single-arm trial against the predicted outcomes under the control condition. While this design has obvious advantages over classical RCTs (increased efficiency, lower cost, alleviating participants' fear of being assigned to the control treatment), there are several sources of bias. Our aim is to investigate whether and how such a design, the prediction design, may be used to provide information on treatment effects by leveraging external data sources. For this purpose, we investigated under what assumptions linear prediction models can predict the counterfactual outcomes of patients precisely enough to construct a test and an appropriate sample size formula for evaluating the average treatment effect in the population of a new study. A user-friendly R Shiny application (available at https://web.imbi.uni-heidelberg.de/PredictionDesignR/) facilitates the application of the proposed methods, and a real-world example illustrates them.
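A minimal sketch of the core idea, assuming a linear prediction model and a naive paired t-test; the paper's actual test and sample size formula additionally account for the prediction model's estimation error, and all names below are illustrative:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

def prediction_design_test(X_hist, y_hist, X_new, y_new, alpha=0.05):
    # 1) Learn the control-condition outcome model from external/historical data.
    control_model = LinearRegression().fit(X_hist, y_hist)
    # 2) Predict each single-arm patient's counterfactual outcome under control.
    y_control_pred = control_model.predict(X_new)
    # 3) Compare observed outcomes under the new treatment with the predictions.
    diff = np.asarray(y_new) - y_control_pred
    t_stat, p_value = stats.ttest_1samp(diff, popmean=0.0)
    return float(diff.mean()), float(t_stat), float(p_value), p_value < alpha
```

Treating the predictions as fixed, as this naive test does, ignores a key source of bias and variance; correcting for it is exactly what motivates the test and sample size formula developed in the paper.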
66.
The turnover measurement of proteins and proteoforms has been greatly facilitated by workflows coupling metabolic labeling with mass spectrometry (MS), including dynamic stable isotope labeling by amino acids in cell culture (dynamic SILAC) or pulsed SILAC (pSILAC). Very recent studies, including ours, have integrated the measurement of post-translational modifications (PTMs) at the proteome level (i.e., phosphoproteomics) with pSILAC experiments in steady-state systems, exploring the link between PTMs and turnover at the proteome scale. An open question in the field is how exactly to interpret these complex datasets from a biological perspective. Here, we present a novel pSILAC phosphoproteomic dataset obtained during a dynamic process of cell starvation using data-independent acquisition MS (DIA-MS). To provide an unbiased, "hypothesis-free" analysis framework, we developed a strategy to interrogate how phosphorylation dynamically impacts protein turnover across the time-series data. With this strategy, we discovered a complex relationship between phosphorylation and protein turnover that was previously underexplored. Our results further revealed a link between phosphorylation stoichiometry and the turnover of phosphorylated peptidoforms. Moreover, our results suggest that phosphoproteomic turnover diversity cannot directly explain the abundance regulation of phosphorylation during cell starvation, underscoring the importance of future studies addressing PTM site-resolved protein turnover.
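For context, turnover rates in pSILAC experiments are typically derived from a first-order labeling model (a generic formulation; the authors' fitting procedure for the dynamic starvation time course may differ):

```latex
% Fraction of newly synthesized (heavy-labeled) peptide after pulse time t:
\theta(t) \;=\; \frac{H}{H+L} \;=\; 1 - e^{-k t}
\quad\Longrightarrow\quad
k \;=\; -\frac{\ln\bigl(1-\theta(t)\bigr)}{t},
\qquad
T_{1/2} \;=\; \frac{\ln 2}{k}.
% Comparing k for a phosphorylated peptidoform with k for its unmodified
% counterpart is one way to relate a PTM site to protein turnover.
```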
67.
Research data management (RDM) requires standards, policies, and guidelines. Findable, accessible, interoperable, and reusable (FAIR) data management is critical for sustainable research. Collaborative approaches to managing FAIR-structured data are therefore becoming increasingly important for long-term, sustainable RDM, but they are applied rather hesitantly in bioengineering. One reason may be the interdisciplinary character of the research field: bioengineering, as the application of principles of biology and tools of process engineering, often has to meet differing criteria. RDM is further complicated by the fact that researchers from different scientific institutions must meet the criteria of their home institutions, which can lead to additional conflicts. Centrally provided general repositories that implement a collaborative approach and enable structured data storage from the outset can address these issues. In a biotechnology research network with over 20 tandem projects, it was demonstrated how FAIR RDM can be implemented through such a collaborative approach and the use of a common data structure. The network's experience also highlighted the importance of a structure within the repository for keeping research data available throughout the entire data lifecycle.
68.
DNA microarray technology permits the study of biological systems and processes on a genome-wide scale. Arrays based on cDNA clones, oligonucleotides, and genomic clones have been developed for investigations of gene expression, genetic analysis, and genomic changes associated with disease. Over the past 3-4 years, microarrays have become more widely available to the research community. This has occurred through the increased commercial availability of custom and generic arrays and the development of robotic equipment that has enabled array printing and analysis facilities to be established in academic research institutions. This brief review examines the public and commercial resources, and the microarray fabrication, data capture, and analysis equipment, currently available to the user.
69.
The passive membrane properties of the tangential cells in the fly lobula plate (CH, HS, and VS cells; Fig. 1) were determined by combining compartmental modeling and current-injection experiments. As a prerequisite, we built a digital database of the cells by 3D-reconstructing individual tangential cells from cobalt-stained material, including both CH cells (VCH and DCH), all three HS cells (HSN, HSE, and HSS), and most members of the VS cell family (Figs. 2, 3). In a first series of experiments, hyperpolarizing and depolarizing currents were injected to determine steady-state I-V curves (Fig. 4). At potentials more negative than rest, a linear relationship holds, whereas at potentials more positive than rest, an outward rectification is observed. Therefore, in all subsequent experiments in which a sinusoidal current of variable frequency was injected, a negative DC current was superimposed to keep the neurons in a hyperpolarized state. The resulting amplitude and phase spectra revealed an average steady-state input resistance of 4 to 5 MΩ and a cut-off frequency between 40 and 80 Hz (Fig. 5). To determine the passive membrane parameters R_m (specific membrane resistance), R_i (specific internal resistivity), and C_m (specific membrane capacitance), the experiments were repeated in computer simulations on compartmental models of the cells (Fig. 6). Good fits between the experimental and simulation data were obtained for the following values: R_m = 2.5 kΩcm², R_i = 60 Ωcm, and C_m = 1.5 µF/cm² for CH cells; R_m = 2.0 kΩcm², R_i = 40 Ωcm, and C_m = 0.9 µF/cm² for HS cells; R_m = 2.0 kΩcm², R_i = 40 Ωcm, and C_m = 0.8 µF/cm² for VS cells. An error analysis of the fitting procedure revealed an area of confidence in the R_m-R_i plane within which the R_m-R_i value pairs are still compatible with the experimental data, given the statistical fluctuations inherent in the experiments (Figs. 7, 8). We also investigated whether there are characteristic differences between different members of the same cell class and how much the exact placement of the electrode (within ±100 µm along the axon) influences the result of the simulation (Fig. 9). The membrane parameters were further examined by injection of a hyperpolarizing current pulse (Fig. 10). The resulting compartmental models (Fig. 11), based on the passive membrane parameters determined in this way, form the basis of forthcoming studies on dendritic integration and signal propagation in the fly tangential cells (Haag et al., 1997; Haag and Borst, 1997).
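As a back-of-the-envelope consistency check (a single-compartment estimate; the measured cutoff also depends on morphology and R_i), the CH-cell parameters give:

```latex
\tau_{m} \;=\; R_{m} C_{m} \;=\; 2.5\,\mathrm{k\Omega\,cm^{2}} \times 1.5\,\mu\mathrm{F/cm^{2}} \;=\; 3.75\,\mathrm{ms},
\qquad
f_{c} \;=\; \frac{1}{2\pi\,\tau_{m}} \;\approx\; 42\,\mathrm{Hz},
```

which falls within the measured 40-80 Hz range.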
70.
Triple-resonance experiments can be designed to provide useful information on spin-system topologies. In this paper we demonstrate optimized proton and carbon versions of the PFG-CT-HACANH and PFG-CT-HACA(CO)NH straight-through triple-resonance experiments that allow rapid and almost complete assignment of backbone Hα, 13Cα, 15N, and HN resonances in small proteins. This work provides a practical guide to using these experiments for determining resonance assignments in proteins and for identifying both intraresidue and sequential connections involving glycine residues. Two types of delay tuning within these pulse sequences provide phase discrimination of backbone Gly Cα and Hα resonances: (i) C-H phase discrimination by tuning of the refocusing period a_f; (ii) C-C phase discrimination by tuning of the 13C constant-time evolution period 2T_C. For small proteins, C-C phase tuning provides better S/N ratios in the PFG-CT-HACANH experiment, while C-H phase tuning provides better S/N ratios in PFG-CT-HACA(CO)NH. The same principles can also be applied to triple-resonance experiments that utilize 13C-13C COSY and TOCSY transfer from peripheral side-chain atoms with detection of backbone amide protons, for classification of side-chain spin-system topologies. Such data are valuable in algorithms for the automated analysis of resonance assignments in proteins.
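The glycine sign discrimination can be understood from the standard constant-time evolution argument (a generic account; the exact delay settings in these pulse sequences may differ):

```latex
% During a constant-time 13C evolution period of total length 2T_C, each
% aliphatic coupling partner contributes a cosine amplitude modulation:
S \;\propto\; \prod_{j} \cos\!\bigl(\pi\, J_{CC,j}\, 2T_{C}\bigr).
% With 2T_C tuned to approximately 1/J_{C\alpha C\beta} (about 28 ms), residues
% with a C\beta pick up a factor \cos\pi = -1, whereas glycine (which has no
% C\beta) does not, so Gly C\alpha cross peaks appear with opposite sign.
```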