931.
A series of phenoxy benzoxaboroles were synthesized and screened for their inhibitory activity against PDE4 and cytokine release. 5-(4-Cyanophenoxy)-2,3-dihydro-1-hydroxy-2,1-benzoxaborole (AN2728) showed potent activity both in vitro and in vivo. This compound is now in clinical development for the topical treatment of psoriasis and being pursued for the topical treatment of atopic dermatitis.
932.
A novel class of pyrrolidinyl-acetylenic thieno[3,2-d]pyrimidines has been identified which potently inhibit the EGFR and ErbB-2 receptor tyrosine kinases. Synthetic modifications of the pyrrolidine carbamate moiety result in a range of effects on enzyme and cellular potency. In addition, the impact of the absolute stereochemical configuration on cellular potency and oral mouse pharmacokinetics is described.
933.
Obesity in adolescents is associated with metabolic risk factors for type 2 diabetes, particularly insulin resistance and excessive accumulation of intrahepatic triglyceride (IHTG). The purpose of this study was to evaluate the effect of moderate weight loss on IHTG content and insulin sensitivity in obese adolescents who had normal oral glucose tolerance. Insulin sensitivity, assessed by using the hyperinsulinemic–euglycemic clamp technique in conjunction with stable isotopically labeled tracer infusion, and IHTG content, assessed by using magnetic resonance spectroscopy, were evaluated in eight obese adolescents (BMI ≥95th percentile for age and sex; age 15.3 ± 0.6 years) before and after moderate diet‐induced weight loss (8.2 ± 2.0% of initial body weight). Weight loss caused a 61.6 ± 8.5% decrease in IHTG content (P = 0.01), and improved both hepatic (56 ± 18% increase in hepatic insulin sensitivity index, P = 0.01) and skeletal muscle (97 ± 45% increase in insulin‐mediated glucose disposal, P = 0.01) insulin sensitivity. Moderate diet‐induced weight loss decreases IHTG content and improves insulin sensitivity in the liver and skeletal muscle in obese adolescents who have normal glucose tolerance. These results support the benefits of weight loss therapy in obese adolescents who do not have evidence of obesity‐related metabolic complications during a standard medical evaluation.
935.
Two dendrochronological properties – ring width and ring chemistry – were investigated in trees near Cinder Cone in Lassen Volcanic National Park, northeastern California, for the purpose of re-evaluating the date of its eruption. Cinder Cone is thought to have erupted in AD 1666 based on ring-width evidence, but interpreting ring-width changes alone is not straightforward because many forest disturbances can cause changes in ring width. Old Jeffrey pines growing in Cinder Cone tephra and elsewhere for control comparison were sampled. Trees growing in tephra show synchronous ring-width changes at AD 1666, but this ring-width signal could be considered ambiguous for dating the eruption because changes in ring width can be caused by other events. Trees growing in tephra also show changes in ring phosphorus, sulfur, and sodium during the late 1660s, but inter-tree variability in dendrochemical signals makes dating the eruption from ring chemistry alone difficult. The combination of dendrochemistry and ring-width signals improves confidence in dating the eruption of Cinder Cone over the analysis of just one ring-growth property. These results are similar to another case study using dendrochronology of ring width and ring chemistry at Parícutin, Michoacán, Mexico, a cinder cone that erupted beginning in 1943. In both cases, combining analysis of ring width and ring chemistry improved confidence in the dendro-dating of the eruptions.
936.
We investigated the correlated response of several key traits of Lythrum salicaria L. to water availability gradients in introduced (Iowa, USA) and native (Switzerland, Europe) populations. This was done to investigate whether plants exhibit a shift in life-history strategy during expansion into more stressful habitats during the secondary phase of invasion, as has recently been hypothesized by Dietz and Edwards (Ecology 87(6):1359, 2006). Plants in invaded habitats exhibited a correlated increase in longevity and decrease in overall size in the transition into more stressful mesic habitats. In contrast, plants in the native range only exhibited a decrease in height. Our findings are consistent with the hypothesis that secondary invasion is taking place in L. salicaria, allowing it to be more successful under the more stressful mesic conditions in the invaded range. If this trend continues, L. salicaria may become a more problematic species in the future.
937.
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of the Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
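The SSC proposal centers on web services for computer-to-computer data exchange, in contrast to the user-to-computer model of warehouses and federated databases. A minimal sketch of that pattern follows, with a hypothetical annotation service and record format (the endpoint path and field names are illustrative assumptions, not part of any published SSC API):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical annotation record that one service publishes for others to reuse.
RECORD = {"gene": "BRCA1", "source": "service-A", "annotations": ["DNA repair"]}

class AnnotationService(BaseHTTPRequestHandler):
    """Toy producer service: answers every GET with the record as JSON."""

    def do_GET(self):
        body = json.dumps(RECORD).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# Serve on an ephemeral localhost port in a background thread.
server = HTTPServer(("127.0.0.1", 0), AnnotationService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A second program consumes the service output directly; no human in the loop,
# so the record can flow straight into downstream analysis or another service.
with urlopen(f"http://127.0.0.1:{server.server_port}/annotations") as resp:
    record = json.load(resp)
server.shutdown()
```

The point of the sketch is the shape of the exchange: machine-readable records over HTTP that any pipeline stage can produce or consume, which is what lets services be chained without manual export and import steps.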
938.
Protein domain swapping has been repeatedly observed in a variety of proteins and is believed to result from destabilization due to mutations or changes in environment. Based on results from our studies and others, we propose that structures of the domain-swapped proteins are mainly determined by their native topologies. We performed molecular dynamics simulations of seven different proteins, known to undergo domain swapping experimentally, under mildly denaturing conditions and found in all cases that the domain-swapped structures can be recapitulated by using protein topology in a simple protein model. Our studies further indicated that, in many cases, domain swapping occurs at positions around which the protein tends to unfold prior to complete unfolding. This, in turn, enabled prediction of protein structural elements that are responsible for domain swapping. In particular, two distinct domain-swapped dimer conformations of the focal adhesion targeting domain of focal adhesion kinase were predicted computationally and were supported experimentally by data obtained from NMR analyses.
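The abstract does not specify its "simple protein model", but topology-based (Gō-like) models of this kind are typically built on the protein's native contact map, and a standard order parameter in such simulations is Q, the fraction of native contacts formed. A minimal sketch under assumed conventions (an 8 Å cutoff, pairs separated by more than two residues, and toy coordinates that are not a real protein):

```python
import numpy as np

def native_contacts(coords, cutoff=8.0):
    """Residue pairs (i, j) with |i - j| > 2 that lie within `cutoff` angstroms."""
    n = len(coords)
    # Full pairwise distance matrix via broadcasting.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return {(i, j) for i in range(n) for j in range(i + 3, n) if dist[i, j] < cutoff}

def fraction_native(coords, native, cutoff=8.0):
    """Q: fraction of the native contact set formed in a given conformation."""
    return len(native & native_contacts(coords, cutoff)) / len(native)

# Toy 6-residue chain; coordinates are illustrative only.
folded = np.array([[0, 0, 0], [4, 0, 0], [8, 0, 0],
                   [4, 3, 0], [0, 3, 0], [-4, 3, 0]], dtype=float)
native = native_contacts(folded)
q_folded = fraction_native(folded, native)         # 1.0 by construction
q_stretched = fraction_native(folded * 3, native)  # scaling breaks the contacts
```

In a Gō-like model, the energy function rewards exactly these native pairs, which is why the simulated structures are "determined by native topology": a swapped dimer can satisfy the same contact set across two chains.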
939.
Constructing mixtures of tagged or bar-coded DNAs for sequencing is an important requirement for the efficient use of next-generation sequencers in applications where limited sequence data are required per sample. There are many applications in which next-generation sequencing can be used effectively to sequence large mixed samples; an example is the characterization of microbial communities, where ≤1,000 sequences per sample are adequate to address research questions. Thus, it is possible to examine hundreds to thousands of samples per run on massively parallel next-generation sequencers. However, the cost savings from efficient utilization of sequence capacity are realized only if the production and management costs associated with construction of multiplex pools are also scalable. One critical step in multiplex pool construction is the normalization process, whereby equimolar amounts of each amplicon are mixed. Here we compare three approaches (spectroscopy, size-restricted spectroscopy, and quantitative binding) for normalization of large, multiplex amplicon pools for performance and efficiency. We found that the quantitative binding approach was superior and represents an efficient, scalable process for construction of very large multiplex pools with hundreds and perhaps thousands of individual amplicons included. We demonstrate the increased sequence diversity identified with higher throughput. Massively parallel sequencing can dramatically accelerate microbial ecology studies by allowing appropriate replication of sequence acquisition to account for temporal and spatial variations. Further, population studies to examine genetic variation, which require even lower levels of sequencing, should be possible where thousands of individual bar-coded amplicons are examined in parallel.
Emergent technologies that generate DNA sequence data are designed primarily to perform resequencing projects at reasonable cost. The result is a substantial decrease in per-base costs compared with traditional methods. However, these next-generation platforms do not readily accommodate projects that require obtaining moderate amounts of sequence from large numbers of samples. These platforms also have per-run costs that are significant and generally preclude large numbers of single-sample, nonmultiplexed runs. One example of research that is not readily supported is rRNA-directed metagenomic study of some human clinical samples, or environmental rRNA analysis of samples from communities with low diversity that require only thousands of sequences. Thus, strategies to utilize next-generation DNA sequencers efficiently for applications that require lower throughput are critical to capitalize on the efficiency and cost benefits of next-generation sequencing platforms.
Directed metagenomics based on amplification of rRNA genes is an important tool to characterize microbial communities in various environmental and clinical settings. In diverse environmental samples, large numbers of sequences are required to fully characterize the microbial communities (15). However, a lower number of sequences is generally adequate to answer specific research questions. In addition, the levels of diversity in human clinical samples are usually lower than what is observed in environmental samples (for example, see reference 7).
The Roche 454 Genome Sequencer FLX pyrosequencer (referred to as 454 FLX hereafter) is the most useful platform for rRNA-directed metagenomics because it currently provides the longest read lengths of any next-generation sequencing platform (1, 14). Computational analysis has shown that the 250-nucleotide read length (available with the 454 FLX-LR chemistry) is adequate for identification of bacteria if the amplified region is properly positioned within variable regions of the small-subunit rRNA (SSU-rRNA) gene (9, 10).
In this study, we used the 454 FLX-LR genome sequencing platform and chemistry, which provides >400,000 sequences of ∼250 bp per run. After we conducted this study, a new reagent set (454 FLX-XLR titanium chemistry) was released, which further increases throughput to >1,000,000 reads and read lengths to >400 bp (Roche). The 454 FLX platform dramatically reduces the per-base cost of obtaining sequence, and the plate can be physically separated into between 2 and 16 lanes; this physical separation, however, reduces overall sequencing output by up to 40% when comparing 16 lanes with 2. For applications where modest sequencing depth (∼1,000 sequences per sample) is adequate to address research questions, physical separation does not allow adequate sample multiplexing, because even a 1/16 454 FLX-LR plate run is expected to produce ∼15,000 reads. Further, the utility of the platform as a screening tool at 16-plex is limited by cost per run.
A solution that makes next-generation sequencing economical for projects such as rRNA-directed metagenomics is to use bar-coded primers to multiplex amplicon pools so they can be sequenced together and computationally separated afterward (6). To accomplish this strategy successfully, precise normalization of the DNA concentrations of the individual amplicons in the multiplex pools is essential, particularly when large numbers of pooled samples are sequenced in parallel. There are several potential methods for normalizing concentrations of amplicons included in multiplex pools, but the relative and absolute performance of each approach has not been compared.
Here we present a direct quantitative comparison of three available methods for amplicon pool normalization for downstream next-generation sequencing. The central goal of the study was to identify the most effective method for normalizing multiplex pools containing >100 individual amplicons. We evaluated each pooling approach by 454 sequencing and compared the observed frequencies of sequences from the different pooled bar-coded amplicons. From these data, we determined the efficacy of each method based on the following factors: (i) how well normalized the sequences within the pool were, (ii) the proportion of samples failing to meet a minimum threshold of sequences per sample, and (iii) the overall efficiency (speed and labor required) of the multiplexing process.
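The normalization step the study compares is, at bottom, equimolar mixing: each bar-coded amplicon should contribute the same molar amount to the pool. A minimal sketch of that arithmetic, with hypothetical concentration measurements (the study's contribution is how concentrations are measured or equalized, not this calculation):

```python
def equimolar_volumes(conc_nm, target_fmol):
    """Volume (in uL) of each amplicon that contributes `target_fmol` to the pool.

    conc_nm maps amplicon name to measured concentration in nM. Since
    1 nM equals 1 fmol/uL, volume = target amount / concentration.
    """
    return {name: target_fmol / c for name, c in conc_nm.items()}

# Hypothetical measurements for three bar-coded amplicons.
volumes = equimolar_volumes(
    {"sample_A": 20.0, "sample_B": 5.0, "sample_C": 10.0}, target_fmol=50.0
)
# The most dilute amplicon (sample_B here) contributes the largest volume.
```

The practical difficulty the paper addresses is upstream of this step: with >100 amplicons, small per-sample errors in the concentration estimates translate directly into over- and under-represented bar codes in the sequence output.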
940.
Despite the widespread study of genetic variation in admixed human populations, such as African-Americans, there has not been an evaluation of the effects of recent admixture on patterns of polymorphism or on inferences about population demography. These issues are particularly relevant because estimates of the timing and magnitude of population growth in Africa have differed among previous studies, some of which examined African-American individuals. Here we use simulations and single-nucleotide polymorphism (SNP) data collected through direct resequencing and genotyping to investigate these issues. We find that when estimating the current population size and magnitude of recent growth in an ancestral population using the site frequency spectrum (SFS), it is possible to obtain reasonably accurate parameter estimates from samples drawn from the admixed population under certain conditions. We also show that methods for demographic inference that use haplotype patterns are more sensitive to recent admixture than are methods based on the SFS. Analysis of human genetic variation data from the Yoruba people of Ibadan, Nigeria, and from African-Americans supports the predictions from the simulations. Our results have important implications for the evaluation of previous population genetic studies that considered African-American individuals as a proxy for individuals from West Africa, as well as for future population genetic studies of additional admixed populations.
Studies of archeological and genetic data show that anatomically modern humans originated in Africa and more recently left Africa to populate the rest of the world (Tishkoff and Williams 2002; Barbujani and Goldstein 2004; Garrigan and Hammer 2006; Reed and Tishkoff 2006; Campbell and Tishkoff 2008; Jakobsson et al. 2008; Li et al. 2008). Given the central role Africa has played in the origin of diverse human populations, understanding patterns of genetic variation and the demographic history of populations within Africa is important for understanding the demographic history of global human populations. The availability of large-scale single-nucleotide polymorphism (SNP) data sets, coupled with recent advances in statistical methodology for inferring parameters in population genetic models, provides a powerful means of accomplishing these goals (Keinan et al. 2007; Boyko et al. 2008; Lohmueller et al. 2009; Nielsen et al. 2009).
It is important to realize that studies of African demographic history using genetic data have come to qualitatively different conclusions regarding important parameters. Some recent studies have found evidence for ancient (>100,000 years ago) two- to fourfold growth in African populations (Adams and Hudson 2004; Marth et al. 2004; Keinan et al. 2007; Boyko et al. 2008). Other studies have found evidence of very recent growth (Pluzhnikov et al. 2002; Akey et al. 2004; Voight et al. 2005; Cox et al. 2009; Wall et al. 2009) or could not reject a model with a constant population size (Pluzhnikov et al. 2002; Voight et al. 2005). It is unclear why studies found such different parameter estimates. However, these studies all differ from each other in the amount of data considered, the types of data used (e.g., SNP genotypes vs. full resequencing), the genomic regions studied (e.g., noncoding vs. coding SNPs), and the types of demographic models considered (e.g., including migration vs. not including migration post-separation of African and non-African populations).
Another important way in which studies of African demographic history differ from each other is in the populations sampled. Some studies have focused on genetic data from individuals sampled from within Africa (Pluzhnikov et al. 2002; Adams and Hudson 2004; Voight et al. 2005; Keinan et al. 2007; Cox et al. 2009; Wall et al. 2009), while other studies included American individuals with African ancestry (Adams and Hudson 2004; Akey et al. 2004; Marth et al. 2004; Boyko et al. 2008). While there is no clear correspondence between the studies that sampled native African individuals (as opposed to African-Americans) and particular growth scenarios, it is clear from previous studies that African-American populations differ from African populations in their recent demographic history. In particular, genetic studies suggest that there is wide variation in the degree of European admixture among African-American individuals in the United States and that they have, on average, ∼80% African ancestry and ∼20% European ancestry (Parra et al. 1998; Pfaff et al. 2001; Falush et al. 2003; Patterson et al. 2004; Tian et al. 2006; Lind et al. 2007; Reiner et al. 2007; Price et al. 2009; Bryc et al. 2010). Furthermore, both historical records and genetic evidence suggest that the admixture process began quite recently, within the last 20 generations (Pfaff et al. 2001; Patterson et al. 2004; Seldin et al. 2004; Tian et al. 2006). Recent population admixture can alter patterns of genetic variation in a discernible and predictable way. For example, recently admixed populations will exhibit correlation in allele frequencies (i.e., linkage disequilibrium) among markers that differ in frequency between the parental populations. This so-called admixture linkage disequilibrium (LD) (Chakraborty and Weiss 1988) can extend over long physical distances (Lautenberger et al. 2000) and decays exponentially with time since the admixture process began (i.e., recently admixed populations typically exhibit LD over longer physical distances than anciently admixed populations).
While it is clear that African-American populations have a different recent demographic history than do African populations from within Africa, and that admixture tracts can be identified in admixed individuals (Falush et al. 2003; Patterson et al. 2004; Tang et al. 2006; Sankararaman et al. 2008a,b; Price et al. 2009; Bryc et al. 2010), the effect that admixture has on other patterns of genetic variation remains unclear. For example, Xu et al. (2007) found similar LD decay patterns when comparing African-American and African populations. It is also unclear whether recent admixture affects our ability to reconstruct ancient demographic events (such as expansions that predate the spread of humans out of Africa) from whole-genome SNP data. Most studies of demographic history have summarized genome-wide SNP data by allele frequency or haplotype summary statistics. If these summary statistics are not sensitive to the recent European admixture, then African-American samples may yield estimates of demographic parameters that are close to the true parameters for the ancestral, unsampled African populations. This would suggest that the differences in growth parameter estimates obtained from African populations cannot be explained by some studies sampling African-American individuals and others sampling African individuals from within Africa. However, if these statistics are sensitive to recent admixture, then they may give biased estimates of growth parameters.
Here, we examine the effect of recent admixture on the estimation of population demography. In particular, we estimate growth parameters from simulated data sets using SNP frequencies as well as a recently developed haplotype summary statistic (Lohmueller et al. 2009). We compare the demographic parameter estimates made from the admixed and nonadmixed populations and find that some parameter estimates are qualitatively similar between the two populations when inferred using allele frequencies. Inferences of growth using haplotype-based approaches appear to be more sensitive to recent admixture than inferences based on SNP frequencies. We discuss the implications of our results for interpreting studies of demography in admixed populations.
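The text above notes that admixture LD decays exponentially with time since admixture, which is why haplotype-based statistics are more sensitive to a recent admixture pulse than SFS-based ones. A minimal sketch of that decay under a single-pulse model (all parameter values are hypothetical illustrations):

```python
def admixture_ld(d0, r, t):
    """Admixture LD between two markers t generations after one admixture pulse.

    d0: LD created at the time of admixture; r: recombination fraction between
    the markers; t: generations since admixture. Decay follows d0 * (1 - r)**t.
    """
    return d0 * (1.0 - r) ** t

# Twenty generations, roughly the African-American time scale cited above:
tight = admixture_ld(0.2, r=0.001, t=20)  # tightly linked: LD largely retained
loose = admixture_ld(0.2, r=0.05, t=20)   # loosely linked: LD mostly decayed
```

Because tightly linked markers still carry most of their admixture LD after 20 generations, long-range haplotype structure records the admixture event even when single-site allele frequencies, and hence the SFS, are only mildly perturbed.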