Search results: 2,053 articles in total (1,906 subscription full-text, 147 free), publication years 1920-2024; search time 15 ms.
21.
Leukotriene A4 hydrolase (LTA4H) is a bifunctional zinc-dependent metalloprotease bearing both an epoxide hydrolase activity, which produces the pro-inflammatory leukotriene LTB4, and an aminopeptidase activity whose physiological relevance has long been ignored. Distinct substrates are commonly used for each activity, although none is completely satisfactory: LTA4, the substrate for the hydrolase activity, is unstable and inactivates the enzyme, whereas the amino acid β-naphthylamides and para-nitroanilides used as aminopeptidase substrates are poor and nonselective. Based on the three-dimensional structure of LTA4H, we describe a new, specific, high-affinity fluorogenic substrate, PL553 [L-(4-benzoyl)phenylalanyl-β-naphthylamide], with both in vitro and in vivo applications. PL553 exhibits a catalytic efficiency (kcat/Km) of 3.8 ± 0.5 × 10⁴ M⁻¹ s⁻¹ with human recombinant LTA4H and limits of detection and quantification below 1-2 ng. The PL553 assay was validated by measuring the inhibitory potency of known LTA4H inhibitors and was used to characterize new specific amino-phosphinic inhibitors. The LTA4H inhibition measured with PL553 in mouse tissues after intravenous administration of inhibitors also correlated with a reduction in LTB4 levels. This establishes the assay as the first to allow easy measurement of endogenous LTA4H activity and specific in vitro screening of new LTA4H inhibitors.
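A catalytic efficiency like the one reported above is typically obtained from a Michaelis-Menten fit of initial-rate data. The sketch below illustrates the calculation only; the substrate concentrations, rates and enzyme concentration are invented, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v0 = Vmax * [S] / (Km + [S])."""
    return vmax * s / (km + s)

# Hypothetical assay: substrate concentrations (uM) and initial rates (uM/s)
e_total = 0.01                                    # enzyme concentration, uM (assumed)
s = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0, 100.0])
v0 = michaelis_menten(s, 0.5, 12.0)               # noise-free synthetic rates

(vmax, km), _ = curve_fit(michaelis_menten, s, v0, p0=(1.0, 10.0))
kcat = vmax / e_total                             # turnover number, s^-1
efficiency = kcat / (km * 1e-6)                   # Km converted from uM to M
print(f"kcat/Km = {efficiency:.3g} M^-1 s^-1")
```

With real fluorescence data, v0 would come from the linear phase of product-formation curves rather than being generated synthetically.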
22.
Lactobacillus gasseri K7 is a probiotic strain that produces the bacteriocins gassericin K7 A and K7 B. To develop a real-time quantitative PCR assay for the detection of L. gasseri K7, 18 reference strains of the Lactobacillus acidophilus group and 45 faecal samples from adults who had never consumed strain K7 were tested by PCR with 14 pairs of primers specific for gassericin K7 A and K7 B gene determinants. Incomplete gassericin K7 A or K7 B gene clusters were found to be dispersed among different Lactobacillus strains as well as in the faecal microbiota. One pair of primers was found to be specific for the complete gene cluster of gassericin K7 A and one for that of gassericin K7 B. Real-time PCR analysis of faecal samples spiked with strain K7 revealed that the primers specific for the gassericin K7 A gene cluster were more suitable for quantitative determination than those for gassericin K7 B, owing to their lower detection limit. Targeting the gassericin K7 A or K7 B gene cluster with specific primers could thus be used for detection and quantification of L. gasseri K7 in human faecal samples without prior cultivation. The results of this study also provide new insights into the prevalence of bacteriocin-encoding genes in the gastrointestinal tract.
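Quantitative real-time PCR assays of this kind rest on a standard curve: Ct values from a dilution series are regressed against log10 copy number, amplification efficiency is derived from the slope, and unknowns are interpolated. The dilution series and Ct values below are invented for illustration and are not from the study.

```python
import numpy as np

# Hypothetical standard curve: Ct values for 10-fold dilutions of a
# cloned gassericin K7 A target (copy numbers are assumed).
log_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])
ct = np.array([14.1, 17.5, 20.8, 24.2, 27.6])

m, b = np.polyfit(log_copies, ct, 1)     # Ct = m * log10(copies) + b
efficiency = 10 ** (-1.0 / m) - 1.0      # ~1.0 corresponds to 100% per cycle

def copies_from_ct(sample_ct):
    """Interpolate copy number for an unknown sample from its Ct."""
    return 10 ** ((sample_ct - b) / m)

print(f"slope = {m:.2f}, efficiency = {efficiency:.0%}")
print(f"Ct 22.0 -> {copies_from_ct(22.0):.2e} copies")
```

A slope near -3.3 (efficiency near 100%) is the usual acceptance criterion for such curves; the lower detection limit mentioned in the abstract corresponds to the highest Ct still falling on the linear range.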
24.
The 'Atribacteria' is a candidate phylum in the Bacteria recently proposed to include members of the OP9 and JS1 lineages. OP9 and JS1 are globally distributed, and in some cases abundant, in anaerobic marine sediments, geothermal environments, anaerobic digesters and reactors, and petroleum reservoirs. However, the monophyly of OP9 and JS1 has been questioned, and their physiology and ecology remain largely enigmatic owing to a lack of cultivated representatives. Here, cultivation-independent genomic approaches were used to provide a first comprehensive view of the phylogeny, conserved genomic features and metabolic potential of members of this ubiquitous candidate phylum. Previously available and heretofore unpublished OP9 and JS1 single-cell genomic data sets were used as recruitment platforms for the reconstruction of atribacterial metagenome bins from a terephthalate-degrading reactor biofilm and from the monimolimnion of meromictic Sakinaw Lake. The single-cell genomes and metagenome bins together comprise six species- to genus-level groups that represent most major lineages within OP9 and JS1. Phylogenomic analyses of these combined data sets confirmed the monophyly of the 'Atribacteria', inclusive of OP9 and JS1. Additional conserved features within the 'Atribacteria' were identified, including a gene cluster encoding putative bacterial microcompartments that may be involved in aldehyde and sugar metabolism, energy conservation and carbon storage. Comparative analysis of the metabolic potential inferred from these data sets revealed that members of the 'Atribacteria' are likely heterotrophic anaerobes that lack respiratory capacity, with some lineages predicted to specialize in either primary fermentation of carbohydrates or secondary fermentation of organic acids, such as propionate.
25.
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both these product types are plagued by contamination. Since these genomes are now generated in a high-throughput manner, and sequences from them propagate into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes, clean and contaminant, using a combination of homology- and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). The procedure operates at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.

Recent technological advancements have enabled the large-scale sampling of genomes from uncultured microbial taxa, through the high-throughput sequencing of single amplified genomes (SAGs; Rinke et al., 2013; Swan et al., 2013) and the assembly and binning of genomes from metagenomes (GMGs; Cuvelier et al., 2010; Sharon and Banfield, 2013). The importance of these products in assessing community structure and function has been established beyond doubt (Kalisky and Quake, 2011). Multiple Displacement Amplification (MDA) and sequencing of single cells has been immensely successful in capturing rare and novel phyla, generating valuable references for phylogenetic anchoring.
However, efforts to conduct MDA and sequencing in a high-throughput manner have been heavily impaired by contamination from DNA introduced by the environmental sample, as well as during the MDA or sequencing process (Woyke et al., 2011; Engel et al., 2014; Field et al., 2014). Similarly, metagenome binning and assembly often carry various errors and artifacts depending on the methods used (Nielsen et al., 2014). Even cultured isolate genomes have been shown to lack immunity to contamination with other species (Parks et al., 2014; Mukherjee et al., 2015). As sequencing of these genome product types rapidly increases, contaminant sequences are finding their way into public databases as reference sequences. It is therefore extremely important to define standardized and automated protocols for quality control and decontamination, which would go a long way towards establishing quality standards for all microbial genome product types.

Current procedures for decontamination and quality control of genome sequences in single cells and metagenome bins are heavily manual and can consume hours per megabase when performed by expert biologists. Supervised decontamination typically involves homology-based inspection of ribosomal RNA sequences and protein-coding genes, as well as visual analysis of k-mer frequency plots and guanine-cytosine content (Clingenpeel, 2015). Manual decontamination is also possible through the software SmashCell (Harrington et al., 2010), which contains a tool for visual identification of contaminants from a self-organizing map and corresponding U-matrix. Another existing tool, DeconSeq (Schmieder and Edwards, 2011), automatically removes contaminant sequences; however, contaminant databases are required as input.
The former lacks automation, whereas the latter requires prior knowledge of contaminants, rendering both applications impractical for high-throughput decontamination.

Here, we introduce ProDeGe, the first fully automated computational protocol for decontamination of genomes. ProDeGe uses a combination of homology-based and sequence-composition-based approaches to separate contaminant sequences from the target genome draft. It has been pre-calibrated to discard at least 84% of the contaminant sequence, which results in retention of a median 84% of the target sequence. The standalone software is freely available at http://prodege.jgi-psf.org//downloads/src and can be run on any system with Perl, R (R Core Team, 2014), Prodigal (Hyatt et al., 2010) and NCBI Blast (Camacho et al., 2009) installed. A graphical viewer allowing further exploration of data sets and export of contigs accompanies the web application for ProDeGe at http://prodege.jgi-psf.org, which is open to the wider scientific community as a decontamination service (Supplementary Figure S1).

The assembly and corresponding NCBI taxonomy of the data set to be decontaminated are required inputs to ProDeGe (Figure 1a). Contigs are annotated with genes, following which eukaryotic contamination is removed based on nucleotide-level homology of genes, using the eukaryotic subset of NCBI's Nucleotide database as the reference. For detecting prokaryotic contamination, a curated database of reference contigs from the set of high-quality genomes within the Integrated Microbial Genomes (IMG; Markowitz et al., 2014) system is used as the reference. This ensures that errors in public reference databases due to poor-quality sequencing, assembly and annotation do not negatively impact the decontamination process.
Contigs determined to belong to the target organism based on nucleotide-level homology to sequences in the above database are defined as 'Clean', whereas those aligned to other organisms are defined as 'Contaminant'. Contigs whose origin cannot be determined by alignment are classified as 'Undecided'. The classified clean and contaminant contigs are used to calibrate the separation in the subsequent 5-mer-based binning module, which classifies undecided contigs as 'Clean' or 'Contaminant' using principal components analysis (PCA) of 5-mer frequencies. The separation cutoff can also be specified by the user. When data sets do not have taxonomy deeper than phylum level, or a single confident taxonomic bin cannot be detected using sequence alignment, 9-mer-based binning alone is used, owing to its more accurate overall classification. In the absence of a user-defined cutoff, a pre-calibrated cutoff targeting 80% or greater specificity separates the clean contigs from contaminant sequences in the resulting PCA of the 9-mer frequency matrix. Details on ProDeGe's custom database, evaluation of the performance of the system, and exploration of the parameter space to calibrate ProDeGe for a highly accurate classification rate are provided in the Supplementary Material.

Figure 1. (a) Schematic overview of the ProDeGe engine. (b) Features of data sets used to validate ProDeGe: SAGs from the Arabidopsis endophyte sequencing project, the MDM project, public data sets found in IMG but not sequenced at the JGI, as well as genomes from metagenomes.
All the data and results can be found in Supplementary Table S3.

The performance of ProDeGe was evaluated using 182 manually screened SAGs (Figure 1b, Supplementary Table S1) from two studies whose data sets are publicly available within the IMG system: 107 SAGs from an Arabidopsis endophyte sequencing project and 75 SAGs from the Microbial Dark Matter (MDM) project (only 75 of the 201 MDM SAGs had a 1:1 mapping between contigs in the unscreened and manually screened versions, hence these were used; Rinke et al., 2013). Manual curation of these SAGs demonstrated that the use of ProDeGe prevented 5311 potentially contaminated contigs in these data sets from entering public databases. Figure 2a shows the sensitivity vs specificity plot of ProDeGe results for the above data sets. Most of the data points in Figure 2a cluster in the top right of the box, reflecting a median retention of 89% of the clean sequence (sensitivity) and a median rejection of 100% of the sequence of contaminant origin (specificity). In addition, on average, 84% of the bases of a data set are accurately classified. ProDeGe performs best when the target organism has sequenced homologs at the class level or deeper in its high-quality prokaryotic nucleotide reference database. If the target organism's taxonomy is unknown or no deeper than domain level, or there are few contigs with taxonomic assignments, a target bin cannot be assessed, and ProDeGe therefore removes contaminant contigs using sequence composition only. The few samples in Figure 2a that show a higher rate of false positives (lower specificity) and/or reduced sensitivity typically occur when the data set contains few contaminant contigs or when ProDeGe incorrectly assumes that the largest bin is the target bin. Some data sets contain a higher proportion of contamination than target sequence, and ProDeGe's performance can suffer under this condition.
However, under all other conditions, ProDeGe demonstrates high speed, specificity and sensitivity (Figure 2). In addition, ProDeGe achieves better overall classification when measured in nucleotides than in contigs, illustrating that longer contigs are more accurately classified (Supplementary Table S1).

Figure 2. ProDeGe accuracy and performance scatterplots of 182 manually curated single amplified genomes (SAGs), where each symbol represents one SAG data set. (a) Accuracy shown as sensitivity (proportion of bases confirmed 'Clean') vs specificity (proportion of bases confirmed 'Contaminant') for the Endophyte and Microbial Dark Matter (MDM) data sets. Symbol size reflects input data set size in megabases. Most points cluster in the top right of the plot, showing ProDeGe's high accuracy. Median and average overall results are shown in Supplementary Table S1. (b) ProDeGe completion time in central processing unit (CPU) core hours for the 182 SAGs. ProDeGe operates at an average rate of 0.30 CPU core hours per megabase of sequence. PCA of a 9-mer frequency matrix is computationally more expensive than the PCA of a 5-mer frequency matrix used with blast-binning. The lack of known taxonomy for the MDM data sets prevents blast-binning, hence their longer completion times compared with the endophyte data sets, whose known taxonomy permits blast-binning.

All SAGs used in the evaluation of ProDeGe were assembled using SPAdes (Bankevich et al., 2012). In-house testing has shown that reads assembled with SPAdes from different strains, or even slightly divergent species of the same genus, may be combined into the same contig (personal communications, KT and Robert Bowers). Ideally, the DNA in a well that gets sequenced belongs to a single cell.
In the best case, contaminant sequences need to be at least from a different species to be recognized as such by the homology-based screening stage. In the absence of closely related sequenced organisms, contaminant sequences need to be at least from a different genus to be recognized as such by the composition-based screening stage (Supplementary Material). Thus, there is little risk of ProDeGe separating sequences from clonal populations or strains. We have found species- and genus-level contamination in MDA samples to be rare.

To evaluate the quality of publicly available uncultured genomes, ProDeGe was used to screen 185 SAGs and 14 GMGs (Figure 1b). Compared with CheckM (Parks et al., 2014), a tool that estimates genome sequence contamination using marker genes, ProDeGe generally marks a higher proportion of sequence as 'Contaminant' (Supplementary Table S2). This is because ProDeGe has been calibrated to perform at high specificity levels. The command-line version of ProDeGe allows users to conduct their own calibration and specify a user-defined distance cutoff. Further, CheckM only outputs the proportion of contamination, whereas ProDeGe labels each contig as 'Clean' or 'Contaminant' during the process of automated removal.

The web application for ProDeGe allows users to export clean and contaminant contigs, examine contig gene calls with their corresponding taxonomies, and discover contig clusters in the first three components of their k-dimensional space. Non-linear approaches for dimensionality reduction of k-mer vectors are gaining popularity (van der Maaten and Hinton, 2008), but we observed no systematic advantage of t-Distributed Stochastic Neighbor Embedding over PCA (Supplementary Figure S2).

ProDeGe is the first step towards establishing a standard for quality control of genomes from both cultured and uncultured microorganisms.
It is valuable for preventing the dissemination of contaminated sequence data into public databases and the misleading analyses that would result. The fully automated nature of the pipeline relieves scientists of hours of manual screening, producing reliably clean data sets and enabling high-throughput screening of data sets for the first time. ProDeGe therefore represents a critical component in our toolkit in an era of next-generation DNA sequencing and cultivation-independent microbial genomics.
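The composition-based stage described above can be sketched in miniature: compute a 5-mer frequency vector per contig, run a PCA, and bin contigs along the first principal component. The contigs below are simulated with different GC contents purely so that two compositional clusters appear; ProDeGe's reference databases, calibration and specificity cutoffs are not reproduced here.

```python
import random
from itertools import product

import numpy as np

K = 5
KMER_INDEX = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_freqs(seq):
    """Normalized 5-mer frequency vector of a contig."""
    counts = np.zeros(len(KMER_INDEX))
    for i in range(len(seq) - K + 1):
        idx = KMER_INDEX.get(seq[i:i + K])
        if idx is not None:
            counts[idx] += 1
    total = counts.sum()
    return counts / total if total else counts

# Simulated contigs: a GC-rich "target" and an AT-rich "contaminant"
rng = random.Random(0)
def fake_contig(gc, n=2000):
    return "".join(rng.choices("GCAT", weights=[gc, gc, 1 - gc, 1 - gc], k=n))

contigs = [fake_contig(0.70) for _ in range(10)] + [fake_contig(0.30) for _ in range(10)]

# PCA via SVD of the mean-centred frequency matrix
X = np.array([kmer_freqs(c) for c in contigs])
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ vt[0]

# In this toy setting the sign of PC1 separates the two compositional bins
labels = pc1 > 0
print(labels)
```

Real contigs differ far more subtly than a 70% vs 30% GC split, which is why the actual tool seeds the separation with homology-classified contigs before composition binning.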
26.

Background

Transmission patterns of sexually transmitted infections (STIs) may relate to the structure of the underlying sexual contact network, whose features are therefore of interest to clinicians. Conventionally, we represent sexual contacts in a population with a graph, which can reveal the existence of communities. Phylogenetic methods help infer the history of an epidemic and, incidentally, may help detect communities. In particular, phylogenetic analyses of HIV-1 epidemics among men who have sex with men (MSM) have revealed the existence of large transmission clusters, possibly resulting from within-community transmission. Past studies have explored the association between contact networks and phylogenies, including transmission clusters, producing conflicting conclusions about whether network features significantly affect the observed transmission history. As far as we know, however, none has thoroughly investigated the role of communities, defined with respect to the network graph, in the observation of clusters.

Methods

The present study investigates, through simulations, community detection from phylogenies. We simulate a large number of epidemics over both unweighted and weighted, undirected random interconnected-islands networks, with islands corresponding to communities. We use weighting to modulate the distance between islands. We translate each epidemic into a phylogeny, which lets us partition our samples of infected subjects into transmission clusters based on several common definitions from the literature. We measure the similarity between subjects' island membership indices and transmission cluster membership indices with the adjusted Rand index.

Results and Conclusion

Analyses reveal modest mean correspondence between communities in graphs and phylogenetic transmission clusters. We conclude that common methods often have limited success in detecting contact network communities from phylogenies. Their main drawback is the rarely fulfilled requirement that network communities correspond to clades in the phylogeny. Understanding the link between transmission clusters and communities in sexual contact networks could help inform policymaking to curb HIV incidence among MSM.
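The adjusted Rand index used above compares two partitions of the same subjects while correcting for chance agreement. A compact implementation of the standard pairwise-counting form (not the authors' code) looks like this:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """ARI between two partitions of the same items (1.0 = identical grouping)."""
    n = len(labels_a)
    sum_ij = sum(comb(c, 2) for c in Counter(zip(labels_a, labels_b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Identical partitions: label names differ, only the grouping matters
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))                 # 1.0
# Partial agreement, e.g. island membership vs transmission cluster membership
print(adjusted_rand_index([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2]))
```

Note that degenerate partitions where the maximum index equals its chance expectation make the denominator zero; library implementations such as scikit-learn's `adjusted_rand_score` handle those edge cases explicitly.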
27.

Introduction

Biomarkers indicating trait, progression and prediction of pathology and symptoms in Parkinson's disease (PD) often lack specificity or reliability. Investigating biomarker variance between individuals and over time, and the effect of confounding factors, is essential for the evaluation of biomarkers in PD, such as insulin-like growth factor 1 (IGF-1).

Materials and Methods

Serum IGF-1 levels were measured at up to eight biannual visits in 37 PD patients and 22 healthy controls (HC) in the longitudinal MODEP study. Baseline IGF-1 levels and annual changes in IGF-1 were compared between PD patients and HC while accounting for baseline disease duration (19 early-stage patients: ≤3.5 years; 18 moderate-stage patients: >4 years), age, sex, body mass index (BMI) and common medical factors putatively modulating IGF-1. In addition, associations of baseline IGF-1 with annual changes in motor, cognitive and depressive symptoms and in medication dose were investigated.

Results

PD patients in moderate stages (130±26 ng/mL; p = .004), but not early stages (115±19 ng/mL; p>.1), showed significantly increased baseline IGF-1 levels compared with HC (106±24 ng/mL; p = .017). Age was significantly negatively correlated with IGF-1 levels in HC (r = -.47, p = .028) but not in PD patients (r = -.06, p>.1). BMI was negatively correlated with IGF-1 in the overall group (r = -.28, p = .034). Annual changes in IGF-1 did not differ significantly between groups and were not correlated with disease duration. Baseline IGF-1 levels were not associated with annual changes in clinical parameters.

Discussion

Elevated serum IGF-1 might differentiate patients in moderate PD stages from HC. However, the value of serum IGF-1 as a trait, progression and prediction marker in PD is limited, as IGF-1 showed large inter- and intraindividual variability and may be modulated by several confounders.
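The group comparison and correlation analyses reported in this study can be sketched on simulated data. The values below only echo the reported group means and SDs; no real patient data are used, and the age/IGF-1 relationship is an invented illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated serum IGF-1 (ng/mL), echoing the reported group summaries
moderate_pd = rng.normal(130, 26, size=18)
hc = rng.normal(106, 24, size=22)

t, p = stats.ttest_ind(moderate_pd, hc, equal_var=False)   # Welch's t-test
print(f"Welch t = {t:.2f}, p = {p:.4f}")

# Simulated negative age/IGF-1 association in controls
age = rng.uniform(50, 80, size=22)
igf1_hc = 200.0 - 1.2 * age + rng.normal(0, 15, size=22)
r, p_r = stats.pearsonr(age, igf1_hc)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```

A Welch test is used here rather than the (unstated) test from the paper because the group SDs and sizes differ; with longitudinal data like MODEP's, mixed-effects models would be the more faithful analysis.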
30.
Troglobionts are organisms specialized for living in subterranean environments. These organisms reside predominantly in the deepest zones of caves and in shallow subterranean habitats, and complete their entire life cycles therein. Because troglobionts in most caves depend on organic matter from the surface, we hypothesized that they would also select the sections of caves nearest the surface, as long as environmental conditions were favorable. Over 1 year, we analyzed, at monthly intervals, the annual distributional dynamics of a subterranean community consisting of 17 troglobiont species in relation to multiple environmental factors. Cumulative standardized annual species richness and diversity clearly indicated the existence of two ecotones within the cave: one between soil and shallow subterranean habitats, inhabited by soil and shallow troglobionts; and one between the transition and inner cave zones, where the spatial niches of shallow and deep troglobionts overlap. The mean standardized annual species richness and diversity showed inverse relationships, but both contributed to a better insight into the dynamics of subterranean fauna. Regression analyses revealed that temperatures in the range 7-10°C, high substrate moisture content, a large cave cross-section, and high substrate pH were the most important ecological drivers governing the spatiotemporal dynamics of troglobionts. Overall, this study shows general trends in the annual distributional dynamics of troglobionts in shallow caves and reveals that the distribution patterns of troglobionts within subterranean habitats may be more complex than commonly assumed.
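The richness and diversity measures underlying such comparisons are straightforward to compute. A minimal sketch using the Shannon index and Pielou's evenness on invented monthly counts (the species counts below are not from the study):

```python
import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln p_i) over nonzero species counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def pielou_evenness(counts):
    """Evenness J' = H' / ln(S), where S is the species richness."""
    s = sum(1 for c in counts if c)
    return shannon_diversity(counts) / math.log(s)

# Hypothetical troglobiont counts for one cave section in one month
counts = [10, 5, 3, 1, 1]
print(f"richness S = {sum(1 for c in counts if c)}")
print(f"H' = {shannon_diversity(counts):.3f}, J' = {pielou_evenness(counts):.3f}")
```

Standardizing such indices across monthly samples, as the study does, lets sections with different sampling effort be compared on a common scale.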
Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号