81.
82.
Control of invasions is facilitated by their early detection, but this may be difficult when invasions are cryptic due to similarity between invaders and native species. Domesticated conspecifics offer an interesting example of cryptic invasions because they have the ability to hybridize with their native counterparts, and can thus facilitate the introgression of maladaptive genes. We assessed the cryptic invasion of escaped domestic American mink (Neovison vison) within their native range. Feral mink are a known alien invader in many parts of the world, but invasion of their native range is not well understood. We genetically profiled 233 captive domestic mink from different farms in Ontario, Canada, and 299 free-ranging mink from Ontario, and used assignment tests to ascertain the genetic ancestries of free-ranging animals. We found that 18% of free-ranging mink were either escaped domestic animals or hybrids, and a tree regression showed that these domestic genotypes were most likely to occur south of latitude 43.13°N, within the distribution of mink farms in Ontario. Thus, domestic mink appear not to have established populations in Ontario in locations without fur farms. We suspect that maladaptation of domestic mink and outbreeding depression of hybrid and introgressed mink have limited their spread. Mink farm density and proximity to mink farms were not important predictors of domestic genotypes; rather, certain mink farms appeared to be important sources of escaped domestic animals. Our results show that not all mink farms are equal with respect to biosecurity, and thus that the spread of domestic genotypes can be mitigated by improved biosecurity.
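The latitude threshold reported above is the kind of split a simple tree regression recovers. The sketch below is our illustration only, using simulated data and scikit-learn rather than the authors' analysis; the 43.13°N figure is reused purely as a plausible outcome.

```python
# Minimal sketch (hypothetical data, not the study's): a depth-1 decision tree on
# latitude recovers a single north/south split in the probability of domestic ancestry.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
lat = rng.uniform(42.0, 48.0, size=299)               # latitudes of free-ranging mink (simulated)
domestic = (lat < 43.13) & (rng.random(299) < 0.5)    # domestic/hybrid ancestry, more common in the south

tree = DecisionTreeClassifier(max_depth=1).fit(lat.reshape(-1, 1), domestic)
print("estimated split (°N):", round(tree.tree_.threshold[0], 2))
```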
83.
Next-generation sequencing (NGS) technologies have enabled high-throughput and low-cost generation of sequence data; however, de novo genome assembly remains a great challenge, particularly for large genomes. NGS short reads are often insufficient to create large contigs that span repeat sequences and to facilitate unambiguous assembly. Plant genomes are notorious for containing high quantities of repetitive elements, which, combined with huge genome sizes, has so far made accurate assembly of these large and complex genomes intractable. Using two-color genome mapping of tiling bacterial artificial chromosome (BAC) clones on nanochannel arrays, we completed high-confidence assembly of a 2.1-Mb, highly repetitive region in the large and complex genome of Aegilops tauschii, the D-genome donor of hexaploid wheat (Triticum aestivum). Genome mapping is based on direct visualization of sequence motifs on single DNA molecules hundreds of kilobases in length. With the genome map as a scaffold, we anchored unplaced sequence contigs, validated the initial draft assembly, and resolved instances of misassembly, some involving contigs <2 kb long, dramatically improving the assembly from 75% to 95% complete.
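The scaffolding step amounts to matching the spacing of labelled sequence motifs in a contig against the spacing observed on the genome map. The toy function below is only a sketch of that idea under strong simplifying assumptions (exact motif counts, a single tolerance, no missing labels); it is not the published pipeline, and the spacings are invented.

```python
# Toy anchoring: slide a contig's in-silico label intervals (kb) along the map's
# intervals and report start positions where every interval agrees within a tolerance.
def anchor_contig(map_intervals, contig_intervals, tol=0.10):
    hits = []
    m = len(contig_intervals)
    for start in range(len(map_intervals) - m + 1):
        window = map_intervals[start:start + m]
        if all(abs(w - c) <= tol * c for w, c in zip(window, contig_intervals)):
            hits.append(start)
    return hits

genome_map = [35.2, 12.8, 50.1, 22.0, 22.3, 48.7, 12.9, 35.0]   # hypothetical label spacings (kb)
contig = [22.1, 48.5, 13.0]                                      # hypothetical contig spacings (kb)
print(anchor_contig(genome_map, contig))                         # -> [4]: one unambiguous placement
```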
84.
Schools of fish and flocks of birds are examples of self-organized animal groups that arise through social interactions among individuals. We numerically study two individual-based models that recent empirical studies have suggested explain self-organized animal group behavior: (i) a zone-based model, where the group communication topology is determined by finite interaction zones of repulsion, attraction, and orientation among individuals; and (ii) a model where the communication topology is described by Delaunay triangulation, which is defined by each individual's Voronoi neighbors. The models include a tunable parameter that controls an individual's relative weighting of attraction and alignment. We perform computational experiments to investigate how effectively simulated groups transfer information in the form of velocity when an individual is perturbed. A cross-correlation function is used to measure the sensitivity of groups to sudden perturbations in the heading of individual members. The results show how the relative weighting of attraction and alignment, the location of the perturbed individual, population size, and the communication topology affect group structure and response to perturbation. We find that in the Delaunay-based model a perturbed individual is capable of triggering a cascade of responses, ultimately leading to the group changing direction. This phenomenon has been seen in self-organized animal groups in both experiments and nature.
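The Delaunay-based variant can be sketched in a few lines. The update rule below is a simplified illustration of ours (fixed speed, no repulsion zone, a single time step), not the paper's exact model; the weighting parameter w stands in for the tunable attraction/alignment trade-off.

```python
# One update step with a Delaunay (Voronoi-neighbor) communication topology:
# each individual steers toward its neighbours' centroid (attraction) and
# matches their average heading (alignment), weighted by w in [0, 1].
import numpy as np
from scipy.spatial import Delaunay

def delaunay_neighbors(points):
    indptr, indices = Delaunay(points).vertex_neighbor_vertices
    return [indices[indptr[i]:indptr[i + 1]] for i in range(len(points))]

def step(pos, vel, w=0.5, dt=0.1, speed=1.0):
    new_vel = np.empty_like(vel)
    for i, nbrs in enumerate(delaunay_neighbors(pos)):
        attract = np.mean(pos[nbrs], axis=0) - pos[i]         # toward neighbour centroid
        align = np.mean(vel[nbrs], axis=0)                    # average neighbour heading
        d = w * attract + (1.0 - w) * align
        new_vel[i] = speed * d / (np.linalg.norm(d) + 1e-12)  # fixed speed, new heading
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(1)
pos = rng.random((50, 2)) * 10.0
vel = rng.normal(size=(50, 2))
vel /= np.linalg.norm(vel, axis=1, keepdims=True)
pos, vel = step(pos, vel)   # a perturbation could be injected by overwriting one row of vel
```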
85.
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both these types of products are plagued by contamination. Since these genomes are now being generated in a high-throughput manner and sequences from them are propagating into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes (clean and contaminant) using a combination of homology- and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). The procedure operates successfully at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.

Recent technological advancements have enabled the large-scale sampling of genomes from uncultured microbial taxa, through the high-throughput sequencing of single amplified genomes (SAGs; Rinke et al., 2013; Swan et al., 2013) and assembly and binning of genomes from metagenomes (GMGs; Cuvelier et al., 2010; Sharon and Banfield, 2013). The importance of these products in assessing community structure and function has been established beyond doubt (Kalisky and Quake, 2011). Multiple Displacement Amplification (MDA) and sequencing of single cells have been immensely successful in capturing rare and novel phyla, generating valuable references for phylogenetic anchoring. However, efforts to conduct MDA and sequencing in a high-throughput manner have been heavily impaired by contamination from DNA introduced by the environmental sample, as well as introduced during the MDA or sequencing process (Woyke et al., 2011; Engel et al., 2014; Field et al., 2014). Similarly, metagenome binning and assembly often carry various errors and artifacts depending on the methods used (Nielsen et al., 2014). Even cultured isolate genomes have been shown to lack immunity to contamination with other species (Parks et al., 2014; Mukherjee et al., 2015). As sequencing of these genome product types rapidly increases, contaminant sequences are finding their way into public databases as reference sequences. It is therefore extremely important to define standardized and automated protocols for quality control and decontamination, which would go a long way towards establishing quality standards for all microbial genome product types.

Current procedures for decontamination and quality control of genome sequences in single cells and metagenome bins are heavily manual and can consume hours per megabase when performed by expert biologists. Supervised decontamination typically involves homology-based inspection of ribosomal RNA sequences and protein-coding genes, as well as visual analysis of k-mer frequency plots and guanine–cytosine content (Clingenpeel, 2015). Manual decontamination is also possible through the software SmashCell (Harrington et al., 2010), which contains a tool for visual identification of contaminants from a self-organizing map and corresponding U-matrix. Another existing software tool, DeconSeq (Schmieder and Edwards, 2011), automatically removes contaminant sequences; however, it requires contaminant databases as input.
The former lacks automation, whereas the latter requires prior knowledge of contaminants, rendering both applications impractical for high-throughput decontamination.

Here, we introduce ProDeGe, the first fully automated computational protocol for decontamination of genomes. ProDeGe uses a combination of homology-based and sequence composition-based approaches to separate contaminant sequences from the target genome draft. It has been pre-calibrated to discard at least 84% of the contaminant sequence, which results in retention of a median 84% of the target sequence. The standalone software is freely available at http://prodege.jgi-psf.org//downloads/src and can be run on any system that has Perl, R (R Core Team, 2014), Prodigal (Hyatt et al., 2010) and NCBI Blast (Camacho et al., 2009) installed. A graphical viewer allowing further exploration of data sets and exporting of contigs accompanies the web application for ProDeGe at http://prodege.jgi-psf.org, which is open to the wider scientific community as a decontamination service (Supplementary Figure S1).

The assembly and corresponding NCBI taxonomy of the data set to be decontaminated are required inputs to ProDeGe (Figure 1a). Contigs are annotated with genes, after which eukaryotic contamination is removed based on homology of genes at the nucleotide level, using the eukaryotic subset of NCBI's Nucleotide database as the reference. For detecting prokaryotic contamination, a curated database of reference contigs from the set of high-quality genomes within the Integrated Microbial Genomes (IMG; Markowitz et al., 2014) system is used as the reference. This ensures that errors in public reference databases due to poor quality of sequencing, assembly and annotation do not negatively impact the decontamination process. Contigs determined as belonging to the target organism based on nucleotide-level homology to sequences in the above database are defined as 'Clean', whereas those aligned to other organisms are defined as 'Contaminant'. Contigs whose origin cannot be determined based on alignment are classified as 'Undecided'. Classified clean and contaminated contigs are used to calibrate the separation in the subsequent 5-mer based binning module, which classifies undecided contigs as 'Clean' or 'Contaminant' using principal components analysis (PCA) of 5-mer frequencies. This parameter can also be specified by the user. When data sets do not have taxonomy deeper than phylum level, or a single confident taxonomic bin cannot be detected using sequence alignment, solely 9-mer based binning is used due to its more accurate overall classification. In the absence of a user-defined cutoff, a pre-calibrated cutoff for 80% or more specificity separates the clean contigs from contaminated sequences in the resulting PCA of the 9-mer frequency matrix. Details on ProDeGe's custom database, evaluation of the performance of the system, and exploration of the parameter space to calibrate ProDeGe for a highly accurate classification rate are provided in the Supplementary Material.

Figure 1. (a) Schematic overview of the ProDeGe engine. (b) Features of data sets used to validate ProDeGe: SAGs from the Arabidopsis endophyte sequencing project, the MDM project, public data sets found in IMG but not sequenced at the JGI, as well as genomes from metagenomes.
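The composition-based step lends itself to a compact sketch. The code below is a schematic re-implementation of the idea described above (k-mer frequency vectors plus PCA), not ProDeGe's actual code; the calibration against homology-classified contigs and the distance cutoff are deliberately left out.

```python
# Schematic composition step: contigs become normalised 5-mer frequency vectors and
# are projected with PCA; contigs already classified by homology would then calibrate
# where the 'Clean' cluster lies, and a cutoff would assign the 'Undecided' contigs.
from itertools import product
import numpy as np
from sklearn.decomposition import PCA

K = 5
KMER_INDEX = {''.join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_profile(seq):
    """Normalised K-mer frequency vector for one contig sequence."""
    v = np.zeros(len(KMER_INDEX))
    for i in range(len(seq) - K + 1):
        j = KMER_INDEX.get(seq[i:i + K])
        if j is not None:            # skip k-mers containing ambiguous bases
            v[j] += 1
    return v / max(v.sum(), 1.0)

def project_contigs(contig_seqs, n_components=3):
    """Project contigs into the first few principal components of k-mer space."""
    X = np.array([kmer_profile(s) for s in contig_seqs])
    return PCA(n_components=n_components).fit_transform(X)
```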
All the data and results can be found in Supplementary Table S3.

The performance of ProDeGe was evaluated using 182 manually screened SAGs (Figure 1b, Supplementary Table S1) from two studies whose data sets are publicly available within the IMG system: 107 SAGs from an Arabidopsis endophyte sequencing project and 75 SAGs from the Microbial Dark Matter (MDM) project (only 75 of 201 SAGs from the MDM project had a 1:1 mapping between contigs in the unscreened and the manually screened versions, hence these were used; Rinke et al., 2013). Manual curation of these SAGs demonstrated that the use of ProDeGe prevented 5311 potentially contaminated contigs in these data sets from entering public databases. Figure 2a shows sensitivity versus specificity for ProDeGe results on the above data sets. Most of the data points in Figure 2a cluster in the top right of the plot, reflecting a median retention of 89% of the clean sequence (sensitivity) and a median rejection of 100% of the sequence of contaminant origin (specificity). In addition, on average, 84% of the bases of a data set are accurately classified. ProDeGe performs best when the target organism has sequenced homologs at the class level or deeper in its high-quality prokaryotic nucleotide reference database. If the target organism's taxonomy is unknown or not deeper than domain level, or there are few contigs with taxonomic assignments, a target bin cannot be assessed and ProDeGe therefore removes contaminant contigs using sequence composition only. The few samples in Figure 2a that show a higher rate of false positives (lower specificity) and/or reduced sensitivity typically occur when the data set contains few contaminant contigs or ProDeGe incorrectly assumes that the largest bin is the target bin. Some data sets contain a higher proportion of contamination than target sequence, and ProDeGe's performance can suffer under this condition. However, under all other conditions, ProDeGe demonstrates high speed, specificity and sensitivity (Figure 2). In addition, ProDeGe demonstrates better performance in overall classification when nucleotides are considered than when contigs are considered, illustrating that longer contigs are more accurately classified (Supplementary Table S1).

Figure 2. ProDeGe accuracy and performance scatterplots of 182 manually curated single amplified genomes (SAGs), where each symbol represents one SAG data set. (a) Accuracy shown by sensitivity (proportion of bases confirmed 'Clean') vs specificity (proportion of bases confirmed 'Contaminant') for the Endophyte and Microbial Dark Matter (MDM) data sets. Symbol size reflects input data set size in megabases. Most points cluster in the top right of the plot, showing ProDeGe's high accuracy. Median and average overall results are shown in Supplementary Table S1. (b) ProDeGe completion time in central processing unit (CPU) core hours for the 182 SAGs. ProDeGe operates successfully at an average rate of 0.30 CPU core hours per megabase of sequence. Principal components analysis (PCA) of a 9-mer frequency matrix costs more computationally than PCA of a 5-mer frequency matrix used with blast-binning. The lack of known taxonomy for the MDM data sets prevents blast-binning, hence longer finishing times than for the endophyte data sets, which have known taxonomy for use in blast-binning.
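For reference, the sensitivity and specificity figures quoted above and in Figure 2 reduce to base-pair-weighted proportions. The snippet below is our paraphrase of those definitions for labelled contigs, not ProDeGe code.

```python
# sensitivity = clean bases retained / total clean bases
# specificity = contaminant bases removed / total contaminant bases
def sensitivity_specificity(contigs):
    """contigs: list of (length_bp, truth, call) with truth/call in {'clean', 'contaminant'}."""
    clean = sum(l for l, t, _ in contigs if t == "clean")
    contam = sum(l for l, t, _ in contigs if t == "contaminant")
    kept_clean = sum(l for l, t, c in contigs if t == "clean" and c == "clean")
    removed_contam = sum(l for l, t, c in contigs if t == "contaminant" and c == "contaminant")
    return kept_clean / clean, removed_contam / contam

# e.g. sensitivity_specificity([(50_000, "clean", "clean"), (8_000, "contaminant", "contaminant")])
```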
All SAGs used in the evaluation of ProDeGe were assembled using SPAdes (Bankevich et al., 2012). In-house testing has shown that reads assembled with SPAdes from different strains, or even slightly divergent species of the same genus, may be combined into the same contig (personal communications, KT and Robert Bowers). Ideally, the DNA in a well that gets sequenced belongs to a single cell. In the best case, contaminant sequences need to be at least from a different species to be recognized as such by the homology-based screening stage. In the absence of closely related sequenced organisms, contaminant sequences need to be at least from a different genus to be recognized as such by the composition-based screening stage (Supplementary Material). Thus, there is little risk of ProDeGe separating sequences from clonal populations or strains. We have found species- and genus-level contamination in MDA samples to be rare.

To evaluate the quality of publicly available uncultured genomes, ProDeGe was used to screen 185 SAGs and 14 GMGs (Figure 1b). Compared with CheckM (Parks et al., 2014), a tool that estimates genome sequence contamination using marker genes, ProDeGe generally marks a higher proportion of sequence as 'Contaminant' (Supplementary Table S2). This is because ProDeGe has been calibrated to perform at high specificity levels. The command-line version of ProDeGe allows users to conduct their own calibration and specify a user-defined distance cutoff. Further, CheckM only outputs the proportion of contamination, whereas ProDeGe labels each contig as 'Clean' or 'Contaminant' during the process of automated removal.

The web application for ProDeGe allows users to export clean and contaminant contigs, examine contig gene calls with their corresponding taxonomies, and discover contig clusters in the first three components of their k-dimensional space. Non-linear approaches for dimensionality reduction of k-mer vectors are gaining popularity (van der Maaten and Hinton, 2008), but we observed no systematic advantage of t-Distributed Stochastic Neighbor Embedding over PCA (Supplementary Figure S2).

ProDeGe is the first step towards establishing a standard for quality control of genomes from both cultured and uncultured microorganisms. It is valuable for preventing the dissemination of contaminated sequence data into public databases, thereby avoiding misleading analyses. The fully automated nature of the pipeline relieves scientists of hours of manual screening, producing reliably clean data sets and enabling the high-throughput screening of data sets for the first time. ProDeGe therefore represents a critical component in our toolkit during an era of next-generation DNA sequencing and cultivation-independent microbial genomics.
86.
87.

Objective

To assess the cost-effectiveness of six treatment strategies for patients diagnosed with recurrent Clostridium difficile infection (CDI) in Canada: 1. oral metronidazole; 2. oral vancomycin; 3. oral fidaxomicin; 4. fecal transplantation by enema; 5. fecal transplantation by nasogastric tube; and 6. fecal transplantation by colonoscopy.

Perspective

Public insurer for all hospital and physician services.

Setting

Ontario, Canada.

Methods

A decision analytic model was used to estimate the costs and lifetime health effects of each strategy for a typical patient experiencing up to three recurrences over 18 weeks. Recurrence data and utilities were obtained from published sources. Cost data were obtained from published sources and hospitals in Toronto, Canada. The willingness-to-pay threshold was $50,000/QALY gained.

Results

Fecal transplantation by colonoscopy dominated all other strategies in the base case, as it was less costly and more effective than all alternatives. After accounting for uncertainty in all model parameters, there was an 87% probability that fecal transplantation by colonoscopy was the most beneficial strategy. If colonoscopy was not available, fecal transplantation by enema was cost-effective at $1,708 per QALY gained, compared to metronidazole. In addition, fecal transplantation by enema was the preferred strategy if the probability of recurrence following this strategy was below 8.7%. If fecal transplantation by any means was unavailable, fidaxomicin was cost-effective at an additional cost of $25,968 per QALY gained, compared to metronidazole.
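The dominance and ICER logic behind these comparisons can be made concrete with a small sketch. The costs and QALYs below are invented for illustration (chosen only to mimic the qualitative pattern reported above), not the study's inputs.

```python
# A strategy is dominated if another is both cheaper and more effective; otherwise the
# incremental cost-effectiveness ratio (ICER) is compared with the willingness-to-pay
# threshold of $50,000/QALY. All expected costs and QALYs here are hypothetical.
strategies = {                          # name: (expected cost $, expected QALYs)
    "metronidazole": (1800, 0.700),
    "FMT colonoscopy": (1500, 0.735),
    "fidaxomicin": (2580, 0.730),
}

def compare(a, b, wtp=50_000):
    (ca, ea), (cb, eb) = strategies[a], strategies[b]
    if ca <= cb and ea >= eb:
        return f"{a} dominates {b}"
    icer = (ca - cb) / (ea - eb)
    verdict = "cost-effective" if icer <= wtp else "not cost-effective"
    return f"ICER of {a} vs {b}: ${icer:,.0f}/QALY ({verdict} at ${wtp:,}/QALY)"

print(compare("FMT colonoscopy", "metronidazole"))   # dominance, as in the base case
print(compare("fidaxomicin", "metronidazole"))       # ~$26,000/QALY with these made-up inputs
```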

Conclusion

Fecal transplantation by colonoscopy (or enema, if colonoscopy is unavailable) is cost-effective for treating recurrent CDI in Canada. Where fecal transplantation is not available, fidaxomicin is also cost-effective.
88.
Wildfire greatly impacts the composition and quantity of organic carbon stocks within watersheds. Most methods used to measure the contributions of fire-altered organic carbon (i.e., pyrogenic organic carbon, Py-OC) in natural samples are designed to quantify specific fractions such as black carbon or polyaromatic hydrocarbons. In contrast, the CuO oxidation procedure yields a variety of products derived from a variety of precursors, including both unaltered and thermally altered sources. Here, we test whether the benzene carboxylic acid and hydroxybenzoic acid (BCA) products obtained by CuO oxidation provide a robust indicator of Py-OC and compare them to non-Py-OC biomarkers of lignin. O and A horizons from microcosms were burned in the laboratory at varying levels of fire severity and subsequently incubated for 6 months. All soils were analyzed for total OC and N and by CuO oxidation. All BCAs appeared to be preserved or created to some degree during burning, while lignin phenols appeared to be altered or destroyed to extents that varied with fire severity. We found two specific CuO oxidation products, o-hydroxybenzoic acid (oBd) and 1,2,4-benzenetricarboxylic acid (BTC2), that responded strongly to burn severity and withstood degradation during post-burning microbial incubations. Interestingly, we found that benzene di- and tricarboxylic acids (BDC and BTC, respectively) were much more reactive than vanillyl phenols during the incubation, possibly because vanillyl phenols are physically protected in the interior of char particles or because CuO oxidation-derived BCAs originate from biologically available classes of Py-OC. We found that the ability of these compounds to predict relative Py-OC content in burned samples improved when they were normalized by their respective BCA class (i.e., benzene monocarboxylic acids (BA) and BTC, respectively) and when BTC was normalized to total lignin yields (BTC:Lig). The major trends in BCAs imparted by burning persisted through a 6-month incubation, suggesting that fire severity had first-order control on BCA and lignin composition. Using original and published BCA data from soils, sediments, char, and interfering compounds, we found that BTC:Lig and BTC2:BTC were able to distinguish Py-OC from compounds such as humic materials and tannins. The BCAs released by the CuO oxidation procedure increase the functionality of this method for examining the relative contribution of Py-OC in geochemical samples.
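For clarity, the two indicator ratios reduce to simple quotients of CuO oxidation product yields. The sketch below uses invented yields purely to show the computation, not data from this study.

```python
# Indicator ratios discussed above, computed from CuO oxidation product yields
# (arbitrary units; the example numbers are hypothetical).
def pyoc_indicators(btc_total, btc2, lignin_total):
    """BTC:Lig and BTC2:BTC ratios used as relative indicators of pyrogenic OC."""
    return {
        "BTC:Lig": btc_total / lignin_total,
        "BTC2:BTC": btc2 / btc_total,
    }

print(pyoc_indicators(btc_total=0.8, btc2=0.3, lignin_total=2.5))   # unburned-like (hypothetical)
print(pyoc_indicators(btc_total=2.4, btc2=1.5, lignin_total=0.9))   # severely burned-like (hypothetical)
```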
89.
MHAA4549A is a human immunoglobulin G1 (IgG1) monoclonal antibody that binds to a highly conserved epitope on the stalk of influenza A hemagglutinin and blocks hemagglutinin-mediated membrane fusion in the endosome, neutralizing all known human influenza A strains. The pharmacokinetics (PK) of MHAA4549A and its related antibodies were determined in DBA/2J and BALB/c mice at 5 mg/kg and in cynomolgus monkeys at 5 and 100 mg/kg as a single intravenous dose. Serum samples were analyzed for antibody concentrations using an ELISA, and the PK was evaluated using WinNonlin software. Human PK profiles were projected from the PK in monkeys using the species-invariant time method. The human efficacious dose projection was based on in vivo nonclinical pharmacologically active doses, exposure in mouse infection models, and the expected human PK. The PK profiles of MHAA4549A and its related antibody showed a linear bi-exponential disposition in mice and cynomolgus monkeys. In mice, clearance and half-life ranged from 5.77 to 9.98 mL/day/kg and 10.2 to 5.76 days, respectively. In cynomolgus monkeys, clearance and half-life ranged from 4.33 to 4.34 mL/day/kg and 11.3 to 11.9 days, respectively. The predicted clearance in humans was ~2.60 mL/day/kg. A single intravenous dose ranging from 15 to 45 mg/kg was predicted to achieve efficacious exposure in humans. In conclusion, the PK of MHAA4549A was as expected for a human IgG1 monoclonal antibody that lacks known endogenous host targets. The predicted clearance and projected efficacious doses in humans for MHAA4549A have been verified in a Phase 1 study and a Phase 2a study, respectively.
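The species-invariant time (Dedrick) projection mentioned above can be sketched as follows. The body weights, the allometric exponent b, and the example clearance input are illustrative assumptions on our part, not the study's parameters; with these inputs the projection lands near 2.0 mL/day/kg, in the same range as the reported ~2.60 mL/day/kg but not a reproduction of it.

```python
# Species-invariant time sketch: assuming absolute clearance scales as body weight^b,
# per-kg clearance scales as W^(b-1) and, for the same mg/kg dose, the monkey
# concentration-time profile stretches along the time axis by (W_human/W_monkey)^(1-b).
import numpy as np

def project_human_profile(t_monkey_days, c_monkey, w_monkey=3.2, w_human=70.0, b=0.75):
    """Same mg/kg dose assumed: concentrations carry over, time stretches by (Wh/Wm)^(1-b)."""
    t_human = np.asarray(t_monkey_days) * (w_human / w_monkey) ** (1.0 - b)
    return t_human, np.asarray(c_monkey)

def project_human_cl_per_kg(cl_monkey_ml_day_kg, w_monkey=3.2, w_human=70.0, b=0.75):
    """Per-kg clearance scales as W^(b-1) when absolute clearance scales as W^b."""
    return cl_monkey_ml_day_kg * (w_human / w_monkey) ** (b - 1.0)

print(project_human_cl_per_kg(4.33))   # ~2.0 mL/day/kg with these assumed weights and exponent
```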
90.