861.
We evaluated a neural network model for prediction of glucose in critically ill trauma and post-operative cardiothoracic surgical patients. A prospective feasibility trial evaluating a continuous glucose-monitoring device was performed. After institutional review board approval, clinical data from all consenting surgical intensive care unit patients were converted to an electronic format using novel software. These data were used to develop and train a neural network model for real-time prediction of serum glucose concentration with a prediction horizon of 75 minutes. Glycemic data from 19 patients were used to “train” the neural network model. Subsequent real-time simulated testing was performed in 5 patients to whom the neural network model was naive. Performance of the model was evaluated by calculating the mean absolute difference percent (MAD%), by Clarke Error Grid Analysis, and by calculating the percentage of hypoglycemic (≤70 mg/dL), normoglycemic (>70 and <150 mg/dL), and hyperglycemic (≥150 mg/dL) values accurately predicted by the model; 9,405 data points were analyzed. The model successfully predicted trends in glucose in the 5 test patients. Clarke Error Grid Analysis indicated that 100.0% of predictions were clinically acceptable, with 87.3% and 12.7% of predicted values falling within regions A and B of the error grid, respectively. Overall model error (MAD%) was 9.0% with respect to the actual continuous glucose monitoring data. Our model successfully predicted 96.7% and 53.6% of the normo- and hyperglycemic values, respectively. No hypoglycemic events occurred in these patients. Use of neural network models for real-time prediction of glucose in the surgical intensive care unit offers healthcare providers potentially useful information that could facilitate optimization of glycemic control, patient safety, and care. Similar models can be implemented across a wider range of biomedical variables to offer real-time optimization, training, and adaptation that increase the predictive accuracy and performance of therapies.
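The reported metrics are straightforward to reproduce. Below is a minimal Python sketch of MAD% and the per-category prediction accuracy described above, assuming paired arrays of predicted and actual glucose values in mg/dL; the function names and toy data are ours, not the authors'.

```python
import numpy as np

def mad_percent(predicted, actual):
    """Mean absolute difference percent between paired glucose readings."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - actual) / actual)

def category(values):
    """Label values hypo- (<=70), normo- (>70 and <150) or hyperglycemic (>=150 mg/dL)."""
    v = np.asarray(values, dtype=float)
    return np.select([v <= 70, v >= 150], ["hypo", "hyper"], default="normo")

def category_accuracy(predicted, actual, label):
    """Fraction of actual values in `label` whose predictions fall in the same category."""
    mask = category(actual) == label
    if not mask.any():
        return float("nan")  # e.g., no hypoglycemic events occurred in the test patients
    return float(np.mean(category(predicted)[mask] == label))

# Hypothetical paired readings (mg/dL), for illustration only
actual = np.array([92, 148, 160, 175, 110, 64])
predicted = np.array([98, 139, 151, 181, 117, 73])
print(f"MAD% = {mad_percent(predicted, actual):.1f}")
print(f"hyperglycemic accuracy = {category_accuracy(predicted, actual, 'hyper'):.2f}")
```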
862.

Background

The critically ill can have persistent dysglycemia during the “subacute” recovery phase of their illness because of altered gene expression; it is also not uncommon for these patients to receive continuous enteral nutrition during this time. The optimal short-acting subcutaneous insulin therapy that should be used in this clinical scenario, however, is unknown. Our aim was to conduct a qualitative numerical study of the glucose-insulin dynamics within this patient population to answer the above question. This analysis may help clinicians design a relevant clinical trial.

Methods

Eight virtual patients with stress hyperglycemia were simulated by means of a mathematical model. Each virtual patient had a different combination of insulin resistance and insulin deficiency that defined their unique stress hyperglycemia state; the rate of gluconeogenesis was also doubled. The patients received 25 injections of subcutaneous regular or Lispro insulin (0–6 U) under three rates of continuous nutrition. The main outcome measurements were the change in mean glucose concentration, the change in glucose variability, and hypoglycemic episodes. These end points were interpreted in terms of how each insulin preparation affected the ultradian oscillations of glucose concentration.

Results

Subcutaneous regular insulin lowered both mean glucose concentrations and glucose variability in a linear fashion. No hypoglycemic episodes were noted. Although subcutaneous Lispro insulin lowered mean glucose concentrations, glucose variability increased in a nonlinear fashion. In patients with high insulin resistance and nutrition at goal, “rebound hyperglycemia” was noted after the insulin analog was rapidly metabolized. When the nutritional source was removed, hypoglycemia tended to occur at higher Lispro insulin doses. Finally, patients with severe insulin resistance seemed the most sensitive to insulin concentration changes.

Conclusions

Subcutaneous regular insulin consistently lowered mean glucose concentrations and glucose variability; its linear dose-response curve rendered the preparation better suited for a sliding-scale protocol. The longer duration of action of subcutaneous regular insulin resulted in better glycemic-control metrics for patients who were continuously postprandial. Clinical trials are needed to examine whether these numerical results represent the glucose-insulin dynamics that occur in intensive care units; if present, their clinical effects should be evaluated.
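To make the Methods concrete, here is a hedged numerical sketch of the kind of simulation described: a generic one-compartment subcutaneous insulin depot coupled to a minimal glucose balance with stress-elevated endogenous production and continuous nutrition. This is not the authors' model; every parameter value is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only (assumptions, not the study's calibrated values)
P = dict(
    k_abs=0.01,   # 1/min, absorption from the subcutaneous depot (regular-like)
    k_e=0.1,      # 1/min, plasma insulin clearance
    s_i=1e-3,     # 1/min per (mU/L), insulin sensitivity; lower = more resistant
    egp=1.2,      # mg/dL/min, endogenous glucose production (stress-doubled)
    ra=0.8,       # mg/dL/min, glucose appearance from continuous enteral nutrition
    v_i=12.0,     # L, insulin distribution volume
)

def rhs(t, y, p):
    depot, ins, glc = y  # depot insulin (mU), plasma insulin (mU/L), glucose (mg/dL)
    d_depot = -p["k_abs"] * depot
    d_ins = p["k_abs"] * depot / p["v_i"] - p["k_e"] * ins
    d_glc = p["egp"] + p["ra"] - p["s_i"] * ins * glc
    return [d_depot, d_ins, d_glc]

# 4 U (4000 mU) subcutaneous bolus at t=0, nutrition running, 6-hour horizon
sol = solve_ivp(rhs, (0.0, 360.0), [4000.0, 5.0, 200.0], args=(P,), max_step=1.0)
print(f"nadir {sol.y[2].min():.0f} mg/dL, at 6 h {sol.y[2, -1]:.0f} mg/dL")
```

In this toy model, a larger `k_abs` (a rapidly absorbed, Lispro-like depot) produces a deeper nadir followed by a drift back upward once the depot empties, qualitatively echoing the "rebound hyperglycemia" noted in the Results.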
863.
Targeted therapies for mutant BRAF metastatic melanoma are effective but not curative, owing to the acquisition of resistance. PI3K signaling is a common mediator of therapy resistance in melanoma; thus, the need for effective PI3K inhibitors is critical. However, testing PI3K inhibitors in adherent cultures does not always reflect their potential in vivo. To emphasize this, we compared PI3K inhibitors of different specificity in two‐ and three‐dimensional (2D, 3D) melanoma models and show that drug response predictions benefit from evaluation in 3D models. Our results in 3D demonstrate the anti‐invasive potential of PI3K inhibitors and show that drugs such as PX‐866 have beneficial activity in physiological models, both alone and combined with BRAF inhibition. These assays also help highlight pathway effectors that could be involved in drug response in different environments (e.g. p4E‐BP1). Our findings show the advantages of 3D melanoma models for enhancing our understanding of PI3K inhibitors.
864.
Perennial, cellulosic bioenergy crops represent a risky investment. The potential for adoption of these crops depends not only on mean net returns, but also on the associated probability distributions and on the risk preferences of farmers. Using 6‐year observed crop yield data from highly productive and marginally productive sites in the southern Great Lakes region and assuming risk neutrality, we calculate expected breakeven biomass yields and prices, with corn (Zea mays L.) as a benchmark. Next, we develop Monte Carlo budget simulations based on stochastic crop prices and yields. The crop yield simulations decompose yield risk into three components: crop establishment survival, time to maturity, and mature yield variability. Results reveal that corn with harvest of grain and 38% of stover (as cellulosic bioenergy feedstock) is both the most profitable and the least risky investment option. It dominates all perennial systems considered across a wide range of farmer risk preferences. Although not currently attractive for profit‐oriented farmers who are risk neutral or risk averse, perennial bioenergy crops have a higher potential to compete successfully with corn under marginal crop production conditions.
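A Monte Carlo budget simulation of the kind described can be sketched in a few lines. The distributions and numbers below are illustrative assumptions, not the study's calibrated values; the sketch only shows how yield risk decomposes into establishment survival, time to maturity, and mature-yield variability.

```python
import numpy as np

rng = np.random.default_rng(0)
N_DRAWS, YEARS = 10_000, 6

def perennial_net_returns(rng):
    # Yield risk decomposed into three components, as in the study's design
    survives = rng.random(N_DRAWS) < 0.90                   # establishment survival
    t_mature = rng.integers(2, 4, N_DRAWS)                  # years to reach maturity
    mature_yield = rng.normal(10.0, 2.0, (N_DRAWS, YEARS))  # Mg/ha, mature-yield variability
    price = rng.normal(60.0, 10.0, (N_DRAWS, YEARS))        # $/Mg, stochastic biomass price
    ramp = np.clip(np.arange(YEARS) / t_mature[:, None], 0.0, 1.0)  # immature years yield less
    revenue = (mature_yield * ramp * price).mean(axis=1)    # $/ha/yr over the 6-year horizon
    cost = 450.0                                            # $/ha/yr, annualized (assumed)
    return np.where(survives, revenue - cost, -cost)

returns = perennial_net_returns(rng)
print(f"mean net return: {returns.mean():.0f} $/ha/yr; P(loss) = {(returns < 0).mean():.2f}")
```

Comparing such a distribution against the corn benchmark, rather than comparing means alone, is what allows risk preferences to enter the adoption decision.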
865.
Melatonin is a natural mammalian hormone that plays an important role in regulating the circadian cycle in humans. It is a clinically effective drug exhibiting positive effects as a sleep aid, and a powerful antioxidant used as a dietary supplement. Commercial melatonin production is predominantly performed by complex chemical synthesis. In this study, we demonstrate microbial production of melatonin and related compounds, such as serotonin and N‐acetylserotonin. We generated Saccharomyces cerevisiae strains that comprise heterologous genes encoding one or more variants of an L‐tryptophan hydroxylase, a 5‐hydroxy‐L‐tryptophan decarboxylase, a serotonin acetyltransferase, an acetylserotonin O‐methyltransferase, and means for providing the cofactor tetrahydrobiopterin via heterologous biosynthesis and recycling pathways. We thereby achieved de novo melatonin biosynthesis from glucose. We furthermore increased product titers by altering the expression levels of selected pathway enzymes and boosting cofactor supply. The final yeast strain produced melatonin at a titer of 14.50 ± 0.57 mg/L in a 76-h fermentation using simulated fed-batch medium with glucose as the sole carbon source. Our study lays the basis for further development of a yeast cell factory for biological production of melatonin.
866.
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both these types of products are plagued by contamination. Since these genomes are now being generated in a high-throughput manner and sequences from them are propagating into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes, clean and contaminant, using a combination of homology- and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). The procedure operates at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.

Recent technological advancements have enabled the large-scale sampling of genomes from uncultured microbial taxa, through the high-throughput sequencing of single amplified genomes (SAGs; Rinke et al., 2013; Swan et al., 2013) and the assembly and binning of genomes from metagenomes (GMGs; Cuvelier et al., 2010; Sharon and Banfield, 2013). The importance of these products in assessing community structure and function has been established beyond doubt (Kalisky and Quake, 2011). Multiple Displacement Amplification (MDA) and sequencing of single cells has been immensely successful in capturing rare and novel phyla, generating valuable references for phylogenetic anchoring. However, efforts to conduct MDA and sequencing in a high-throughput manner have been heavily impaired by contamination from DNA introduced by the environmental sample, as well as DNA introduced during the MDA or sequencing process (Woyke et al., 2011; Engel et al., 2014; Field et al., 2014). Similarly, metagenome binning and assembly often carry various errors and artifacts depending on the methods used (Nielsen et al., 2014). Even cultured isolate genomes have been shown to lack immunity to contamination with other species (Parks et al., 2014; Mukherjee et al., 2015). As sequencing of these genome product types rapidly increases, contaminant sequences are finding their way into public databases as reference sequences. It is therefore extremely important to define standardized and automated protocols for quality control and decontamination, which would go a long way towards establishing quality standards for all microbial genome product types.

Current procedures for decontamination and quality control of genome sequences in single cells and metagenome bins are heavily manual and can consume hours per megabase when performed by expert biologists. Supervised decontamination typically involves homology-based inspection of ribosomal RNA sequences and protein-coding genes, as well as visual analysis of k-mer frequency plots and guanine–cytosine content (Clingenpeel, 2015). Manual decontamination is also possible through the software SmashCell (Harrington et al., 2010), which contains a tool for visual identification of contaminants from a self-organizing map and corresponding U-matrix. Another existing tool, DeconSeq (Schmieder and Edwards, 2011), automatically removes contaminant sequences, but it requires contaminant databases as input.
The former lacks automation, whereas the latter requires prior knowledge of the contaminants, rendering both applications impractical for high-throughput decontamination.

Here, we introduce ProDeGe, the first fully automated computational protocol for decontamination of genomes. ProDeGe uses a combination of homology-based and sequence-composition-based approaches to separate contaminant sequences from the target genome draft. It has been pre-calibrated to discard at least 84% of the contaminant sequence, which results in retention of a median 84% of the target sequence. The standalone software is freely available at http://prodege.jgi-psf.org//downloads/src and can be run on any system that has Perl, R (R Core Team, 2014), Prodigal (Hyatt et al., 2010) and NCBI Blast (Camacho et al., 2009) installed. A graphical viewer allowing further exploration of data sets and exporting of contigs accompanies the web application for ProDeGe at http://prodege.jgi-psf.org, which is open to the wider scientific community as a decontamination service (Supplementary Figure S1).

The assembly and the corresponding NCBI taxonomy of the data set to be decontaminated are the required inputs to ProDeGe (Figure 1a). Contigs are first annotated with genes, after which eukaryotic contamination is removed based on homology of genes at the nucleotide level, using the eukaryotic subset of NCBI's Nucleotide database as the reference. For detecting prokaryotic contamination, a curated database of reference contigs from the set of high-quality genomes within the Integrated Microbial Genomes (IMG; Markowitz et al., 2014) system is used as the reference. This ensures that errors in public reference databases due to poor-quality sequencing, assembly and annotation do not negatively impact the decontamination process. Contigs determined to belong to the target organism based on nucleotide-level homology to sequences in the above database are defined as 'Clean', whereas those aligned to other organisms are defined as 'Contaminant'. Contigs whose origin cannot be determined by alignment are classified as 'Undecided'. The classified clean and contaminant contigs are used to calibrate the separation in the subsequent 5-mer-based binning module, which classifies undecided contigs as 'Clean' or 'Contaminant' using principal components analysis (PCA) of 5-mer frequencies. This parameter can also be specified by the user. When a data set has no taxonomy deeper than phylum level, or a single confident taxonomic bin cannot be detected using sequence alignment, 9-mer-based binning alone is used, owing to its more accurate overall classification. In the absence of a user-defined cutoff, a pre-calibrated cutoff for 80% or greater specificity separates the clean contigs from contaminant sequences in the resulting PCA of the 9-mer frequency matrix. Details on ProDeGe's custom database, the evaluation of the performance of the system, and the exploration of the parameter space used to calibrate ProDeGe for a highly accurate classification rate are provided in the Supplementary Material.

[Figure 1. (a) Schematic overview of the ProDeGe engine. (b) Features of the data sets used to validate ProDeGe: SAGs from the Arabidopsis endophyte sequencing project, the MDM project, public data sets found in IMG but not sequenced at the JGI, as well as genomes from metagenomes.]
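The composition-based stage lends itself to a compact illustration. The sketch below embeds contigs by 5-mer frequency and projects them with PCA; it is a from-scratch toy version of the general technique, not ProDeGe's implementation, and the contig data are fabricated.

```python
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=5)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_profile(seq, k=5):
    """Normalized 5-mer frequency vector of one contig."""
    counts = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        j = INDEX.get(seq[i:i + k].upper())
        if j is not None:            # skip k-mers containing N or other ambiguity codes
            counts[j] += 1
    total = counts.sum()
    return counts / total if total else counts

def pca_embed(profiles, n_components=3):
    """Project k-mer profiles onto their leading principal components via SVD."""
    x = np.asarray(profiles)
    x = x - x.mean(axis=0)           # center before PCA
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

contigs = ["ACGT" * 300, "ACGG" * 300, "TTAA" * 300]  # toy contigs
coords = pca_embed([kmer_profile(c) for c in contigs])
print(coords.shape)  # (3, 3): one 3-D coordinate per contig
```

In a real pipeline, contigs already labeled by homology would calibrate a cutoff in this space, and 'Undecided' contigs falling on the contaminant side of it would be discarded.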
All the data and results can be found in Supplementary Table S3.

The performance of ProDeGe was evaluated using 182 manually screened SAGs (Figure 1b, Supplementary Table S1) from two studies whose data sets are publicly available within the IMG system: 107 SAGs from an Arabidopsis endophyte sequencing project and 75 SAGs from the Microbial Dark Matter (MDM) project (only 75 of the 201 MDM SAGs had a 1:1 mapping between contigs in the unscreened and the manually screened versions, hence these were used; Rinke et al., 2013). Manual curation of these SAGs demonstrated that the use of ProDeGe prevented 5,311 potentially contaminated contigs in these data sets from entering public databases. Figure 2a shows the sensitivity-versus-specificity plot of the ProDeGe results for these data sets. Most of the data points in Figure 2a cluster in the top right of the box, reflecting a median retention of 89% of the clean sequence (sensitivity) and a median rejection of 100% of the sequence of contaminant origin (specificity). In addition, on average, 84% of the bases of a data set are accurately classified. ProDeGe performs best when the target organism has sequenced homologs at the class level or deeper in its high-quality prokaryotic nucleotide reference database. If the target organism's taxonomy is unknown or no deeper than domain level, or there are few contigs with taxonomic assignments, a target bin cannot be assessed, and ProDeGe therefore removes contaminant contigs using sequence composition only. The few samples in Figure 2a that show a higher rate of false positives (lower specificity) and/or reduced sensitivity typically occur when the data set contains few contaminant contigs or ProDeGe incorrectly assumes that the largest bin is the target bin. Some data sets contain a higher proportion of contamination than target sequence, and ProDeGe's performance can suffer under this condition. Under all other conditions, however, ProDeGe demonstrates high speed, specificity and sensitivity (Figure 2). In addition, ProDeGe classifies more accurately when performance is measured over nucleotides rather than contigs, illustrating that longer contigs are more accurately classified (Supplementary Table S1).

[Figure 2. ProDeGe accuracy and performance scatterplots for 182 manually curated single amplified genomes (SAGs); each symbol represents one SAG data set. (a) Accuracy shown as sensitivity (proportion of bases confirmed 'Clean') vs specificity (proportion of bases confirmed 'Contaminant') for the Endophyte and Microbial Dark Matter (MDM) data sets; symbol size reflects input data set size in megabases. Most points cluster in the top right of the plot, showing ProDeGe's high accuracy; median and average overall results are shown in Supplementary Table S1. (b) ProDeGe completion time in central processing unit (CPU) core hours for the 182 SAGs; ProDeGe operates at an average rate of 0.30 CPU core hours per megabase of sequence. PCA of a 9-mer frequency matrix costs more computationally than PCA of the 5-mer frequency matrix used with blast-binning; the lack of known taxonomy for the MDM data sets prevents blast-binning, hence their longer finishing times compared with the endophyte data sets, which have known taxonomy.]

All SAGs used in the evaluation of ProDeGe were assembled using SPAdes (Bankevich et al., 2012).
In-house testing has shown that reads assembled with SPAdes from different strains, or even from slightly divergent species of the same genus, may be combined into the same contig (personal communication, KT and Robert Bowers). Ideally, the DNA in a well that gets sequenced belongs to a single cell. In the best case, contaminant sequences need to be at least from a different species to be recognized as such by the homology-based screening stage. In the absence of closely related sequenced organisms, contaminant sequences need to be at least from a different genus to be recognized as such by the composition-based screening stage (Supplementary Material). Thus, there is little risk of ProDeGe separating sequences from clonal populations or strains. We have found species- and genus-level contamination in MDA samples to be rare.

To evaluate the quality of publicly available uncultured genomes, ProDeGe was used to screen 185 SAGs and 14 GMGs (Figure 1b). Compared with CheckM (Parks et al., 2014), a tool that estimates genome sequence contamination using marker genes, ProDeGe generally marks a higher proportion of sequence as 'Contaminant' (Supplementary Table S2). This is because ProDeGe has been calibrated to perform at high specificity levels. The command-line version of ProDeGe allows users to conduct their own calibration and specify a user-defined distance cutoff. Further, CheckM only outputs the proportion of contamination, whereas ProDeGe labels each contig as 'Clean' or 'Contaminant' during the process of automated removal.

The web application for ProDeGe allows users to export clean and contaminant contigs, examine contig gene calls with their corresponding taxonomies, and discover contig clusters in the first three components of their k-mer space. Non-linear approaches for dimensionality reduction of k-mer vectors are gaining popularity (van der Maaten and Hinton, 2008), but we observed no systematic advantage of t-Distributed Stochastic Neighbor Embedding over PCA (Supplementary Figure S2).

ProDeGe is the first step towards establishing a standard for quality control of genomes from both cultured and uncultured microorganisms. It is valuable for preventing the dissemination of contaminated sequence data into public databases, thereby avoiding misleading analyses. The fully automated nature of the pipeline relieves scientists of hours of manual screening, producing reliably clean data sets and enabling the high-throughput screening of data sets for the first time. ProDeGe therefore represents a critical component in our toolkit in an era of next-generation DNA sequencing and cultivation-independent microbial genomics.
867.
Investigating animal energy expenditure across space and time may provide more detailed insight into how animals interact with their environment. This insight should improve our understanding of how changes in the environment affect animal energy budgets, and it is particularly relevant for animals living near or within human-altered environments, where habitat change can occur rapidly. We modeled fisher (Pekania pennanti) energy expenditure within their home ranges and investigated the potential environmental and spatial drivers of the predicted spatial patterns. As a proxy for energy expenditure, we used overall dynamic body acceleration (ODBA), which we quantified from tri-axial accelerometer data during the active phases of 12 individuals. We used a generalized additive model (GAM) to investigate the spatial distribution of ODBA by associating the acceleration data with the animals' GPS-recorded locations. We related the spatial patterns of ODBA to the utilization distributions and habitat suitability estimates across individuals. The ODBA of fishers appears highly structured in space and was related to individual utilization distribution and habitat suitability estimates. However, we were not able to predict ODBA from the environmental data we selected. Our results suggest an unexpected complexity in the space use of animals that was only partially captured by relocation-data-based concepts of home range and habitat suitability. We suggest that future studies recognize the limits of ODBA arising from the fact that acceleration is often collected at much finer spatio-temporal scales than the environmental data, and that ODBA lacks a behavioral correspondence. Overcoming these limits would improve the interpretation of energy expenditure in relation to the environment.
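ODBA is commonly computed by estimating the static (gravitational) acceleration per axis with a running mean and summing the absolute dynamic residuals. The sketch below follows that common recipe; the window length, sampling rate, and input names are our assumptions, not details taken from this study.

```python
import numpy as np

def odba(acc_xyz, fs_hz=30, window_s=2.0):
    """acc_xyz: array of shape (n_samples, 3) in g; returns ODBA per sample."""
    acc = np.asarray(acc_xyz, dtype=float)
    w = max(1, int(round(window_s * fs_hz)))
    kernel = np.ones(w) / w
    # Static component per axis via running mean; the residual is the dynamic part
    static = np.column_stack(
        [np.convolve(acc[:, i], kernel, mode="same") for i in range(3)]
    )
    dynamic = acc - static
    return np.abs(dynamic).sum(axis=1)

# Toy usage: 10 s of noisy accelerometer data at 30 Hz, gravity on the z axis
rng = np.random.default_rng(1)
acc = np.array([0.0, 0.0, 1.0]) + 0.1 * rng.standard_normal((300, 3))
print(f"mean ODBA: {odba(acc).mean():.3f} g")
```

Per-sample ODBA values like these would then be matched to GPS fixes before fitting a spatial model such as the GAM described above.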
869.

Objective

Evaluate the reliability and validity of the Youth Self-Report (YSR) as a screening tool for mental health problems among young people vulnerable to HIV in Ethiopia.

Design

A cross-sectional assessment of young people currently receiving social services.

Methods

Young people aged 15–18 participated in a study in which a translated and adapted version of the YSR was administered by trained nurses, followed by an assessment by Ethiopian psychiatrists. Internal reliability of the YSR syndrome scales was assessed using Cronbach's alpha. Test-retest reliability was assessed by repeating the YSR one month later. To assess validity, the sensitivity and specificity of the YSR were analyzed against the psychiatrist assessment.

Results

Across the eight syndrome scales, the YSR best measured the diagnosis of anxiety/depression and social problems among young women, and attention problems among young men. Among individual YSR syndrome scales, internal reliability ranged from unacceptable (Cronbach's alpha = 0.11, rule-breaking behavior among young women) to good (α ≥ 0.71, anxiety/depression among young women). Anxiety/depression scores of ≥8.5 among young women also had good sensitivity (0.833) and specificity (0.754) for predicting a true diagnosis. The YSR syndrome scales for social problems among young women and attention problems among young men also had fair consistency and validity measurements. Most YSR scores had significant positive correlations between the baseline and one-month follow-up administrations. Measures of reliability and validity for most other YSR syndrome scales were fair to poor.

Conclusions

The adapted, personally administered, Amharic version of the YSR has sufficient reliability and validity for identifying vulnerable young women with anxiety/depression and/or social problems, and young men with attention problems, which were the most common mental health disorders observed by psychiatrists among the migrant populations in this study. Further assessment of the applicability of the YSR among vulnerable young people for less common disorders in Ethiopia is needed.
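For readers who want to reproduce the reported statistics, here is a minimal sketch of Cronbach's alpha and of sensitivity/specificity against a reference diagnosis. The data shapes and toy numbers are assumptions; only the 8.5 cutoff echoes the Results above.

```python
import numpy as np

def cronbachs_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

def sensitivity_specificity(scores, diagnosed, cutoff):
    """Screening performance of a score cutoff against a reference diagnosis."""
    positive = np.asarray(scores, dtype=float) >= cutoff  # screen-positive
    truth = np.asarray(diagnosed, dtype=bool)             # psychiatrist diagnosis
    sens = (positive & truth).sum() / truth.sum()
    spec = (~positive & ~truth).sum() / (~truth).sum()
    return sens, spec

# Toy data: 40 respondents, 13 items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(size=(40, 1))
items = latent + 0.8 * rng.normal(size=(40, 13))
scores = items.sum(axis=1)
diagnosed = latent.ravel() > 0.7             # hypothetical reference diagnosis
sens, spec = sensitivity_specificity(scores, diagnosed, cutoff=8.5)
print(f"alpha = {cronbachs_alpha(items):.2f}, sens = {sens:.2f}, spec = {spec:.2f}")
```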