A total of 90 query results.
61.
In this paper, we compare the performance of six different feature selection methods for LC-MS-based proteomics and metabolomics biomarker discovery—t test, the Mann–Whitney–Wilcoxon test (mww test), nearest shrunken centroid (NSC), linear support vector machine–recursive features elimination (SVM-RFE), principal component discriminant analysis (PCDA), and partial least squares discriminant analysis (PLSDA)—using human urine and porcine cerebrospinal fluid samples that were spiked with a range of peptides at different concentration levels. The ideal feature selection method should select the complete list of discriminating features that are related to the spiked peptides without selecting unrelated features. Whereas many studies have to rely on classification error to judge the reliability of the selected biomarker candidates, we assessed the accuracy of selection directly from the list of spiked peptides. The feature selection methods were applied to data sets with different sample sizes and extents of sample class separation determined by the concentration level of spiked compounds. For each feature selection method and data set, the performance for selecting a set of features related to spiked compounds was assessed using the harmonic mean of the recall and the precision (f-score) and the geometric mean of the recall and the true negative rate (g-score). We conclude that the univariate t test and the mww test with multiple testing corrections are not applicable to data sets with small sample sizes (n = 6), but their performance improves markedly with increasing sample size up to a point (n > 12) at which they outperform the other methods. PCDA and PLSDA select small feature sets with high precision but miss many true positive features related to the spiked peptides. NSC strikes a reasonable compromise between recall and precision for all data sets independent of spiking level and number of samples. Linear SVM-RFE performs poorly for selecting features related to the spiked compounds, even though the classification error is relatively low.

Biomarkers play an important role in advancing medical research through the early diagnosis of disease and prognosis of treatment interventions (1, 2). Biomarkers may be proteins, peptides, or metabolites, as well as mRNAs or other kinds of nucleic acids (e.g. microRNAs) whose levels change in relation to the stage of a given disease and which may be used to accurately assign the disease stage of a patient. The accurate selection of biomarker candidates is crucial, because it determines the outcome of further validation studies and the ultimate success of efforts to develop diagnostic and prognostic assays with high specificity and sensitivity. The success of biomarker discovery depends on several factors: consistent and reproducible phenotyping of the individuals from whom biological samples are obtained; the quality of the analytical methodology, which in turn determines the quality of the collected data; the accuracy of the computational methods used to extract quantitative and molecular identity information to define the biomarker candidates from raw analytical data; and finally the performance of the applied statistical methods in the selection of a limited list of compounds with the potential to discriminate between predefined classes of samples. De novo biomarker research consists of a biomarker discovery part and a biomarker validation part (3).
Biomarker discovery uses analytical techniques that try to measure as many compounds as possible in a relatively low number of samples. The goal of subsequent data preprocessing and statistical analysis is to select a limited number of candidates, which are subsequently subjected to targeted analyses in a large number of samples for validation.

Advanced technology, such as high-performance liquid chromatography–mass spectrometry (LC-MS), is increasingly applied in biomarker discovery research. Such analyses detect tens of thousands of compounds, as well as background-related signals, in a single biological sample, generating enormous amounts of multivariate data. Data preprocessing workflows reduce data complexity considerably by trying to extract only the information related to compounds, resulting in a quantitative feature matrix, in which rows and columns correspond to samples and extracted features, respectively, or vice versa. Features may also be related to data preprocessing artifacts, and the ratio of such erroneous features to compound-related features depends on the performance of the data preprocessing workflow (4). Preprocessed LC-MS data sets contain a large number of features relative to the sample size. These features are characterized by their m/z value and retention time, and in the ideal case they can be combined and linked to compound identities such as metabolites, peptides, and proteins. In LC-MS-based proteomics and metabolomics studies, sample analysis is so time consuming that it is practically impossible to increase the number of samples to a level that balances the number of features in a data set. Therefore, the success of biomarker discovery depends on powerful feature selection methods that can deal with a low sample size and a high number of features. Because of the unfavorable statistical situation and the risk of overfitting the data, it is ultimately pivotal to validate the selected biomarker candidates in a larger set of independent samples, preferably in a double-blinded fashion, using targeted analytical methods (1).

Biomarker selection is often based on classification methods that are preceded by feature selection methods (filters) or which have built-in feature selection modules (wrappers and embedded methods) that can be used to select a list of compounds/peaks/features that provide the best classification performance for predefined sample groups (e.g. healthy versus diseased) (5). Classification methods are able to classify an unknown sample into a predefined sample class. Univariate feature selection methods such as filters (t test or Wilcoxon–Mann–Whitney tests) cannot be used for sample classification. Some classification methods, such as the nearest shrunken centroid method, have intrinsic feature selection ability, whereas others, such as principal component discriminant analysis (PCDA) and partial least squares regression coupled with discriminant analysis (PLSDA), must be augmented with a feature selection method. Still other classifiers, such as support vector machines with non-linear kernels, have no feature selection option and perform classification using all variables (6). Classification methods without the ability to select features cannot be used for biomarker discovery, because these methods aim to classify samples into predefined classes but cannot identify the limited number of variables (features or compounds) that form the basis of the classification (6, 7).
Different statistical methods with feature selection have been developed according to the complexity of the analyzed data, and these have been extensively reviewed (5, 6, 8, 9). Ways of optimizing such methods to improve sensitivity and specificity are a major topic in current biomarker discovery research and in the many “omics-related” research areas (6, 10, 11). Comparisons of classification methods with respect to their classification and learning performance have been initiated. Van der Walt et al. (12) focused on finding the most accurate classifiers for simulated data sets with sample sizes ranging from 20 to 100. Rubingh et al. (13) compared the influence of sample size in an LC-MS metabolomics data set on the performance of three different statistical validation tools: cross validation, jack-knifing model parameters, and a permutation test. That study concluded that for small sample sets, the outcome of these validation methods is influenced strongly by individual samples and therefore cannot be trusted, and the validation tool cannot be used to indicate problems due to sample size or the representativeness of sampling. This implies that reducing the dimensionality of the feature space is critical when approaching a classification problem in which the number of features exceeds the number of samples by a large margin. Dimensionality reduction retains a smaller set of features to bring the feature space in line with the sample size and thus allow the application of classification methods that perform with acceptable accuracy only when the sample size and the feature size are similar.

In this study we compared different classification methods focusing on feature selection in two types of spiked LC-MS data sets that mimic the situation of a biomarker discovery study. Our results provide guidelines for researchers who will engage in biomarker discovery or other differential profiling “omics” studies with respect to sample size and selecting the most appropriate feature selection method for a given data set. We evaluated the following approaches: univariate t test and Mann–Whitney–Wilcoxon test (mww test) with multiple testing correction (14), nearest shrunken centroid (NSC) (15, 16), support vector machine–recursive features elimination (SVM-RFE) (17), PLSDA (18), and PCDA (19). PCDA and PLSDA were combined with the rank-product as a feature selection criterion (20). These methods were evaluated with data sets having three characteristics: different biological background, varying sample size, and varying within- and between-class variability of the added compounds. Data were acquired via LC-MS from human urine and porcine cerebrospinal fluid (CSF) samples that were spiked with a set of known peptides (true positives) at different concentration levels. These samples were then combined in two classes containing peptides spiked at low and high concentration levels. The performance of the classification methods with feature selection was measured based on their ability to select features that were related to the spiked peptides. Because true positives were known in our data set, we compared performance based on the f-score (the harmonic mean of precision and recall) and the g-score (the geometric mean of recall and the true negative rate).
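For concreteness, the following minimal Python sketch shows how the f-score and g-score described above can be computed for a feature selection result when the spiked (true positive) features are known. The feature identifiers and the example selection are hypothetical and are not taken from the study's data.

```python
# Sketch: scoring a feature selection result against known spiked (true positive)
# features. Feature IDs and the example lists below are hypothetical.

def selection_scores(selected, spiked, all_features):
    """Return (f_score, g_score) for a set of selected features.

    f-score: harmonic mean of precision and recall.
    g-score: geometric mean of recall and the true negative rate.
    """
    selected, spiked, all_features = set(selected), set(spiked), set(all_features)
    tp = len(selected & spiked)                 # spiked features that were selected
    fp = len(selected - spiked)                 # selected but not spiked
    fn = len(spiked - selected)                 # spiked but missed
    tn = len(all_features - selected - spiked)  # correctly ignored features

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    tnr = tn / (tn + fp) if tn + fp else 0.0

    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    g_score = (recall * tnr) ** 0.5
    return f_score, g_score


if __name__ == "__main__":
    all_features = [f"feat_{i}" for i in range(1000)]  # hypothetical feature matrix columns
    spiked = [f"feat_{i}" for i in range(20)]          # known spiked-peptide features
    selected = [f"feat_{i}" for i in range(15)] + ["feat_500", "feat_501"]
    print(selection_scores(selected, spiked, all_features))
```

In this hypothetical example the selection recovers 15 of the 20 spiked features with 2 false positives, giving an f-score of about 0.81 and a g-score of about 0.87.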
62.
Insects are major conduits of resources moving from aquatic to terrestrial systems. While the ecological impacts of insect subsidies are well documented, the underlying mechanisms by which these resources change recipient ecosystems remain poorly understood. Most subsidy inputs enter terrestrial systems as detritus; thus, soil microbes will likely influence the processing of insect subsidies, with implications for plant community composition and net primary productivity (NPP). In a subarctic ecosystem near Lake Mývatn, Iceland, where midge (Diptera: Chironomidae) deposition to land is high, we investigated how insect subsidies affected litter processing and microbial communities. We also evaluated how those belowground effects related to changes in inorganic nitrogen, plant composition and NPP. We simulated subsidies by adding midge carcasses to 1-m² heathland plots, where we measured effects on decomposition rates and the plant community. We then studied how fertilization treatments (control, KNO3 and midge-carcass addition) affected graminoid biomass and inorganic nitrogen in greenhouse experiments. Lastly, we conducted a soil-incubation study with a phospholipid fatty acid analysis (PLFA) to examine how midge addition to heathland soils affected microbial respiration, biomass and composition. We found that midge addition to heathland soils increased litter decomposition and graminoid plant cover by 2.6× and 2×, respectively. Greenhouse experiments revealed similar patterns, with midge carcasses increasing graminoid biomass by at least 2× and NH4+ concentrations by 7×. Our soil-incubation study found that midge carcasses elevated microbial respiration by 64% and microbial biomass by 43%, and shifted microbial functional composition. Our findings indicate that insect subsidies can stimulate soil microbial communities and litter decomposition in subarctic heathlands, leading to increased NPP and changes in plant community composition.
63.
Warp2D is a novel time alignment approach that uses the overlapping peak volume of the reference and sample peak lists to correct misleading peak shifts. Here, we present an easy-to-use web interface for a high-throughput Warp2D batch time alignment service using the Dutch Life Science Grid, reducing processing time from days to hours. This service provides the warping function, the sample chromatogram peak list with adjusted retention times, and normalized quality scores based on the sum of the overlapping peak volume of all peaks. Heat maps before and after time alignment are created from the arithmetic mean of the sum of overlapping peak area, rearranged with hierarchical clustering, allowing quality control of the time alignment procedure. A Taverna workflow and a command line tool are provided for remote processing of local user data. AVAILABILITY: The online data processing service is available at http://www.nbpp.nl/warp2d.html. The Taverna workflow is available at myExperiment under the title '2D Time Alignment-Webservice and Workflow' at http://www.myexperiment.org/workflows/1283.html. The command line tool is available at http://www.nbpp.nl/Warp2D_commandline.zip. CONTACT: p.l.horvatovich@rug.nl SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
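As a rough illustration of the overlapping-peak-volume idea behind this quality score, the sketch below models peaks as separable 2D Gaussians in retention time and m/z and sums the pairwise overlap between a reference and a sample peak list. The peak representation, default peak widths, and function names are assumptions made for illustration only, not the tool's actual implementation.

```python
# Simplified sketch of an overlap-based alignment quality score: peaks are modeled
# as separable 2D Gaussians in retention time (rt) and m/z, and the score is the
# summed pairwise overlap integral between reference and sample peak lists.
from dataclasses import dataclass
from math import exp, pi, sqrt

@dataclass
class Peak:
    rt: float                # retention time (s)
    mz: float                # m/z
    volume: float            # peak volume (intensity integral)
    rt_sigma: float = 5.0    # assumed peak width in rt
    mz_sigma: float = 0.05   # assumed peak width in m/z

def _overlap_1d(m1, s1, m2, s2):
    """Integral of the product of two unit-area 1-D Gaussians."""
    var = s1 ** 2 + s2 ** 2
    return exp(-((m1 - m2) ** 2) / (2 * var)) / sqrt(2 * pi * var)

def overlap_score(reference, sample):
    """Sum of pairwise overlapping peak volumes between two peak lists."""
    total = 0.0
    for r in reference:
        for s in sample:
            total += (r.volume * s.volume
                      * _overlap_1d(r.rt, r.rt_sigma, s.rt, s.rt_sigma)
                      * _overlap_1d(r.mz, r.mz_sigma, s.mz, s.mz_sigma))
    return total

# A warping of the sample retention times that increases this score indicates an
# improved alignment, e.g. (with a hypothetical warping function `warp`):
# before = overlap_score(ref_peaks, sample_peaks)
# after  = overlap_score(ref_peaks,
#                        [Peak(warp(p.rt), p.mz, p.volume) for p in sample_peaks])
```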
64.
BACKGROUND: The Onchocerciasis Control Program (OCP) in West Africa was closed down at the end of 2002. All subsequent control will be transferred to the participating countries and will almost entirely be based on periodic mass treatment with ivermectin. This makes the question of whether elimination of infection or eradication of onchocerciasis can be achieved using this strategy critically important. This study was undertaken to explore this issue. METHODS: An empirical approach was adopted in which a comprehensive analysis was undertaken of available data on the impact of more than a decade of ivermectin treatment on onchocerciasis infection and transmission. Relevant entomological and epidemiological data from 14 river basins in the OCP and one basin in Cameroon were reviewed. Areas were distinguished by frequency of treatment (6-monthly or annually), endemicity level and additional control measures such as vector control. Results were assessed in terms of epidemiological and entomological parameters, and therapeutic and geographical coverage rates were used as measures of inputs. RESULTS: In all of the river basins studied, ivermectin treatment sharply reduced the prevalence and intensity of infection. Significant transmission, however, is still ongoing in some basins after 10-12 years of ivermectin treatment. In other basins, transmission may have been interrupted, but this needs to be confirmed by in-depth evaluations. In one mesoendemic basin, where 20 rounds of four-monthly treatment reduced the prevalence of infection to levels as low as 2-3%, there was significant recrudescence of infection within a few years after interruption of treatment. CONCLUSIONS: Ivermectin treatment has been very successful in eliminating onchocerciasis as a public health problem. However, the results presented in this paper make it almost certain that repeated ivermectin mass treatment will not lead to the elimination of transmission of onchocerciasis from West Africa. Data on 6-monthly treatments are not sufficient to draw definitive conclusions.
65.
66.
The effects of tissue transglutaminase on the water-soluble proteins in bovine lens homogenates are described. Addition of liver transglutaminase and Ca2+ to calf lens homogenates resulted not only in the appearance of 50- and 57-kDa dimers, but also in a decrease in the amount of βB1 crystallin and the almost complete disappearance of βB3 and βA3. This is not the result of Ca2+-induced proteolysis, since histamine completely inhibits this phenomenon. It may be concluded that these polypeptides are involved in β-crystallin crosslinking by transglutaminase. This notion was confirmed by using βB1- and βBp-specific antisera. Both sera reacted with the 57-kDa dimer; the βBp-specific antiserum also reacted with the 50-kDa dimer. No reaction in the region 50–57 kDa was detectable when EDTA was used instead of Ca2+. Using reconstituted mixtures of βB1- and βBp-crystallin chains, and N-terminally truncated derivatives thereof, it was shown that in the βB1/βBp dimer, glutamine residue -9 of βBp crosslinks to one of the lysine residues in the N-terminal extension of βB1.
67.
Anti-crystallin autoantibodies have often been demonstrated in the serum of healthy persons and, especially, patients with cataract. In no case, however, have the specific crystallin subunits been identified against which such antibodies are directed. This information would be of particular interest in view of the recent finding that several crystallin subunits occur constitutively outside the lens. To fill this gap, we analysed the sera of 15 patients with mature cataract by means of 1- and 2-dimensional immunoblotting. The circulating antibodies turned out to be directed against several α- and β-crystallin subunits. The types of subunits and the intensities of the responses varied considerably between patients. No or only occasional and very weak reactions were observed against the αA-, αB- and βB2-crystallin subunits. These are in fact the only crystallins at present known to occur outside the lens in mammals. Our findings thus indicate that anti-crystallin autoantibodies are specifically directed against those crystallins that appear to be lens-restricted, while immunological tolerance would exist for the extra-lenticularly occurring crystallins.
68.
Antisera raised against galectin-1 exhibit crossreactivities with other galectins or related molecules. In order to overcome this problem, a monoclonal antibody to human brain galectin-1 was obtained by selecting clones without reactivity toward galectin-3. This mAb specifically bound galectin-1 of various animal origins but neither galectin-2 nor galectin-3. Western-blotting analysis of soluble human brain extracts after 2D gel electrophoresis revealed only the two most acidic isoforms of galectin-1. The ability of this mAb to bind galectin-1/asialofetuin complexes indicates that its epitope is not localized in the carbohydrate recognition domain of galectin-1. This property accounts for its strict monospecificity.
69.
Monokaryotic mycelia of the homobasidiomycete Coprinus cinereus form asexual spores (oidia) constitutively in abundant numbers. Mycelia with mutations in both mating type loci (Amut Bmut homokaryons) also produce copious oidia, but only when exposed to blue light. We used such an Amut Bmut homokaryon to define environmental and inherent factors that influence the light-induced oidiation process. We show that the Amut function causes repression of oidiation in the dark and that light overrides this effect. Similarly, compatible genes from different haplotypes of the A mating type locus repress sporulation in the dark but not in the light. Compatible products of the B mating type locus reduce the effect of light on A-mediated repression, but the mutated B function present in the Amut Bmut homokaryons is not effective. In dikaryons, the coordinated regulation of asexual sporulation by compatible A and B mating type genes results in moderate oidia production in light.
70.