31.
Animals and plants are increasingly suffering from diseases caused by fungi and oomycetes. These emerging pathogens are now recognized as a global threat to biodiversity and food security. Among oomycetes, Saprolegnia species cause significant declines in fish and amphibian populations. Fish eggs have an immature adaptive immune system and depend on nonspecific innate defences to ward off pathogens. Here, meta-taxonomic analyses revealed that Atlantic salmon eggs are home to diverse fungal, oomycete and bacterial communities. Although virulent Saprolegnia isolates were found in all salmon egg samples, a low incidence of Saprolegniosis was strongly correlated with a high richness and abundance of specific commensal Actinobacteria, with the genus Frondihabitans (Microbacteriaceae) effectively inhibiting attachment of Saprolegnia to salmon eggs. These results highlight that fundamental insights into the microbial landscapes of fish eggs may provide new sustainable means to mitigate emerging diseases.
32.
Inferring metabolic networks from metabolite concentration data is a central topic in systems biology. Mathematical techniques to extract information about the network from data have been proposed in the literature. This paper presents a critical assessment of the feasibility of reverse engineering of metabolic networks, illustrated with a selection of methods. Appropriate data are simulated to study the performance of four representative methods. An overview of sampling and measurement methods currently in use for generating time-resolved metabolomics data is given and contrasted with the needs of the discussed reverse engineering methods. The results of this assessment show that, if full inference of a real-world metabolic network is the goal, there is a large discrepancy between the requirements of reverse engineering of metabolic networks and contemporary measurement practice. Recommendations for improved time-resolved experimental designs are given.
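As a rough illustration of the kind of simulated, time-resolved metabolite concentration data such an assessment relies on, the sketch below integrates a toy three-metabolite linear pathway and adds measurement noise. The pathway, rate constants, noise level and sampling grid are assumptions for illustration only, not the networks or methods evaluated in the paper.

```python
# Toy three-metabolite linear pathway (A -> B -> C -> degradation); the rate
# constants, noise level, and sampling grid are assumptions for illustration.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, x, k1=1.0, k2=0.5, k3=0.3):
    a, b, c = x
    return [-k1 * a,              # A is consumed
            k1 * a - k2 * b,      # B is produced from A and converted to C
            k2 * b - k3 * c]      # C is produced from B and degraded

t_eval = np.linspace(0, 10, 25)   # 25 sampling time points
sol = solve_ivp(pathway, (0, 10), y0=[1.0, 0.0, 0.0], t_eval=t_eval)

# add measurement noise to mimic time-resolved metabolomics measurements
rng = np.random.default_rng(0)
noisy = sol.y + rng.normal(scale=0.02, size=sol.y.shape)
print(noisy.shape)                # (3 metabolites, 25 time points)
```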
33.

Background

There is ample evidence from observational prospective studies that maternal depression or anxiety during pregnancy is a risk factor for adverse psychosocial outcomes in the offspring. However, to date no previous study has demonstrated that treatment of depressive or anxious symptoms in pregnancy actually could prevent psychosocial problems in children. Preventing psychosocial problems in children will eventually bring down the huge public health burden of mental disease. The main objective of this study is to assess the effects of cognitive behavioural therapy in pregnant women with symptoms of anxiety or depression on the child's development as well as behavioural and emotional problems. In addition, we aim to study its effects on the child's development, maternal mental health, and neonatal outcomes, as well as the cost-effectiveness of cognitive behavioural therapy relative to usual care.

Methods/design

We will include 300 women with at least moderate levels of anxiety or depression at the end of the first trimester of pregnancy. With 300 women we will be able to demonstrate effect sizes of 0.35 or over on the total problems scale of the Child Behaviour Checklist 1.5-5, with alpha 5% and power (1-beta) 80%.

Women in the intervention arm are offered 10-14 individual cognitive behavioural therapy sessions (once a week): 6-10 sessions during pregnancy and 4-8 sessions after delivery. Women in the control group receive care as usual.

The primary outcome is behavioural/emotional problems at 1.5 years of age, as assessed by the total problems scale of the Child Behaviour Checklist 1.5-5. Secondary outcomes are the mental, psychomotor and behavioural development of the child at age 18 months according to the Bayley scales; maternal anxiety and depression during pregnancy and postpartum; neonatal outcomes such as birth weight, gestational age and Apgar score; and health care consumption and general health status (economic evaluation).
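For readers who want to reproduce the sample-size reasoning, the sketch below runs a generic two-sample power calculation with the stated effect size, alpha and power. It assumes a simple two-group t-test comparison and is not the protocol's own calculation.

```python
# Illustrative a priori power calculation (assumption: a two-sample t-test on
# the CBCL total problems score); not taken from the trial protocol itself.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.35, alpha=0.05, power=0.80,
                                 alternative="two-sided")
print(f"required per arm: {n_per_arm:.0f}")        # ~130 women per arm
print(f"required in total: {2 * n_per_arm:.0f}")   # ~260; 300 leaves room for attrition
```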

Trial Registration

Netherlands Trial Register (NTR): NTR2242
34.
Many metabolomics studies aim to find ‘biomarkers’: sets of molecules that are consistently elevated or decreased upon experimental manipulation. Biological effects, however, often manifest themselves along a continuum of individual differences between the biological replicates in the experiment. Such differences are overlooked or even diminished by the methods in standard use for metabolomics, although they may contain a wealth of information on the experiment. Properly understanding individual differences is crucial for generating knowledge in fields like personalised medicine, evolution and ecology. We propose to use simultaneous component analysis with individual differences constraints (SCA-IND), a data analysis method from psychology that focuses on these differences. This method constructs axes along the natural biochemical differences between biological replicates, comparable to principal components. The model may shed light on changes in the individual differences between experimental groups, but also on whether these differences correspond to, e.g., responders and non-responders or to distinct chemotypes. Moreover, SCA-IND reveals the individuals that respond most strongly to a manipulation and are best suited for further experimentation. The method is illustrated by an analysis of individual differences in the metabolic response of cabbage plants to herbivory. The model reveals individual differences in the response to shoot herbivory, in which two ‘response chemotypes’ can be identified. In the response to root herbivory the model shows that individual plants differ strongly in response dynamics. SCA-IND thereby provides a hitherto unavailable view of the chemical diversity of the induced plant response, which greatly increases understanding of the system.
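The sketch below illustrates the simultaneous-component idea on toy data: common loadings are estimated over all experimental groups, and the per-group spread of replicate scores is then inspected as a simple proxy for individual differences. It is an unconstrained (SCA-P-style) simplification, not the SCA-IND model with individual differences constraints described here.

```python
# Unconstrained simultaneous-component sketch (SCA-P style), NOT the SCA-IND
# model of the paper: common loadings over all groups, then per-group score
# spread as a crude read-out of individual differences. Toy data throughout.
import numpy as np

rng = np.random.default_rng(1)
n_rep, n_metab = 10, 50
# three experimental groups with increasing between-replicate variability
groups = [rng.normal(loc=g, scale=1.0 + g, size=(n_rep, n_metab)) for g in range(3)]

X = np.vstack(groups)
X = X - X.mean(axis=0)                    # column-centre over all samples

U, s, Vt = np.linalg.svd(X, full_matrices=False)
loadings = Vt[:2].T                       # common loadings, two components
scores = X @ loadings

for g in range(3):
    block = scores[g * n_rep:(g + 1) * n_rep]
    print(f"group {g}: score SD per component = {np.round(block.std(axis=0), 2)}")
```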
35.
In functional genomics it is the rule rather than the exception that experimental designs are used to generate the data. The samples of the resulting data sets are thus organized according to this design, and for each sample many biochemical compounds are measured, e.g. typically thousands of gene expressions or hundreds of metabolites. This results in high-dimensional data sets with an underlying experimental design. Several methods have recently become available for analyzing such data while utilizing the underlying design. We review these methods by putting them in a unifying and general framework to facilitate understanding of the (dis-)similarities between the methods. The biological question dictates which method to use, and the framework allows for building new methods to accommodate a range of such biological questions. The framework is built on well-known fixed-effect ANOVA models and subsequent dimension reduction. We present the framework both in matrix algebra and in more insightful geometrical terms. We show the workings of the different special cases of our framework with a real-life metabolomics example from nutritional research and a gene-expression example from the field of virology.
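A minimal sketch of the fixed-effect-ANOVA-plus-dimension-reduction idea, assuming a toy one-factor design: the centred data are decomposed into an effect matrix of factor-level means, which is then subjected to PCA via an SVD. The design and variable names are illustrative, not taken from the examples in the paper.

```python
# ASCA-style sketch of the framework: a fixed-effect ANOVA decomposition of the
# data matrix followed by PCA (via SVD) of one effect matrix. The toy design
# (2 treatment levels x 20 replicates, 30 metabolites) is an assumption.
import numpy as np

rng = np.random.default_rng(2)
treatment = np.repeat([0, 1], 20)               # design factor
X = rng.normal(size=(40, 30))
X[treatment == 1, :5] += 1.5                    # treatment effect on 5 metabolites

Xc = X - X.mean(axis=0)                         # remove the overall mean

# effect matrix: every row replaced by its factor-level mean
X_treat = np.zeros_like(Xc)
for lvl in np.unique(treatment):
    X_treat[treatment == lvl] = Xc[treatment == lvl].mean(axis=0)

# dimension reduction of the effect matrix
U, s, Vt = np.linalg.svd(X_treat, full_matrices=False)
explained = s**2 / (s**2).sum()
print(f"PC1 of the treatment effect explains {explained[0]:.1%} of its variation")
```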
36.
BACKGROUND: Less than 25% of patients with a pelvic mass who present to a gynecologist will eventually be diagnosed with epithelial ovarian cancer. Since there is no reliable test to differentiate between different ovarian tumors, accurate classification could facilitate adequate referral to a gynecological oncologist, improving survival. The goal of our study was to assess the potential value of a SELDI-TOF-MS-based classifier for discriminating between patients with a pelvic mass. METHODS: Our study design included a well-defined patient population, stringent protocols and an independent validation cohort. We compared serum samples of 53 ovarian cancer patients, 18 patients with tumors of low malignant potential, and 57 patients with a benign ovarian tumor on different ProteinChip arrays. In addition, tumor tissues were collected from a subset of 84 patients and microdissection was used to isolate a pure and homogeneous cell population. RESULTS: Diagonal Linear Discriminant Analysis (DLDA) and Support Vector Machine (SVM) classification on serum samples, comparing cancer versus benign tumors, yielded models with a classification accuracy of 71-81% (cross-validation) and 73-81% on the independent validation set. Cancer and benign tissues could be classified with 95-99% accuracy using cross-validation. Tumors of low malignant potential showed protein expression patterns different from both benign and cancer tissues. Remarkably, none of the peaks differentially expressed in serum samples were found to be differentially expressed in the tissue lysates of those same groups. CONCLUSION: Although SELDI-TOF-MS can produce reliable classification results in serum samples of ovarian cancer patients, it will not be applicable in routine patient care. On the other hand, protein profiling of microdissected tumor tissue may lead to a better understanding of oncogenesis and could still be a source of new serum biomarkers leading to novel methods for differentiating between different histological subtypes.
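The sketch below shows cross-validated serum-profile classification of the kind described above, on simulated peak intensities and using only the linear SVM branch; DLDA has no direct scikit-learn counterpart and is omitted, so this is an illustration of the workflow rather than the study's actual models.

```python
# Cross-validated cancer-vs-benign classification on simulated peak intensities;
# only the linear SVM branch is shown (DLDA has no direct scikit-learn
# counterpart). Sample sizes mirror the study, the data do not.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_cancer, n_benign, n_peaks = 53, 57, 200
X = rng.normal(size=(n_cancer + n_benign, n_peaks))
X[:n_cancer, :10] += 0.8                  # a handful of discriminating peaks
y = np.array([1] * n_cancer + [0] * n_benign)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```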
37.
Early events in the biosynthesis of alpha-glucosidase (EC 3.2.1.20) were studied in a wheat-germ cell-free translation system, using control and mutant RNA. In vitro, the primary translation product of the alpha-glucosidase mRNA is a 100 kDa protein. When canine microsomal membranes are added to the translation system, the nascent alpha-glucosidase precursor is cotranslationally transported across the microsomal membranes, yielding a 110 kDa glycosylated form. This protein has the same electrophoretic characteristics as the alpha-glucosidase precursor observed after in vivo labeling of control fibroblasts. Inhibition of glycosylation in vivo by tunicamycin or deglycosylation of the in vivo synthesized alpha-glucosidase precursor by glycopeptidase F reveals a core protein similar in molecular mass to the primary translation product. Total RNA from a patient with the adult form of glycogenosis type II is not able to direct the synthesis of normal amounts of alpha-glucosidase in vitro. Northern blot analysis of the RNA, using cloned alpha-glucosidase cDNA sequences as a probe, demonstrates that in this patient the amount of the 3.4 kb alpha-glucosidase mRNA is highly reduced. The results indicate that the synthesis or stability of the mRNA is affected.
38.
Genitopatellar syndrome (GPS) is a rare disorder in which patellar aplasia or hypoplasia is associated with external genital anomalies and severe intellectual disability. Using an exome-sequencing approach, we identified de novo mutations of KAT6B in five individuals with GPS: a single nonsense variant and three frameshift indels, including a 4 bp deletion observed in two cases. All identified mutations are located within the terminal exon of the gene and are predicted to generate a truncated protein product lacking evolutionarily conserved domains. KAT6B encodes a member of the MYST family of histone acetyltransferases. We demonstrate a reduced level of both histone H3 and H4 acetylation in patient-derived cells, suggesting that dysregulation of histone acetylation is a direct functional consequence of GPS alleles. These findings define the genetic basis of GPS and illustrate the complex role of the regulation of histone acetylation during development.
39.
In this paper, we compare the performance of six different feature selection methods for LC-MS-based proteomics and metabolomics biomarker discovery—t test, the Mann–Whitney–Wilcoxon test (mww test), nearest shrunken centroid (NSC), linear support vector machine–recursive feature elimination (SVM-RFE), principal component discriminant analysis (PCDA), and partial least squares discriminant analysis (PLSDA)—using human urine and porcine cerebrospinal fluid samples that were spiked with a range of peptides at different concentration levels. The ideal feature selection method should select the complete list of discriminating features that are related to the spiked peptides without selecting unrelated features. Whereas many studies have to rely on classification error to judge the reliability of the selected biomarker candidates, we assessed the accuracy of selection directly from the list of spiked peptides. The feature selection methods were applied to data sets with different sample sizes and extents of sample class separation determined by the concentration level of spiked compounds. For each feature selection method and data set, the performance for selecting a set of features related to spiked compounds was assessed using the harmonic mean of the recall and the precision (f-score) and the geometric mean of the recall and the true negative rate (g-score). We conclude that the univariate t test and the mww test with multiple testing corrections are not applicable to data sets with small sample sizes (n = 6), but their performance improves markedly with increasing sample size up to a point (n > 12) at which they outperform the other methods. PCDA and PLSDA select small feature sets with high precision but miss many true positive features related to the spiked peptides. NSC strikes a reasonable compromise between recall and precision for all data sets independent of spiking level and number of samples. Linear SVM-RFE performs poorly for selecting features related to the spiked compounds, even though the classification error is relatively low.

Biomarkers play an important role in advancing medical research through the early diagnosis of disease and prognosis of treatment interventions (1, 2). Biomarkers may be proteins, peptides, or metabolites, as well as mRNAs or other kinds of nucleic acids (e.g. microRNAs) whose levels change in relation to the stage of a given disease and which may be used to accurately assign the disease stage of a patient. The accurate selection of biomarker candidates is crucial, because it determines the outcome of further validation studies and the ultimate success of efforts to develop diagnostic and prognostic assays with high specificity and sensitivity. The success of biomarker discovery depends on several factors: consistent and reproducible phenotyping of the individuals from whom biological samples are obtained; the quality of the analytical methodology, which in turn determines the quality of the collected data; the accuracy of the computational methods used to extract quantitative and molecular identity information to define the biomarker candidates from raw analytical data; and finally the performance of the applied statistical methods in the selection of a limited list of compounds with the potential to discriminate between predefined classes of samples. De novo biomarker research consists of a biomarker discovery part and a biomarker validation part (3).
Biomarker discovery uses analytical techniques that try to measure as many compounds as possible in a relatively low number of samples. The goal of the subsequent data preprocessing and statistical analysis is to select a limited number of candidates, which are subsequently subjected to targeted analyses in a large number of samples for validation.

Advanced technology, such as high-performance liquid chromatography–mass spectrometry (LC-MS), is increasingly applied in biomarker discovery research. Such analyses detect tens of thousands of compounds, as well as background-related signals, in a single biological sample, generating enormous amounts of multivariate data. Data preprocessing workflows reduce data complexity considerably by trying to extract only the information related to compounds, resulting in a quantitative feature matrix in which rows and columns correspond to samples and extracted features, respectively, or vice versa. Features may also be related to data preprocessing artifacts, and the ratio of such erroneous features to compound-related features depends on the performance of the data preprocessing workflow (4). Preprocessed LC-MS data sets contain a large number of features relative to the sample size. These features are characterized by their m/z value and retention time, and in the ideal case they can be combined and linked to compound identities such as metabolites, peptides, and proteins. In LC-MS-based proteomics and metabolomics studies, sample analysis is so time consuming that it is practically impossible to increase the number of samples to a level that balances the number of features in a data set. Therefore, the success of biomarker discovery depends on powerful feature selection methods that can deal with a low sample size and a high number of features. Because of the unfavorable statistical situation and the risk of overfitting the data, it is ultimately pivotal to validate the selected biomarker candidates in a larger set of independent samples, preferably in a double-blinded fashion, using targeted analytical methods (1).

Biomarker selection is often based on classification methods that are preceded by feature selection methods (filters) or which have built-in feature selection modules (wrappers and embedded methods) that can be used to select a list of compounds/peaks/features that provide the best classification performance for predefined sample groups (e.g. healthy versus diseased) (5). Classification methods are able to classify an unknown sample into a predefined sample class. Univariate feature selection methods such as filters (t test or Wilcoxon–Mann–Whitney tests) cannot be used for sample classification. Some classification methods, such as the nearest shrunken centroid method, have intrinsic feature selection ability, whereas others, such as principal component discriminant analysis (PCDA) and partial least squares regression coupled with discriminant analysis (PLSDA), should be augmented with a feature selection method. There are also classifiers without a feature selection option that perform the classification using all variables, such as support vector machines with non-linear kernels (6). Classification methods without the ability to select features cannot be used for biomarker discovery, because these methods aim to classify samples into predefined classes but cannot identify the limited number of variables (features or compounds) that form the basis of the classification (6, 7).
Different statistical methods with feature selection have been developed according to the complexity of the analyzed data, and these have been extensively reviewed (5, 6, 8, 9). Ways of optimizing such methods to improve sensitivity and specificity are a major topic in current biomarker discovery research and in the many “omics-related” research areas (6, 10, 11). Comparisons of classification methods with respect to their classification and learning performance have been initiated. Van der Walt et al. (12) focused on finding the most accurate classifiers for simulated data sets with sample sizes ranging from 20 to 100. Rubingh et al. (13) compared the influence of sample size in an LC-MS metabolomics data set on the performance of three different statistical validation tools: cross validation, jack-knifing model parameters, and a permutation test. That study concluded that for small sample sets the outcome of these validation methods is influenced strongly by individual samples and therefore cannot be trusted, and that the validation tools cannot be used to indicate problems due to sample size or the representativeness of sampling. This implies that reducing the dimensionality of the feature space is critical when approaching a classification problem in which the number of features exceeds the number of samples by a large margin. Dimensionality reduction retains a smaller set of features to bring the feature space in line with the sample size and thus to allow the application of classification methods that perform with acceptable accuracy only when the sample size and the feature size are similar.

In this study we compared different classification methods focusing on feature selection in two types of spiked LC-MS data sets that mimic the situation of a biomarker discovery study. Our results provide guidelines for researchers who will engage in biomarker discovery or other differential profiling “omics” studies with respect to sample size and selecting the most appropriate feature selection method for a given data set. We evaluated the following approaches: univariate t test and Mann–Whitney–Wilcoxon test (mww test) with multiple testing correction (14), nearest shrunken centroid (NSC) (15, 16), support vector machine–recursive feature elimination (SVM-RFE) (17), PLSDA (18), and PCDA (19). PCDA and PLSDA were combined with the rank-product as a feature selection criterion (20). These methods were evaluated with data sets having three characteristics: different biological background, varying sample size, and varying within- and between-class variability of the added compounds. Data were acquired via LC-MS from human urine and porcine cerebrospinal fluid (CSF) samples that were spiked with a set of known peptides (true positives) at different concentration levels. These samples were then combined in two classes containing peptides spiked at low and high concentration levels. The performance of the classification methods with feature selection was measured based on their ability to select features that were related to the spiked peptides. Because the true positives were known in our data set, we compared performance based on the f-score (the harmonic mean of precision and recall) and the g-score (the geometric mean of recall and the true negative rate).
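Because the spiked peptides are known, both performance measures can be computed directly from a selected feature list. The sketch below implements the two definitions given above; the feature identifiers and counts are made up for illustration.

```python
# f-score and g-score as defined above, computed from a known set of spiked
# (true-positive) features; the feature identifiers below are made up.
def f_and_g_scores(selected, spiked, n_features_total):
    """f-score: harmonic mean of recall and precision.
    g-score: geometric mean of recall and the true negative rate."""
    selected, spiked = set(selected), set(spiked)
    tp = len(selected & spiked)
    fp = len(selected - spiked)
    fn = len(spiked - selected)
    tn = n_features_total - tp - fp - fn

    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    tnr = tn / (tn + fp) if tn + fp else 0.0
    f = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    g = (recall * tnr) ** 0.5
    return f, g

# toy example: 1000 extracted features, 20 spiked, 30 selected by some method
spiked = range(20)
selected = list(range(12)) + list(range(500, 518))   # 12 true hits, 18 false hits
print(f_and_g_scores(selected, spiked, n_features_total=1000))
```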
40.
Reflections on univariate and multivariate analysis of metabolomics data
Metabolomics experiments usually result in a large quantity of data. Univariate and multivariate analysis techniques are routinely used to extract relevant information from the data with the aim of providing biological knowledge on the problem studied. Despite the fact that statistical tools like the t test, analysis of variance, principal component analysis, and partial least squares discriminant analysis constitute the backbone of the statistical part of the vast majority of metabolomics papers, some basic but rather fundamental questions are still often asked, such as: Why do the results of univariate and multivariate analyses differ? Why apply univariate methods if you have already applied a multivariate method? Why do I see something multivariately that I do not see univariately? In the present paper we address some aspects of univariate and multivariate analysis, with the aim of clarifying in simple terms the main differences between the two approaches. Applications of the t test, analysis of variance, principal component analysis and partial least squares discriminant analysis are shown on both real and simulated metabolomics data examples to provide an overview of fundamental aspects of univariate and multivariate methods.
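A small simulated example of the third question, assuming two strongly correlated metabolites whose class difference lies along a direction not aligned with either variable: each metabolite alone shows little or no univariate separation, while a multivariate contrast separates the groups clearly. All data and numbers are illustrative only, not taken from the paper.

```python
# Two strongly correlated "metabolites" whose group difference lies against the
# correlation: weak or absent univariate separation, clear multivariate
# separation along the x1 - x2 contrast. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 30
cov = [[1.0, 0.9], [0.9, 1.0]]
group_a = rng.multivariate_normal([0.0, 0.0], cov, size=n)
group_b = rng.multivariate_normal([0.5, -0.5], cov, size=n)

for j in range(2):                                   # univariate view
    _, p = stats.ttest_ind(group_a[:, j], group_b[:, j])
    print(f"metabolite {j + 1}: p = {p:.3f}")

# multivariate view: the direction orthogonal to the within-group correlation
contrast_a = group_a[:, 0] - group_a[:, 1]
contrast_b = group_b[:, 0] - group_b[:, 1]
_, p = stats.ttest_ind(contrast_a, contrast_b)
print(f"x1 - x2 contrast: p = {p:.2e}")
```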