Similar Articles (20 results)
1.
Genome-wide association studies (GWAS) have evolved over the last ten years into a powerful tool for investigating the genetic architecture of human disease. In this work, we review the key concepts underlying GWAS, including the architecture of common diseases, the structure of common human genetic variation, technologies for capturing genetic information, study designs, and the statistical methods used for data analysis. We also look forward to the future beyond GWAS.

What to Learn in This Chapter

  • Basic genetic concepts that drive genome-wide association studies
  • Genotyping technologies and common study designs
  • Statistical concepts for GWAS analysis
  • Replication, interpretation, and follow-up of association results
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

2.

Background:

Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy.

Methods:

We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method.
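As a quick sketch of the pooling step, the snippet below computes an inverse-variance weighted average of per-meta-analysis effect estimates; the numbers and variable names are illustrative, not taken from the study.

```python
import numpy as np

def inverse_variance_pool(effects, std_errors):
    """Pool per-meta-analysis effect estimates with inverse-variance weights."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2  # precision weights
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))  # SE of the pooled estimate
    return pooled, pooled_se

# Illustrative per-review effects of prevalence on (logit) specificity
pooled, se = inverse_variance_pool([-0.8, -0.3, -1.1], [0.4, 0.2, 0.6])
print(f"pooled effect = {pooled:.2f} (SE {se:.2f})")
```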

Results:

Within a given review, a change in prevalence from the lowest to the highest value was associated with a change in sensitivity or specificity ranging from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity.

Interpretation:

The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.

Diagnostic accuracy plays a central role in the evaluation of medical diagnostic tests. Test accuracy may be expressed as sensitivity and specificity, as positive and negative predictive values, or as positive and negative likelihood ratios.1 Some feel that the positive and negative predictive values of a test are more clinically relevant measures than sensitivity and specificity. However, predictive values depend directly on disease prevalence and therefore cannot be translated directly from one situation to another.2 In contrast, a test’s sensitivity and specificity are commonly believed not to vary with disease prevalence.3–5

Stability of sensitivity and specificity is an assumption that underlies the use of Bayes theorem in clinical diagnosis. Bayes theorem can be applied in clinical practice by using the likelihood ratio of a test and the probability of disease before the test is done (pretest probability) to estimate the probability of disease after the test is done.2 Because likelihood ratios are a function of sensitivity and specificity, it is assumed that the likelihood ratios also remain the same when prevalence varies.

A number of studies have shown that sensitivity and specificity may not be as stable as thought.6–10 We previously summarized the possible mechanisms through which differences in disease prevalence may lead to changes in a test’s sensitivity and specificity.10 Prevalence affects diagnostic accuracy through clinical variability or through artifactual differences, as described in the theoretical framework in Table 1.6,7 Artifactual differences can result from using additional exclusion criteria, verification bias or an imperfect reference standard. For example, using an imperfect reference standard may lead to an underestimate of diagnostic accuracy, but as prevalence increases, the extent to which this happens will vary.8,9
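To see why stable sensitivity and specificity matter for this updating step, the sketch below applies Bayes theorem in odds form; the inputs are illustrative, not from the article.

```python
def post_test_probability(pretest_p, sensitivity, specificity, positive=True):
    """Update disease probability after a test result using likelihood ratios."""
    lr = sensitivity / (1 - specificity) if positive else (1 - sensitivity) / specificity
    pretest_odds = pretest_p / (1 - pretest_p)
    post_odds = pretest_odds * lr  # Bayes theorem in odds form
    return post_odds / (1 + post_odds)

# Pretest probability 10%, sensitivity 0.90, specificity 0.80, positive result
print(post_test_probability(0.10, 0.90, 0.80))  # ~0.33
```

If sensitivity or specificity shifts with prevalence, the likelihood ratio shifts with it, and the same positive result yields a different post-test probability.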

Table 1:

Theoretical framework of how disease prevalence and test accuracy may be related.10

Clinical variability

Patient spectrum
  • Effect on prevalence: the distribution of symptoms and severity may change with varying prevalence
  • Effect on accuracy: differences in symptoms and severity influence sensitivity and specificity

Referral filter
  • Effect on prevalence: how and through what care pathway patients are referred may influence the spectrum of disease in the population
  • Effect on accuracy: a change in setting and patient spectrum may also alter a test’s sensitivity and specificity

Reader expectations
  • Effect on prevalence: prevalence influences reader expectations; if one knows that prevalence should be high, one’s intrinsic threshold may be lowered
  • Effect on accuracy: changing one’s intrinsic threshold will influence accuracy

Artifactual variability

Distorted inclusion of participants
  • Effect on prevalence: excluding patients with difficult-to-diagnose conditions may influence prevalence
  • Effect on accuracy: excluding patients with difficult-to-diagnose conditions will overestimate the accuracy of a test

Verification bias
  • Effect on prevalence: if not all patients receive the (same) reference standard, this influences prevalence
  • Effect on accuracy: verification bias has an effect on test accuracy

Imperfect reference standard
  • Effect on prevalence: prevalence will be over- or underestimated
  • Effect on accuracy: test accuracy may be underestimated, to an extent that varies with prevalence
If these associations between prevalence and test accuracy are not just hypothetical, they may have immediate implications for the translation of research findings into clinical practice: the sensitivity and specificity of a test, estimated in one setting, cannot unconditionally be carried over to a setting with a different disease prevalence. To document the magnitude of these effects, we reanalyzed a series of previously published meta-analyses of diagnostic test accuracy.

3.
Differences between individual human genomes, or between human and cancer genomes, range in scale from single nucleotide variants (SNVs) through intermediate and large-scale duplications, deletions, and rearrangements of genomic segments. The latter class, called structural variants (SVs), has received considerable attention in the past several years as a previously underappreciated source of variation in human genomes. Much of this recent attention is the result of the availability of higher-resolution technologies for measuring these variants, including both microarray-based techniques and, more recently, high-throughput DNA sequencing. We describe the genomic technologies and computational techniques currently used to measure SVs, focusing on applications in human and cancer genomics.
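As a flavor of the computational techniques this chapter describes, the sketch below implements a toy read-depth approach to SV detection: read counts in fixed genomic bins are normalized to the genome-wide median, and bins whose coverage deviates strongly are flagged as candidate deletions or duplications. The bin counts and cutoffs are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def read_depth_sv_calls(bin_counts, del_cutoff=0.5, dup_cutoff=1.5):
    """Flag candidate deletions/duplications from per-bin read counts."""
    counts = np.asarray(bin_counts, dtype=float)
    normalized = counts / np.median(counts)  # copy-neutral bins sit near 1.0
    calls = []
    for i, ratio in enumerate(normalized):
        if ratio < del_cutoff:
            calls.append((i, "deletion", ratio))
        elif ratio > dup_cutoff:
            calls.append((i, "duplication", ratio))
    return calls

# Toy bins: bin 2 looks like a deletion, bin 5 like a duplication
print(read_depth_sv_calls([100, 98, 45, 101, 97, 160, 103]))
```

Production callers rely on richer signals than this depth heuristic, such as read-pair distances and split reads, and must correct for biases such as GC content.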

What to Learn in This Chapter

  • Current knowledge about the prevalence of structural variation in human and cancer genomes.
  • Strategies for using microarray and high-throughput DNA sequencing technologies to measure structural variation.
  • Computational techniques to detect structural variants from DNA sequencing data.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

4.
“Big” molecules such as proteins and genes continue to capture the imagination of most biologists, biochemists and bioinformaticians. “Small” molecules, on the other hand, are the molecules that these same researchers often prefer to ignore. However, it is becoming increasingly apparent that small molecules such as amino acids, lipids and sugars play a far more important role in all aspects of disease etiology and disease treatment than previously realized. This chapter focuses on an emerging field of bioinformatics called “chemical bioinformatics” – a discipline that has evolved to help address the blended chemical and molecular biological needs of toxicogenomics, pharmacogenomics, metabolomics and systems biology. In the following pages we will cover several topics related to chemical bioinformatics. First, a brief overview of some of the most important or useful chemical bioinformatic resources will be given. Second, a more detailed overview will be given of those resources that allow researchers to connect small molecules to diseases. This section will describe a number of recently developed databases or knowledgebases that explicitly relate small molecules – whether as treatment, symptom or cause – to disease. Finally, a short discussion will be provided on newly emerging software tools that exploit these databases to discover new biomarkers or even new treatments for disease.

What to Learn in This Chapter

  • The meaning of chemical bioinformatics
  • Strengths and limitations of existing chemical bioinformatic databases
  • Using databases to learn about the cause and treatment of diseases
  • The Small Molecule Pathway Database (SMPDB)
  • The Human Metabolome Database (HMDB)
  • DrugBank
  • The Toxin and Toxin-Target Database (T3DB)
  • PolySearch and Metabolite Set Enrichment Analysis
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

5.
Genome-wide association studies (GWAS) aim to discover genetic factors underlying phenotypic traits. The large number of genetic factors poses both computational and statistical challenges. Various computational approaches have been developed for large-scale GWAS. In this chapter, we will discuss several widely used computational approaches in GWAS. The following topics will be covered: (1) an introduction to the background of GWAS; (2) the existing computational approaches that are widely used in GWAS, covering single-locus, epistasis detection, and machine learning methods recently developed in the biology, statistics, and computer science communities (this part is the main focus of the chapter); and (3) the limitations of current approaches and future directions.
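As a minimal illustration of the single-locus category mentioned above, the sketch below runs an allelic chi-square test on a 2×2 case/control table for one SNP; the counts are invented for illustration, and a genome-wide analysis simply repeats this per variant against a stringent significance threshold.

```python
from scipy.stats import chi2_contingency

def single_locus_test(case_alleles, control_alleles):
    """Allelic association test for one SNP.

    case_alleles / control_alleles: (ref_count, alt_count) tuples.
    """
    table = [list(case_alleles), list(control_alleles)]
    chi2, p, dof, expected = chi2_contingency(table)
    return p

# Hypothetical SNP with the alternate allele enriched in cases
p = single_locus_test(case_alleles=(700, 300), control_alleles=(820, 180))
print(f"p = {p:.2e}")  # compared against, e.g., the conventional 5e-8 threshold
```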

What to Learn in This Chapter

  • The background of Genome-wide association study (GWAS).
  • The existing computational approaches that are widely used in GWAS. This will cover single-locus, epistasis detection, and machine learning methods.
  • The limitations of current approaches and future directions.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

6.
Proteins do not function in isolation; it is their interactions with one another and also with other molecules (e.g. DNA, RNA) that mediate metabolic and signaling pathways, cellular processes, and organismal systems. Due to their central role in biological function, protein interactions also control the mechanisms leading to healthy and diseased states in organisms. Diseases are often caused by mutations affecting the binding interface or leading to biochemically dysfunctional allosteric changes in proteins. Therefore, protein interaction networks can elucidate the molecular basis of disease, which in turn can inform methods for prevention, diagnosis, and treatment. In this chapter, we will describe the computational approaches to predict and map networks of protein interactions and briefly review the experimental methods to detect protein interactions. We will describe the application of protein interaction networks as a translational approach to the study of human disease and evaluate the challenges faced by these approaches.
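As a small taste of the network viewpoint, the sketch below ranks candidate genes by the number of known disease genes they interact with, a simple form of guilt-by-association scoring; the gene names and edges are hypothetical.

```python
import networkx as nx

# Toy protein interaction network (hypothetical genes and edges)
g = nx.Graph([("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"),
              ("GENE_C", "GENE_D"), ("GENE_B", "GENE_D"),
              ("GENE_E", "GENE_A")])
known_disease_genes = {"GENE_B", "GENE_D"}

# Score the remaining genes by how many disease-gene neighbors they have
scores = {
    gene: sum(nb in known_disease_genes for nb in g.neighbors(gene))
    for gene in g.nodes if gene not in known_disease_genes
}
for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(gene, score)  # GENE_C scores highest: it touches both disease genes
```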

What to Learn in This Chapter

  • Experimental and computational methods to detect protein interactions
  • Protein networks and disease
  • Studying the genetic and molecular basis of disease
  • Using protein interactions to understand disease
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

7.
8.
Wenhui Mao and coauthors discuss possible implications of the COVID-19 pandemic for health aspirations in low- and middle-income countries.

Summary points
  • The Coronavirus Disease 2019 (COVID-19) pandemic threatens progress toward a “grand convergence” in global health—universal reduction in deaths from infections and maternal and child health conditions to low levels—and toward achieving universal health coverage (UHC).
  • Our analysis suggests that COVID-19 will exacerbate the difficulty of achieving grand convergence targets for tuberculosis (TB), maternal mortality, and, probably, for under-5 mortality. HIV targets are likely to be met.
  • By 2035, our analysis suggests that the public sectors of low-income countries (LICs) would be able to finance only about a third of the costs of a package of 120 essential non-COVID-19 health interventions through domestic sources, unless they significantly increase the priority assigned to the health sector; lower-middle-income countries (LMICs) would likewise be able to finance only a little less than half.
  • The likelihood of getting back on track for reaching grand convergence and UHC will depend on (i) how quickly COVID-19 vaccines can be deployed in LICs and LMICs; (ii) how much additional public sector health financing can be mobilized from external and domestic sources; and (iii) whether countries can rapidly strengthen and focus their health delivery systems.

9.
Modern experimental strategies often generate genome-scale measurements of human tissues or cell lines in various physiological states. Investigators often use these datasets individually to help elucidate molecular mechanisms of human diseases. Here we discuss approaches that effectively weight and integrate hundreds of heterogeneous datasets into gene-gene networks that focus on a specific process or disease. Diverse and systematic genome-scale measurements give such approaches a great deal of power but also pose a number of challenges. We discuss some of these challenges as well as methods to address them. We also raise important considerations for the assessment and evaluation of such approaches. When carefully applied, these integrative data-driven methods can make novel high-quality predictions that can transform our understanding of the molecular basis of human disease.
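A bare-bones sketch of the Bayesian integration idea is given below: each dataset contributes a likelihood ratio for a gene pair being functionally related, and the ratios are combined naively under an independence assumption. The prior and likelihoods are invented; real networks learn them from gold-standard functionally related pairs.

```python
import math

def integrated_posterior(prior, evidence):
    """Naive-Bayes combination of independent datasets.

    evidence: list of (P(obs | related), P(obs | unrelated)) per dataset.
    Returns the posterior probability that a gene pair is functionally related.
    """
    log_odds = math.log(prior / (1 - prior))
    for p_rel, p_unrel in evidence:
        log_odds += math.log(p_rel / p_unrel)  # each dataset's log likelihood ratio
    odds = math.exp(log_odds)
    return odds / (1 + odds)

# Two coexpression datasets and one colocalization dataset (made-up likelihoods)
print(integrated_posterior(0.01, [(0.6, 0.1), (0.4, 0.2), (0.5, 0.25)]))  # ~0.20
```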

What to Learn in This Chapter

  • What a functional relationship network represents.
  • The fundamentals of Bayesian inference for genomic data integration.
  • How to build a network of functional relationships between genes using examples of functionally related genes and diverse experimental data.
  • How computational scientists study disease using data-driven approaches, such as integrated networks of protein-protein functional relationships.
  • Strategies to assess predictions from a functional relationship network.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

10.
Recent advances in automated high-resolution fluorescence microscopy and robotic handling have made possible the systematic and cost-effective study of diverse morphological changes within a large population of cells under a variety of perturbations, e.g., drugs, compounds, metal catalysts, RNA interference (RNAi). Cell population-based studies deviate from conventional microscopy studies on a few cells, and could provide stronger statistical power for drawing experimental observations and conclusions. However, it is challenging to manually extract and quantify phenotypic changes from the large amounts of complex image data generated. Thus, bioimage informatics approaches are needed to rapidly and objectively quantify and analyze the image data. This paper provides an overview of the bioimage informatics challenges and approaches in image-based studies for drug and target discovery. The concepts and capabilities of image-based screening are first illustrated by a few practical examples investigating different kinds of phenotypic changes caused by drugs, compounds, or RNAi. The bioimage analysis approaches, including object detection, segmentation, and tracking, are then described. Subsequently, the quantitative features, phenotype identification, and multidimensional profile analysis for profiling the effects of drugs and targets are summarized. Moreover, a number of publicly available software packages for bioimage informatics are listed for further reference. It is expected that this review will help readers, including those without bioimage informatics expertise, understand the capabilities, approaches, and tools of bioimage informatics and apply them to advance their own studies.
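To make the object detection and segmentation steps concrete, here is a deliberately simple sketch: threshold a fluorescence image and count connected components. The synthetic image and threshold are illustrative; the pipelines reviewed here are far more robust.

```python
import numpy as np
from scipy import ndimage

def count_objects(image, threshold):
    """Segment bright objects by thresholding plus connected-component labeling."""
    mask = image > threshold
    labels, n_objects = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, range(1, n_objects + 1))  # pixels per object
    return n_objects, areas

# Tiny synthetic "image" containing two bright blobs
img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0
img[5:7, 4:7] = 0.8
n, areas = count_objects(img, threshold=0.5)
print(n, areas)  # 2 objects with areas [4. 6.]
```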

What to Learn in This Chapter

  • What automated approaches are necessary for analysis of phenotypic changes, especially for drug and target discovery?
  • What quantitative features and machine learning approaches are commonly used for quantifying phenotypic changes?
  • What resources are available for bioimage informatics studies?
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

11.
12.
The combination of improved genomic analysis methods, decreasing genotyping costs, and increasing computing resources has led to an explosion of clinical genomic knowledge in the last decade. Similarly, healthcare systems are increasingly adopting robust electronic health record (EHR) systems that not only can improve health care, but also contain a vast repository of disease and treatment data that could be mined for genomic research. Indeed, institutions are creating EHR-linked DNA biobanks to enable genomic and pharmacogenomic research, using EHR data for phenotypic information. However, EHRs are designed primarily for clinical care, not research, so reuse of clinical EHR data for research purposes can be challenging. Difficulties in the use of EHR data include data availability, missing data, incorrect data, and vast quantities of unstructured narrative text data. Structured information includes billing codes, most laboratory reports, and other variables such as physiologic measurements and demographic information. Significant information, however, remains locked within EHR narrative text documents, including clinical notes and certain categories of test results, such as pathology and radiology reports. For relatively rare observations, combinations of simple free-text searches and billing codes may prove adequate when followed by manual chart review. However, to extract the large cohorts necessary for genome-wide association studies, natural language processing methods to process narrative text data may be needed. Combinations of structured and unstructured textual data can be mined to generate high-validity collections of cases and controls for a given condition. Once high-quality cases and controls are identified, EHR-derived cases can be used for genomic discovery and validation. Since EHR data include a broad sampling of clinically relevant phenotypic information, they may enable multiple genomic investigations upon a single set of genotyped individuals. This chapter reviews several examples of phenotype extraction and their application to genetic research, demonstrating a viable future for genomic discovery using EHR-linked data.
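A toy version of the case-identification logic described above might combine structured billing codes with a crude free-text search, as sketched below; the codes, keywords, and record are hypothetical, and real phenotype algorithms add laboratory values, medications, and proper NLP.

```python
def is_case(record, required_codes, note_keywords, min_code_hits=2):
    """Flag a patient as a probable case from billing codes plus note text."""
    code_hits = sum(code in record["billing_codes"] for code in required_codes)
    note_hit = any(kw in record["note_text"].lower() for kw in note_keywords)
    # Either strong structured evidence, or one code corroborated by the notes
    return code_hits >= min_code_hits or (code_hits >= 1 and note_hit)

patient = {
    "billing_codes": {"250.00"},  # hypothetical diabetes billing code
    "note_text": "Patient with type 2 diabetes, started on metformin.",
}
print(is_case(patient, required_codes={"250.00", "250.02"},
              note_keywords=["type 2 diabetes"]))  # True
```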

What to Learn in This Chapter

  • Describe the types of information available in Electronic Health Records (EHRs), and the relative sensitivity and positive predictive value of each
  • Describe the difference between unstructured and structured information in the EHR
  • Describe methods for developing accurate phenotype algorithms that integrate structured and unstructured EHR information, and the roles played by billing codes, laboratory values, medication data, and natural language processing
  • Describe recent uses of EHR-derived phenotypes to study genome-phenome relationships
  • Describe the cost advantages unique to EHR-linked biobanks, and the ability to reuse genetic data for many studies
  • Understand the role of EHRs to enable phenome-wide association studies of genetic variants
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

13.
Jan Hontelez and co-authors discuss the use of different types of evidence to inform HIV program integration.

Summary points
  • Sustainable Development Goal 3 aims to “ensure healthy lives and promote well-being for all at all ages” and has set a target of achieving global universal health coverage, representing a major policy shift away from mostly disease-specific “vertical programmes”.
  • While health service integration can be a promising strategy to improve healthcare coverage, health outcomes, and efficiency, the exact impact of integration in different settings is hard to predict, and policy makers need to choose from a large variety of integration strategies and opportunities with varying levels of scientific evidence.
  • Using the case of health service integration for HIV in low- and middle-income countries, we outline implementation strategies for integration opportunities for which high-level causal evidence is lacking or scarce, based on existing frameworks and methodologies from within and beyond healthcare and implementation science.
  • Proper use of scientific evidence in other contexts requires adequate and systematic assessments of the transportability of an intervention. Several methods exist that allow for judging transferability and comprehensively identifying key context-specific indicators across studies that can affect the reported impact of interventions.
  • When (transferable) evidence is absent, we propose that by drawing on well-established design and implementation methodologies—underpinned by ongoing learning and iterative improvement of local service delivery strategies—countries could substantially improve decision-making even in the absence of scientific evidence.
  • Reaching the goal of making the HIV response an integral part of a larger, universal, people-centred health system that meets the needs and requirements of citizens can be facilitated by applying lessons learned from implementation science and novel design methodologies.

14.
β-arrestins, ubiquitous cellular scaffolding proteins that act as signaling mediators of numerous critical cellular pathways, are attractive therapeutic targets because they promote tumorigenesis in several tumor models. However, targeting scaffolding proteins with traditional small molecule drugs has been challenging. Inhibition of β-arrestin 2 with a novel aptamer impedes multiple oncogenic signaling pathways simultaneously. Additionally, delivery of the β-arrestin 2-targeting aptamer into leukemia cells through coupling to a recently described cancer cell-specific delivery aptamer, inhibits multiple β-arrestin-mediated signaling pathways known to be required for chronic myelogenous leukemia (CML) disease progression, and impairs tumorigenic growth in CML patient samples. The ability to target scaffolding proteins such as β-arrestin 2 with RNA aptamers may prove beneficial as a therapeutic strategy.

Highlights

  • An RNA aptamer inhibits β-arrestin 2 activity.
  • Inhibiting β-arrestin 2 impedes multiple tumorigenic pathways simultaneously.
  • The therapeutic aptamer is delivered to cancer cells using a cell-specific DNA aptamer.
  • Targeting β-arrestin 2 inhibits tumor progression in CML models and patient samples.

15.
Objective:

To provide a comprehensive survey of the content and quality of intervention studies relevant to the treatment of schizophrenia.

Design:

Data were extracted from 2000 trials on the Cochrane Schizophrenia Group’s register.

Main outcome measures:

Type and date of publication, country of origin, language, size of study, treatment setting, participant group, interventions, outcomes, and quality of study.

Results:

Hospital-based drug trials undertaken in the United States dominated the sample (54%). Generally, studies were short (54% lasted <6 weeks), small (mean of 65 patients), and poorly reported (64% had a quality score of ≤2 out of a maximum of 5). Over 600 different interventions were studied in these trials, and 640 different rating scales were used to measure outcomes.

Conclusions:

Half a century of studies of limited quality, duration, and clinical utility leaves much scope for well planned, conducted, and reported trials. Drug regulatory authorities should stipulate that results of both explanatory and pragmatic trials are necessary before a compound is licensed for everyday use.

Key messages

  • The advent of randomised controlled trials coincided with many new drug treatments for schizophrenia
  • This survey of 2000 randomised controlled trials of treatment for schizophrenia found that the reporting of key aspects of trial methods could easily be improved
  • The consistently poor quality of reporting is likely to have resulted in an overoptimistic estimation of the effects of treatments
  • Large studies, of long duration, investigating outcomes of importance to clinicians and patients are needed

16.
Advanced statistical methods used to analyze high-throughput data such as gene-expression assays result in long lists of “significant genes.” One way to gain insight into the significance of altered expression levels is to determine whether Gene Ontology (GO) terms associated with a particular biological process, molecular function, or cellular component are over- or under-represented in the set of genes deemed significant. This process, referred to as enrichment analysis, profiles a gene set, and is widely used to make sense of the results of high-throughput experiments. The canonical example of enrichment analysis is when the output dataset is a list of genes differentially expressed in some condition. To determine the biological relevance of a lengthy gene list, the usual solution is to perform enrichment analysis with the GO. We can aggregate the annotating GO concepts for each gene in this list, and arrive at a profile of the biological processes or mechanisms affected by the condition under study. While GO has been the principal target for enrichment analysis, the methods of enrichment analysis are generalizable: we can conduct the same sort of profiling along other ontologies of interest. Just as scientists can ask “Which biological process is over-represented in my set of interesting genes or proteins?” we can also ask “Which disease (or class of diseases) is over-represented in my set of interesting genes or proteins?” For example, by annotating known protein mutations with disease terms from the ontologies in BioPortal, Mort et al. recently identified a class of diseases—blood coagulation disorders—that were associated with a 14-fold depletion in substitutions at O-linked glycosylation sites. With the availability of tools for automatic annotation of datasets with terms from disease ontologies, there is no reason to restrict enrichment analyses to the GO. In this chapter, we will discuss methods to perform enrichment analysis using any ontology available in the biomedical domain. We will review the general methodology of enrichment analysis and the associated challenges, and discuss the novel translational analyses enabled by the existence of public, national computational infrastructure and by the use of disease ontologies in such analyses.
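The statistical core of most enrichment workflows discussed in this chapter is an over-representation test; a minimal hypergeometric sketch follows, with illustrative gene counts.

```python
from scipy.stats import hypergeom

def enrichment_p(n_genome, n_annotated, n_selected, n_overlap):
    """P-value for observing >= n_overlap annotated genes in the selected set."""
    # Survival function at n_overlap - 1 gives P(X >= n_overlap)
    return hypergeom.sf(n_overlap - 1, n_genome, n_annotated, n_selected)

# 20,000 genes; 200 carry the term; 100 significant genes; 12 of them overlap
print(f"{enrichment_p(20000, 200, 100, 12):.2e}")  # strong over-representation
```

The same test applies unchanged when the annotation source is a disease ontology rather than the GO; only the term-to-gene mapping differs.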

What to Learn in This Chapter

  • Review the commonly used approach of Gene Ontology based enrichment analysis
  • Understand the pitfalls associated with current approaches
  • Understand the national infrastructure available for using alternative ontologies for enrichment analysis
  • Learn about a generalized enrichment analysis workflow and its application using disease ontologies
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

17.
18.
In the past decade, many guidance documents have been issued through collaboration of global organizations and regulatory authorities. Most of these are applicable to new products, but there is a risk that currently marketed products will not meet the new compliance standards during audits and inspections while companies continue to make changes through the product life cycle for continuous improvement or market demands. This discussion presents different strategies for bringing drug product marketing applications up to current and emerging standards. It also discusses stability and method designs to meet process validation and global development efforts.

At the 2014 American Association of Pharmaceutical Scientists (AAPS) annual meeting in San Diego, CA, Yan Wu (Merck) and Anita Freed (Pfizer) led a symposium entitled “Bringing Drug Product Marketing Applications to Current Regulatory Standards: Trials and Tribulations.” This symposium was very timely, as this topic is a growing industry concern (evidenced by over 300 attendees) and reflects the new guidances (1–8) established over the past decade. While most of these quality standards are applicable to new drug products, there is a risk that currently marketed products, known as legacy products, will not meet the new compliance standards during audits and inspections. Companies also need to continuously make process or method changes for in-line products as part of product life cycle management efforts or to meet different market needs. If legacy (or in-line) products undergo a change, the question is how much extra effort is needed to have these products meet current standards to support the associated submission. This symposium addressed these issues and offered modeling tools using existing data or other approaches, and case studies, to effectively manage post-approval changes. Presentations included the following:
  • Modeling historical data to support process and method stability changes
  • Food and Drug Administration (FDA) perspectives on application of International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) Q8 to legacy products
  • Assessment of impact on stability with manufacturing, packaging, and/or method changes
  • Applying Association of Southeast Asian Nations (ASEAN) stability requirements to legacy products and managing specifications across climatic zones
This paper provides an overview of the presentations and highlights strategies and points of consideration when bringing marketing applications of legacy drug products up to current and emerging standards.

19.
Disease-causing aberrations in the normal function of a gene define that gene as a disease gene. Proving a causal link between a gene and a disease experimentally is expensive and time-consuming. Comprehensive prioritization of candidate genes prior to experimental testing drastically reduces the associated costs. Computational gene prioritization is based on various pieces of correlative evidence that associate each gene with the given disease and suggest possible causal links. A fair amount of this evidence comes from high-throughput experimentation. Thus, well-developed methods are necessary to reliably deal with the quantity of information at hand. Existing gene prioritization techniques already significantly improve the outcomes of targeted experimental studies. Faster and more reliable techniques that account for novel data types are necessary for the development of new diagnostics, treatments, and cures for many diseases.
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.

What to Learn in This Chapter

  • Identification of specific disease genes is complicated by gene pleiotropy, polygenic nature of many diseases, varied influence of environmental factors, and overlying genome variation.
  • Gene prioritization is the process of assigning a likelihood of gene involvement in generating a disease phenotype. This approach narrows down the set of genes to be tested experimentally and arranges them in order of their likelihood of disease involvement.
  • The gene “priority” in disease is assigned by considering a set of relevant features such as gene expression and function, pathway involvement, and mutation effects.
  • In general, disease genes tend to 1) interact with other disease genes, 2) harbor functionally deleterious mutations, 3) code for proteins localizing to the affected biological compartment (pathway, cellular space, or tissue), 4) have distinct sequence properties such as longer length and a higher number of exons, 5) have more orthologues and fewer paralogues.
  • Data sources (directly experimental, extracted from knowledge-bases, or text-mining based) and mathematical/computational models used for gene prioritization vary widely; a toy scoring sketch is shown below.
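A toy illustration of the prioritization idea from the points above: score each candidate gene as a weighted sum of evidence features. Gene names, features, and weights are hypothetical; real methods learn such weights from known disease genes.

```python
def prioritize(candidates, weights):
    """Rank candidate genes by a weighted sum of 0-1 scaled evidence features."""
    scored = {
        gene: sum(weights[f] * feats.get(f, 0.0) for f in weights)
        for gene, feats in candidates.items()
    }
    return sorted(scored.items(), key=lambda kv: -kv[1])

weights = {"interacts_with_disease_genes": 0.5,
           "deleterious_mutation": 0.3,
           "expressed_in_affected_tissue": 0.2}
candidates = {
    "GENE_X": {"interacts_with_disease_genes": 0.9, "deleterious_mutation": 0.7},
    "GENE_Y": {"expressed_in_affected_tissue": 1.0},
}
print(prioritize(candidates, weights))  # GENE_X ranks first (0.66 vs 0.20)
```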

20.
Feijoa sellowiana leaves and fruits have been investigated as a source of diverse bioactive metabolites. The extract and eight metabolites isolated from F. sellowiana leaves were evaluated for their inhibitory activity against α-glucosidase, amylase, tyrosinase, acetylcholinesterase (AChE) and butyrylcholinesterase (BChE), both in vitro and in silico. The feijoa leaf extract showed strong antioxidant activity and variable levels of inhibition against the target enzymes, with strong anti-tyrosinase activity (115.85 mg kojic acid equivalent/g). Additionally, α-tocopherol emerged as a potent inhibitor of AChE and BChE (5.40 and 10.38 mmol galantamine equivalent/g, respectively) and, when further investigated through molecular docking, was found to develop key interactions in the AChE and BChE active sites. Primetin also showed good anti-BChE activity (11.70 mmol galantamine equivalent/g) and anti-tyrosinase inhibition (90.06 mmol kojic acid equivalent/g), which was likewise investigated by molecular docking studies.

Highlights

  • Isolation of eight bioactive constituents from Feijoa sellowiana leaves.
  • In vitro assays using different enzymatic drug targets were investigated.
  • In silico study was performed to define compound interactions with target proteins.
  • Feijoa leaf is an excellent source of anti-AChE and anti-tyrosinase bioactives.
