Similar articles
20 similar articles retrieved (search time: 31 ms)
1.
The DNA molecules that can be extracted from archaeological and palaeontological remains are often degraded and massively contaminated with environmental microbial material. This reduces the efficacy of shotgun approaches for sequencing ancient genomes, despite the decreasing costs of high-throughput sequencing (HTS). Improving the recovery of endogenous molecules during the DNA extraction and purification steps could therefore help advance the characterization of ancient genomes. Here, we apply the three most commonly used DNA extraction methods to five ancient bone samples spanning a temporal range of ~30 thousand years and originating from a diversity of environments, from South America to Alaska. We show that methods based on the purification of DNA fragments using silica columns are more advantageous than in-solution methods, increasing not only the total amount of DNA molecules retrieved but also the relative proportion of endogenous DNA fragments and their molecular diversity. These methods therefore provide a cost-effective solution for downstream applications, including DNA sequencing on HTS platforms.
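Two of the metrics compared here, the share of endogenous DNA and the molecular diversity of the library, reduce to simple ratios of read counts. The Python sketch below is illustrative only; all counts are invented and the function names are not from the study.

```python
# Illustration only: how endogenous content and library complexity are commonly
# expressed from read counts when comparing extraction methods. Numbers are invented.

def endogenous_fraction(mapped_reads: int, total_reads: int) -> float:
    """Fraction of sequenced reads aligning to the target (endogenous) genome."""
    return mapped_reads / total_reads

def library_complexity(unique_fragments: int, mapped_reads: int) -> float:
    """Proportion of mapped reads that are distinct molecules rather than PCR duplicates."""
    return unique_fragments / mapped_reads

for method, mapped, total, unique in [("silica column", 120_000, 1_000_000, 95_000),
                                      ("in-solution",    60_000, 1_000_000, 40_000)]:
    print(f"{method}: endogenous {endogenous_fraction(mapped, total):.1%}, "
          f"complexity {library_complexity(unique, mapped):.1%}")
```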

2.
Histopathological samples are a treasure-trove of DNA for clinical research. However, the quality of DNA can vary depending on the source or extraction method applied. A standardized and cost-effective workflow for the qualification of DNA preparations is therefore essential to guarantee reproducible interlaboratory results. The qualification process consists of the quantification of double-stranded DNA (dsDNA) and the assessment of its suitability for downstream applications, such as high-throughput next-generation sequencing. We tested the two most frequently used instruments to define their role in this process: NanoDrop, based on UV spectroscopy, and Qubit 2.0, which uses fluorochromes specifically binding dsDNA. Quantitative PCR (qPCR) was used as the reference technique as it simultaneously assesses DNA concentration and suitability for PCR amplification. We used 17 genomic DNAs from 6 fresh-frozen (FF) tissues, 6 formalin-fixed paraffin-embedded (FFPE) tissues, 3 cell lines, and 2 commercial preparations. Intra- and inter-operator variability was negligible, and intra-methodology variability was minimal, while consistent inter-methodology divergences were observed. In fact, NanoDrop measured DNA concentrations higher than Qubit, and its consistency with dsDNA quantification by qPCR was limited to high molecular weight DNA from FF samples and cell lines, where total DNA and dsDNA quantity virtually coincide. In partially degraded DNA from FFPE samples, only Qubit proved highly reproducible and consistent with qPCR measurements. Multiplex PCR amplifying 191 regions of 46 cancer-related genes served as the downstream application, using 40 ng of dsDNA from FFPE samples as quantified by Qubit. All but one sample produced amplicon libraries suitable for next-generation sequencing. The NanoDrop UV spectrum confirmed contamination of the unsuccessful sample. In conclusion, as qPCR has high costs and is labor-intensive, an alternative effective standard workflow for qualification of DNA preparations should include the sequential combination of NanoDrop and Qubit to assess the purity and quantity of dsDNA, respectively.
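The arithmetic behind this workflow is straightforward: Qubit supplies the dsDNA concentration that sets the input volume, while the NanoDrop absorbance ratio screens purity. A minimal Python sketch follows; the 40 ng target comes from the abstract, but the concentration, ratio, and acceptance range are invented assumptions.

```python
# Minimal sketch of the NanoDrop + Qubit qualification arithmetic.
# Concentration, A260/A280 value, and purity range are hypothetical.

def input_volume_ul(target_ng: float, qubit_ng_per_ul: float) -> float:
    """Volume of sample delivering the target dsDNA mass (e.g. 40 ng for library prep)."""
    return target_ng / qubit_ng_per_ul

def purity_acceptable(a260_280: float, low: float = 1.8, high: float = 2.0) -> bool:
    """Flag NanoDrop A260/A280 ratios outside the range typically accepted for DNA."""
    return low <= a260_280 <= high

qubit_conc = 12.5       # ng/µL dsDNA measured by Qubit (invented)
nanodrop_ratio = 1.86   # A260/A280 measured by NanoDrop (invented)
print(f"Use {input_volume_ul(40, qubit_conc):.2f} µL of sample "
      f"(purity acceptable: {purity_acceptable(nanodrop_ratio)})")
```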

3.
ABSTRACT: BACKGROUND: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: 1) modifying or eliminating time-consuming steps; 2) increasing yield to reduce input DNA and excessive PCR cycling; and 3) enhancing reproducibility. RESULTS: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. CONCLUSIONS: This solution-based TGE method for Illumina sequencing is optimized for small- or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

4.

Background

Cancer re-sequencing programs rely on DNA isolated from fresh snap frozen tissues, the preparation of which is combined with additional preservation efforts. Tissue samples at pathology departments are routinely stored as formalin-fixed and paraffin-embedded (FFPE) samples, and their use would open up access to a variety of clinical trials. However, FFPE preparation is incompatible with many downstream molecular biology techniques such as PCR-based amplification methods and gene expression studies.

Methodology/Principal Findings

Here we investigated the sample quality requirements of FFPE tissues for massively parallel short-read sequencing approaches. We evaluated key variables of pre-fixation, fixation related and post-fixation processes that occur in routine medical service (e.g. degree of autolysis, duration of fixation and of storage). We also investigated the influence of tissue storage time on sequencing quality by using material that was up to 18 years old. Finally, we analyzed normal and tumor breast tissues using the Sequencing by Synthesis technique (Illumina Genome Analyzer, Solexa) to simultaneously localize genome-wide copy number alterations and to detect genomic variations such as substitutions and point-deletions and/or insertions in FFPE tissue samples.

Conclusions/Significance

The application of second generation sequencing techniques to small amounts of FFPE material opens up the possibility to analyze tissue samples which have been collected during routine clinical work as well as in the context of clinical trials. This is particularly important since FFPE samples are amply available from surgical tumor resections and histopathological diagnosis, and comprise tissue from precursor lesions, primary tumors, and lymphogenic and/or hematogenic metastases. Large-scale studies using this tissue material will result in a better prediction of the prognosis of cancer patients and the early identification of patients who will respond to therapy.

5.
As new technologies come within reach for the average cytogenetic laboratory, the study of chromosome structure has become increasingly more sophisticated. Resolution has improved from karyotyping (in which whole chromosomes are discernible) to fluorescence in situ hybridization and comparative genomic hybridization (CGH, with which specific megabase regions are visualized), array-based CGH (aCGH, examining hundreds of base pairs), and next-generation sequencing (providing single base pair resolution). Whole genome next-generation sequencing remains a cost-prohibitive method for many investigators. Meanwhile, the cost of aCGH has been reduced during recent years, even as resolution has increased and protocols have simplified. However, aCGH presents its own set of unique challenges. DNA of sufficient quantity and quality to hybridize to arrays and provide meaningful results is required. This is especially difficult for DNA from formalin-fixed paraffin-embedded (FFPE) tissues. Here, we compare three different methods for acquiring DNA of sufficient length, purity, and "amplifiability" for aCGH and other downstream applications. Phenol-chloroform extraction and column-based commercial kits were compared with adaptive focused acoustics (AFA). Of the three extraction methods, AFA samples showed increased amplicon length and decreased polymerase chain reaction (PCR) failure rate. These findings support AFA as an improvement over previous DNA extraction methods for FFPE tissues.

6.

Background  

Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available.
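Standard-curve GMO quantification reduces to converting Ct values into copy numbers for the transgene and a taxon-specific reference gene and taking their ratio. The sketch below illustrates that calculation with invented slopes, intercepts, and Ct values; it is not a validated assay.

```python
# Sketch of standard-curve based quantification; slopes, intercepts, and Ct values
# are invented for illustration (ideal slope ≈ -3.32 at 100% PCR efficiency).

def copies_from_ct(ct: float, slope: float, intercept: float) -> float:
    """Interpolate copy number from a standard curve: Ct = slope * log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

transgene_copies = copies_from_ct(ct=28.1, slope=-3.35, intercept=38.0)
reference_copies = copies_from_ct(ct=24.6, slope=-3.30, intercept=37.5)
gmo_percent = 100 * transgene_copies / reference_copies
print(f"Estimated GMO content: {gmo_percent:.2f}%")
```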

7.
Efforts to detect and investigate key oncogenic mutations have proven valuable to facilitate the appropriate treatment for cancer patients. The establishment of high-throughput, massively parallel "next-generation" sequencing has aided the discovery of many such mutations. To enhance the clinical and translational utility of this technology, platforms must be high-throughput, cost-effective, and compatible with formalin-fixed, paraffin-embedded (FFPE) tissue samples that may yield small amounts of degraded or damaged DNA. Here, we describe the preparation of barcoded and multiplexed DNA libraries followed by hybridization-based capture of targeted exons for the detection of cancer-associated mutations in fresh frozen and FFPE tumors by massively parallel sequencing. This method enables the identification of sequence mutations, copy number alterations, and select structural rearrangements involving all targeted genes. Targeted exon sequencing offers the benefits of high throughput, low cost, and deep sequence coverage, thus conferring high sensitivity for detecting low frequency mutations.
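The link between deep coverage and sensitivity for low-frequency mutations can be made concrete with a simple binomial model: at a given variant allele fraction and depth, what is the chance of seeing enough supporting reads? The sketch below is illustrative only; the read threshold and depths are assumptions, not the authors' calling criteria.

```python
# Illustration only (not the authors' calling criteria): probability of observing
# at least `min_reads` variant-supporting reads at a given depth and allele fraction.
from math import comb

def detection_probability(depth: int, vaf: float, min_reads: int = 5) -> float:
    """P(X >= min_reads) for X ~ Binomial(depth, vaf), computed via the complement."""
    p_below = sum(comb(depth, k) * vaf**k * (1 - vaf)**(depth - k) for k in range(min_reads))
    return 1.0 - p_below

for depth in (100, 500, 1000):
    print(f"depth {depth}x: P(detect a 5% variant) = {detection_probability(depth, 0.05):.3f}")
```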

8.

Background

The tremendous output of massively parallel sequencing technologies requires automated, robust, and scalable sample preparation methods to fully exploit the new sequence capacity.

Methodology

In this study, a method for automated library preparation of RNA prior to massively parallel sequencing is presented. The automated protocol uses precipitation onto carboxylic acid paramagnetic beads for purification and size selection of both RNA and DNA. The automated sample preparation was compared to the standard manual sample preparation.

Conclusion/Significance

The automated procedure was used to generate libraries for gene expression profiling on the Illumina HiSeq 2000 platform, with a capacity of 12 samples per preparation and a significantly improved throughput compared with the standard manual preparation. The data analysis shows consistent gene expression profiles, in terms of sensitivity and quantification of gene expression, between the two library preparation methods.
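One simple way to express the consistency between the two preparations is to correlate log-transformed expression values gene by gene. The following sketch uses invented counts and is not the study's analysis pipeline.

```python
# Invented example of a concordance check between library preparation methods:
# Pearson correlation of log2-transformed expression values for the same genes.
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

automated = [math.log2(c + 1) for c in [120, 3400, 15, 980, 45000]]   # invented counts
manual    = [math.log2(c + 1) for c in [132, 3150, 18, 1010, 47200]]
print(f"Pearson r between preparations: {pearson(automated, manual):.3f}")
```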

9.
The increasing use of high-throughput sequencing platforms has made the isolation of pure, high molecular weight DNA a primary concern for studies of a diverse range of organisms. Purification of DNA remains a significant challenge in many tissue and sample types due to various organic and inorganic molecules that coprecipitate with nucleic acids. Molluscs, for example, contain high concentrations of polysaccharides, which often coprecipitate with DNA and can inhibit downstream enzymatic reactions. We modified a low-salt CTAB (MoLSC) extraction protocol to accommodate contaminant-rich animal tissues and compared this method to a standard CTAB extraction protocol and two commercially available animal tissue DNA extraction kits using oyster adductor muscle. Comparisons of purity and molecular integrity showed that our in-house protocol yielded genomic DNA generally free of contaminants and shearing, whereas the traditional CTAB method and some of the commercial kits yielded DNA unsuitable for some applications of massively parallel sequencing. Our open-source MoLSC protocol provides a cost-effective, scalable, alternative DNA extraction method that can be easily optimized and adapted for sequencing applications in other contaminant-rich samples.

10.

Background

Formalin fixed paraffin embedded (FFPE) tumor samples are a major source of DNA from patients in cancer research. However, FFPE is a challenging material to work with due to macromolecular fragmentation and nucleic acid crosslinking. FFPE tissue poses particular challenges for methylation analysis and for preparing sequencing-based libraries that rely on bisulfite conversion. Successful bisulfite conversion is a key requirement for sequencing-based methylation analysis.

Methods

Here we describe a complete and streamlined workflow for preparing next generation sequencing libraries for methylation analysis from FFPE tissues. This includes counting cells from FFPE blocks, extracting DNA from FFPE slides, testing bisulfite conversion efficiency with a polymerase chain reaction (PCR)-based test, preparing reduced representation bisulfite sequencing libraries, and massively parallel sequencing.
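For orientation, conversion efficiency is simply the fraction of cytosines that read as thymine after treatment, usually assessed at non-CpG positions where methylation is rare; the protocol above uses a PCR-based test for this, whereas the sketch below just illustrates the underlying ratio with invented counts.

```python
# Illustration of the ratio underlying conversion efficiency (the protocol itself
# uses a PCR-based test); counts are invented. Conversion is usually assessed at
# non-CpG cytosines, where methylation is rare in mammalian DNA.

def conversion_efficiency(converted_c: int, total_c: int) -> float:
    """Fraction of non-CpG cytosines read as thymine after bisulfite treatment."""
    return converted_c / total_c

efficiency = conversion_efficiency(converted_c=99_410, total_c=100_000)
print(f"Bisulfite conversion efficiency: {efficiency:.2%}")  # libraries well below ~99% are usually repeated
```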

Results

The main features and advantages of this protocol are:
  • An optimized method for extracting good quality DNA from FFPE tissues.
  • An efficient bisulfite conversion and next generation sequencing library preparation protocol that uses 50 ng DNA from FFPE tissue.
  • Incorporation of a PCR-based test to assess bisulfite conversion efficiency prior to sequencing.

Conclusions

We provide a complete workflow and an integrated protocol for performing DNA methylation analysis at the genome scale, and we believe this will facilitate clinical epigenetic research involving the use of FFPE tissue.

11.
All next-generation sequencing (NGS) procedures include assays performed at the laboratory bench ("wet bench") and data analyses conducted using bioinformatics pipelines ("dry bench"). Both elements are essential to produce accurate and reliable results, which are particularly critical for clinical laboratories. Targeted NGS technologies have increasingly found favor in oncology applications to help advance precision medicine objectives, yet the methods often involve disconnected and variable wet and dry bench workflows and uncoordinated reagent sets. In this report, we describe a method for sequencing challenging cancer specimens with a 21-gene panel as an example of a comprehensive targeted NGS system. The system integrates functional DNA quantification and qualification, single-tube multiplexed PCR enrichment, and library purification and normalization using analytically-verified, single-source reagents with a standalone bioinformatics suite. As a result, accurate variant calls from low-quality and low-quantity formalin-fixed, paraffin-embedded (FFPE) and fine-needle aspiration (FNA) tumor biopsies can be achieved. The method can routinely assess cancer-associated variants from an input of 400 amplifiable DNA copies, and is modular in design to accommodate new gene content. Two different types of analytically-defined controls provide quality assurance and help safeguard call accuracy with clinically-relevant samples. A flexible "tag" PCR step embeds platform-specific adaptors and index codes to allow sample barcoding and compatibility with common benchtop NGS instruments. Importantly, the protocol is streamlined and can produce 24 sequence-ready libraries in a single day. Finally, the approach links wet and dry bench processes by incorporating pre-analytical sample quality control results directly into the variant calling algorithms to improve mutation detection accuracy and differentiate false-negative and indeterminate calls. This targeted NGS method uses advances in both wetware and software to achieve high-depth, multiplexed sequencing and sensitive analysis of heterogeneous cancer samples for diagnostic applications.
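To put the "400 amplifiable DNA copies" input in perspective: one haploid human genome weighs roughly 3.3 pg, so copy numbers and DNA mass interconvert with simple arithmetic. For degraded FFPE or FNA material the amplifiable copy number is lower than this mass-based estimate, which is exactly why a functional quantification step is used. The sketch below is a back-of-envelope illustration, not the system's quantification assay.

```python
# Back-of-envelope conversion between DNA mass and haploid human genome copies,
# assuming ~3.3 pg per haploid genome. For FFPE/FNA material the number of
# amplifiable copies is lower, hence the functional (qPCR-based) quantification.
HAPLOID_GENOME_PG = 3.3  # approximate mass of one haploid human genome

def genome_copies_from_ng(mass_ng: float) -> float:
    return mass_ng * 1000.0 / HAPLOID_GENOME_PG

def ng_for_copies(copies: float) -> float:
    return copies * HAPLOID_GENOME_PG / 1000.0

print(f"400 genome copies ≈ {ng_for_copies(400):.2f} ng of intact DNA")
print(f"10 ng of intact DNA ≈ {genome_copies_from_ng(10):.0f} copies")
```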

12.
In many laboratories, PCR has become a routine method for the sensitive diagnosis of Pneumocystis carinii in patient samples. In contrast, quantification of fungal numbers in in vitro setups still largely relies on more conventional procedures such as histological staining. These are time consuming, and their applications are limited when dealing with small fungal numbers contaminated with tissue and cellular debris. This study presents a sensitive and rapid method for P. carinii quantification based on PCR analysis that can be easily integrated into standard detection procedures without requiring any major additional steps. P. carinii-specific PCR was performed on total DNA extracted from standard samples with known fungal numbers and from experimental samples, and the products were quantified relative to the PCR products of a control plasmid of known concentration added prior to DNA extraction. This measure controlled for variations in DNA extraction and PCR efficiency among the samples to be compared. The correlation between the quantified P. carinii-specific DNA and the actual fungal numbers employed was highly significant.
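The core of the approach is a ratio: the P. carinii-specific signal normalized to the spiked-in plasmid control, read against a calibration built from standards of known fungal number. The sketch below illustrates that logic with an invented calibration; it is not the assay's validated analysis.

```python
# Invented illustration of the normalization-and-calibration logic: the target PCR
# signal is divided by the spiked-plasmid signal, then read off a curve fitted to
# standards of known fungal number. Not the assay's validated analysis.
import math

standards = [(1e3, 0.02), (1e4, 0.21), (1e5, 2.05), (1e6, 19.8)]  # (fungal number, target/control ratio)

def estimate_fungal_number(target_signal: float, control_signal: float) -> float:
    ratio = target_signal / control_signal  # corrects for extraction/PCR efficiency
    xs = [math.log10(n) for n, _ in standards]
    ys = [math.log10(r) for _, r in standards]
    n = len(xs)
    # least-squares fit of log10(ratio) vs log10(fungal number)
    slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
            (n * sum(x * x for x in xs) - sum(xs) ** 2)
    intercept = (sum(ys) - slope * sum(xs)) / n
    return 10 ** ((math.log10(ratio) - intercept) / slope)

print(f"Estimated fungi in sample: {estimate_fungal_number(8.5, 10.0):.0f}")
```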

13.
The quality and yield of extracted DNA are critical for the majority of downstream applications in molecular biology. Moreover, molecular techniques such as quantitative real-time PCR (qPCR) are becoming increasingly widespread; thus, validation and cross-laboratory comparison of data require standardization of upstream experimental procedures. DNA extraction methods depend on the type and size of the starting material(s) used. As such, the extraction of template DNA is arguably the most significant variable when cross-comparing data from different laboratories. Here, we describe a reliable, inexpensive and rapid method of DNA purification that is equally applicable to small-scale, large-scale, or high-throughput purification of DNA. The protocol relies on a CTAB-based buffer for cell lysis and further purification of DNA with phenol:chloroform:isoamyl alcohol. The protocol has been used successfully for DNA purification from rumen fluid and plant cells. Moreover, after slight alterations, the same protocol was used for large-scale extraction of DNA from pure cultures of Gram-positive and Gram-negative bacteria. The yield of DNA obtained with this method exceeded that from the same samples using commercial kits, and the quality was confirmed by successful qPCR applications.

14.
Current efforts to recover the Neandertal and mammoth genomes by 454 DNA sequencing demonstrate the sensitivity of this technology. However, routine 454 sequencing applications still require microgram quantities of initial material. This is due to a lack of effective methods for quantifying 454 sequencing libraries, necessitating expensive and labour-intensive procedures when sequencing ancient DNA and other poor-quality DNA samples. Here we report a 454 sequencing library quantification method based on quantitative PCR that effectively eliminates these limitations. We estimated both the molecule numbers and the fragment size distributions in sequencing libraries derived from Neandertal DNA extracts, SAGE ditags and bonobo genomic DNA, obtaining optimal sequencing yields without performing any titration runs. Using this method, 454 sequencing can routinely be performed from as little as 50 pg of initial material without titration runs, thereby drastically reducing costs while increasing the scope of sample throughput and protocol development on the 454 platform. The method should also apply to Illumina/Solexa and ABI/SOLiD sequencing, and should therefore help to widen the accessibility of all three platforms.
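The molecule numbers estimated here follow from standard arithmetic once a concentration and mean fragment length are known; the conversion below is generic, not specific to the paper's qPCR assay, and the example values are invented.

```python
# Generic conversion (not specific to this paper's assay) from library concentration
# and mean fragment length to absolute molecule numbers.
AVOGADRO = 6.022e23
BP_MW = 660.0  # approximate g/mol per double-stranded base pair

def molecules_per_ul(conc_ng_per_ul: float, mean_fragment_bp: float) -> float:
    grams_per_ul = conc_ng_per_ul * 1e-9
    mol_per_ul = grams_per_ul / (mean_fragment_bp * BP_MW)
    return mol_per_ul * AVOGADRO

# e.g. a dilute ancient-DNA library: 0.05 ng/µL at a mean fragment length of 150 bp
print(f"{molecules_per_ul(0.05, 150):.2e} molecules per µL")
```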

15.
We present dial-out PCR, a highly parallel method for retrieving accurate DNA molecules for gene synthesis. A complex library of DNA molecules is modified with unique flanking tags before massively parallel sequencing. Tag-directed primers then enable the retrieval of molecules with desired sequences by PCR. Dial-out PCR enables multiplex in vitro clone screening and is a compelling alternative to in vivo cloning and Sanger sequencing for accurate gene synthesis.
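Conceptually, the selection step amounts to grouping sequenced molecules by their flanking tag, comparing each tagged molecule with the designed sequence, and keeping the tags of error-free molecules so that tag-directed primers can retrieve them. The Python sketch below is a simplified illustration with invented tags and sequences, not the authors' pipeline.

```python
# Simplified illustration (invented tags and sequences, not the authors' pipeline):
# group sequenced molecules by flanking tag, keep tags whose molecule matches the
# designed sequence, and use those tags to design retrieval ("dial-out") primers.
from collections import defaultdict

def perfect_tags(reads: list[tuple[str, str]], design: str) -> list[str]:
    """reads are (tag, molecule_sequence) pairs; return tags of error-free molecules."""
    by_tag = defaultdict(list)
    for tag, seq in reads:
        by_tag[tag].append(seq)
    return [tag for tag, seqs in by_tag.items() if all(s == design for s in seqs)]

design = "ATGCGTACGTTAGC"
reads = [("TAG01", "ATGCGTACGTTAGC"),   # error-free copy
         ("TAG02", "ATGCGAACGTTAGC"),   # synthesis error
         ("TAG01", "ATGCGTACGTTAGC")]
print(perfect_tags(reads, design))  # -> ['TAG01']; retrieve this clone by tag-directed PCR
```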

16.
The use of next-generation sequencing technologies to produce genomic copy number data has recently been described. Most approaches, however, rely on optimal starting DNA, and are therefore unsuitable for the analysis of formalin-fixed paraffin-embedded (FFPE) samples, which largely precludes the analysis of many tumour series. We have sought to challenge the limits of this technique with regard to quality and quantity of starting material and the depth of sequencing required. We confirm that the technique can be used to interrogate DNA from cell lines, fresh frozen material and FFPE samples to assess copy number variation. We show that as little as 5 ng of DNA is needed to generate a copy number karyogram, and follow this up with data from a series of FFPE biopsies and surgical samples. We have used various levels of sample multiplexing to demonstrate the adjustable resolution of the methodology, depending on the number of samples and available resources. We also demonstrate reproducibility by use of replicate samples and comparison with microarray-based comparative genomic hybridization (aCGH) and digital PCR. This technique can be valuable in both the analysis of routine diagnostic samples and in examining large repositories of fixed archival material.
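Read-depth copy number estimation of the kind described here typically bins reads along the genome, normalizes each library, and reports a per-bin log2 ratio against a control. The sketch below uses invented bin counts and simple median normalization purely to illustrate the idea; it is not the study's pipeline.

```python
# Invented-bin illustration of read-depth copy number estimation: median-normalize
# sample and control bin counts, then report per-bin log2 ratios.
import math

def log2_ratios(sample_counts: list[int], control_counts: list[int]) -> list[float]:
    s_med = sorted(sample_counts)[len(sample_counts) // 2]
    c_med = sorted(control_counts)[len(control_counts) // 2]
    return [math.log2((s / s_med) / (c / c_med))
            for s, c in zip(sample_counts, control_counts)]

sample  = [210, 205, 420, 415, 198]   # bins 3-4 suggest a duplication
control = [200, 202, 201, 199, 200]
print([round(r, 2) for r in log2_ratios(sample, control)])
```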

17.
DNA-assisted proteomics technologies enable ultra-sensitive measurements in multiplex format using DNA-barcoded affinity reagents. Although numerous antibodies are available, now targeting nearly the complete human proteome, the majority are not accessible at the quantity, concentration, or purity recommended for most bio-conjugation protocols. Here, we introduce a magnetic bead-assisted DNA-barcoding approach, applicable to several antibodies in parallel, that reduces the required reagent quantities up to a thousand-fold. The success of DNA-barcoding and retained functionality of antibodies were demonstrated in sandwich immunoassays and standard quantitative Immuno-PCR assays. Specific DNA-barcoding of antibodies for multiplex applications was presented on suspension bead arrays with read-out on a massively parallel sequencing platform, in a procedure denoted Immuno-Sequencing. Finally, human plasma samples were analyzed to demonstrate the functionality of barcoded antibodies in the intended proteomics applications.
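In the sequencing read-out, each antibody's relative signal is obtained by counting how often its DNA barcode appears per sample. The sketch below is a schematic illustration with invented barcode sequences and read structure; the real assay's barcode design and demultiplexing are more involved.

```python
# Schematic illustration with invented barcodes and read structure: count barcode
# occurrences per antibody to obtain relative protein signals.
from collections import Counter

ANTIBODY_BARCODES = {"ACGTACGT": "anti-IL6", "TTGGCCAA": "anti-CRP"}  # assumed 8-mer barcodes

def barcode_counts(reads: list[str]) -> Counter:
    counts = Counter()
    for read in reads:
        label = ANTIBODY_BARCODES.get(read[:8])  # barcode assumed at the read start
        if label:
            counts[label] += 1
    return counts

reads = ["ACGTACGTTTTT", "ACGTACGTAAAA", "TTGGCCAACCCC", "GGGGCCCCAAAA"]
print(barcode_counts(reads))  # Counter({'anti-IL6': 2, 'anti-CRP': 1})
```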

18.
Tumor specimens are often preserved as formalin-fixed paraffin-embedded (FFPE) tissue blocks, the most common clinical source for DNA sequencing. Herein, we evaluated the effect of pre-sequencing parameters to guide proper sample selection for targeted gene sequencing. Data from 113 FFPE lung tumor specimens were collected, and targeted gene sequencing was performed. Libraries were constructed using custom probes and were paired-end sequenced on a next generation sequencing platform. A PCR-based quality control (QC) assay was utilized to determine DNA quality, and a ratio was generated in comparison to control DNA. We observed that FFPE storage time, PCR/QC ratio, and DNA input in the library preparation were significantly correlated with most parameters of sequencing efficiency, including depth of coverage, alignment rate, insert size, and read quality. A combined score using the three parameters was generated and proved highly accurate at predicting sequencing metrics. We also showed wide read count variability within the genome, with worse coverage in regions of low GC content, such as KRAS. Sample quality and GC content had independent effects on sequencing depth, and the worst results were observed in regions of low GC content in samples with poor quality. Our data confirm that FFPE samples are a reliable source for targeted gene sequencing in cancer, provided adequate sample quality controls are exercised. Tissue quality should be routinely assessed for pre-analytical factors, and sequencing depth may be limited in genomic regions of low GC content if suboptimal samples are utilized.
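Because coverage tracks GC content, target regions can be screened up front for low GC. The helper below illustrates that check; the sequences and the 35% threshold are invented for the example and are not from the study.

```python
# Illustrative helper (not the authors' pipeline) that flags target regions with low
# GC content, where reduced coverage should be expected; sequences and the 35%
# threshold are invented.
def gc_fraction(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_low_gc(regions: dict[str, str], threshold: float = 0.35) -> list[str]:
    return [name for name, seq in regions.items() if gc_fraction(seq) < threshold]

regions = {"KRAS_target": "ATTATAAGGCCTGCTGAAAATGACTGAATATAAACTTGTG",   # invented sequence
           "BRAF_target": "GGTCTAGCTACAGTGAAATCTCGATGGAGTGGGTCCCATC"}   # invented sequence
print(flag_low_gc(regions))  # regions likely to show lower sequencing depth
```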

19.

Background

DNA extraction is a routine step in many insect molecular studies. A variety of methods have been used to isolate DNA molecules from insects, and many commercial kits are available. Extraction methods need to be evaluated for their efficiency, cost, and side effects such as DNA degradation during extraction.

Methodology/Principal Findings

From individual western corn rootworm beetles, Diabrotica virgifera virgifera, DNA extractions by the SDS method, CTAB method, DNAzol® reagent, Puregene® solutions and DNeasy® column were compared in terms of DNA quantity and quality, cost of materials, and time consumed. Although all five methods resulted in acceptable DNA concentrations and absorbance ratios, the SDS and CTAB methods resulted in higher DNA yield (ng DNA per mg tissue) at much lower cost and with less degradation, as revealed on agarose gels. The DNeasy® kit was most time-efficient but was the costliest among the methods tested. The effects of ethanol volume, temperature and incubation time on precipitation of DNA were also investigated. The DNA samples obtained by the five methods were tested in PCR for six microsatellites located in various positions of the beetle's genome, and all samples showed successful amplifications.

Conclusion/Significance

These evaluations provide a guide for choosing methods of DNA extraction from western corn rootworm beetles based on expected DNA yield and quality, extraction time, cost, and waste control. The extraction conditions for this mid-size insect were optimized. The DNA extracted by the five methods was suitable for further molecular applications such as PCR and sequencing by synthesis.

20.
In recent years there have been tremendous advances in our ability to rapidly and cost-effectively sequence DNA. This has revolutionized the fields of genetics and biology, leading to a deeper understanding of the molecular events in life processes. The rapid technological advances have enormously expanded sequencing opportunities and applications, but also imposed strains and challenges on steps prior to sequencing and in the downstream process of handling and analysis of these massive amounts of sequence data. Traditionally, sequencing has been limited to small DNA fragments of approximately one thousand bases (derived from the organism's genome) due to issues in maintaining a high sequence quality and accuracy for longer read lengths. Although many technological breakthroughs have been made, currently the commercially available massively parallel sequencing methods have not been able to resolve this issue. However, recent announcements in nanopore sequencing hold the promise of removing this read-length limitation, enabling sequencing of larger intact DNA fragments. The ability to sequence longer intact DNA with high accuracy is a major stepping stone towards greatly simplifying the downstream analysis and increasing the power of sequencing compared to today. This review covers some of the technical advances in sequencing that have opened up new frontiers in genomics.
