Similar Articles
20 similar articles found.
1.
Single amplified genomes and genomes assembled from metagenomes have enabled the exploration of uncultured microorganisms at an unprecedented scale. However, both of these product types are plagued by contamination. Since these genomes are now being generated in a high-throughput manner, and sequences from them are propagating into public databases to drive novel scientific discoveries, rigorous quality controls and decontamination protocols are urgently needed. Here, we present ProDeGe (Protocol for fully automated Decontamination of Genomes), the first computational protocol for fully automated decontamination of draft genomes. ProDeGe classifies sequences into two classes—clean and contaminant—using a combination of homology- and feature-based methodologies. On average, 84% of sequence from the non-target organism is removed from the data set (specificity) and 84% of the sequence from the target organism is retained (sensitivity). The procedure operates at a rate of ~0.30 CPU core hours per megabase of sequence and can be applied to any type of genome sequence.

Recent technological advancements have enabled the large-scale sampling of genomes from uncultured microbial taxa through the high-throughput sequencing of single amplified genomes (SAGs; Rinke et al., 2013; Swan et al., 2013) and the assembly and binning of genomes from metagenomes (GMGs; Cuvelier et al., 2010; Sharon and Banfield, 2013). The importance of these products in assessing community structure and function has been established beyond doubt (Kalisky and Quake, 2011). Multiple Displacement Amplification (MDA) and sequencing of single cells have been immensely successful in capturing rare and novel phyla, generating valuable references for phylogenetic anchoring. However, efforts to conduct MDA and sequencing in a high-throughput manner have been heavily impaired by contamination from DNA introduced by the environmental sample, as well as DNA introduced during the MDA or sequencing process (Woyke et al., 2011; Engel et al., 2014; Field et al., 2014). Similarly, metagenome binning and assembly often carry various errors and artifacts depending on the methods used (Nielsen et al., 2014). Even cultured isolate genomes have been shown to lack immunity to contamination with other species (Parks et al., 2014; Mukherjee et al., 2015). As sequencing of these genome product types rapidly increases, contaminant sequences are finding their way into public databases as reference sequences. It is therefore extremely important to define standardized and automated protocols for quality control and decontamination, which would go a long way towards establishing quality standards for all microbial genome product types.

Current procedures for decontamination and quality control of genome sequences from single cells and metagenome bins are heavily manual and can consume hours per megabase when performed by expert biologists. Supervised decontamination typically involves homology-based inspection of ribosomal RNA sequences and protein-coding genes, as well as visual analysis of k-mer frequency plots and guanine–cytosine content (Clingenpeel, 2015). Manual decontamination is also possible through the software SmashCell (Harrington et al., 2010), which contains a tool for visual identification of contaminants from a self-organizing map and corresponding U-matrix. Another existing tool, DeconSeq (Schmieder and Edwards, 2011), removes contaminant sequences automatically; however, the contaminant databases must be supplied as input.
The former lacks automation, whereas the latter requires prior knowledge of the contaminants, rendering both applications impractical for high-throughput decontamination.

Here, we introduce ProDeGe, the first fully automated computational protocol for decontamination of genomes. ProDeGe uses a combination of homology-based and sequence composition-based approaches to separate contaminant sequences from the target genome draft. It has been pre-calibrated to discard at least 84% of the contaminant sequence, which results in retention of a median 84% of the target sequence. The standalone software is freely available at http://prodege.jgi-psf.org//downloads/src and can be run on any system that has Perl, R (R Core Team, 2014), Prodigal (Hyatt et al., 2010) and NCBI BLAST (Camacho et al., 2009) installed. A graphical viewer allowing further exploration of data sets and export of contigs accompanies the web application for ProDeGe at http://prodege.jgi-psf.org, which is open to the wider scientific community as a decontamination service (Supplementary Figure S1).

The assembly and the corresponding NCBI taxonomy of the data set to be decontaminated are the required inputs to ProDeGe (Figure 1a). Contigs are first annotated with genes, after which eukaryotic contamination is removed based on nucleotide-level homology of the genes against the eukaryotic subset of NCBI's Nucleotide database. For detecting prokaryotic contamination, a curated database of reference contigs from the set of high-quality genomes within the Integrated Microbial Genomes (IMG; Markowitz et al., 2014) system is used as the reference. This ensures that errors in public reference databases due to poor-quality sequencing, assembly and annotation do not negatively affect the decontamination process. Contigs determined to belong to the target organism based on nucleotide-level homology to sequences in the above database are defined as 'Clean', whereas those aligning to other organisms are defined as 'Contaminant'. Contigs whose origin cannot be determined by alignment are classified as 'Undecided'. The classified clean and contaminant contigs are then used to calibrate the separation in the subsequent 5-mer-based binning module, which classifies undecided contigs as 'Clean' or 'Contaminant' using principal components analysis (PCA) of 5-mer frequencies. This separation cutoff can also be specified by the user. When a data set has no taxonomy deeper than phylum level, or a single confident taxonomic bin cannot be detected using sequence alignment, 9-mer-based binning alone is used, because it gives more accurate overall classification in this situation. In the absence of a user-defined cutoff, a pre-calibrated cutoff targeting 80% or greater specificity separates the clean contigs from contaminant sequences in the resulting PCA of the 9-mer frequency matrix. Details on ProDeGe's custom database, evaluation of the performance of the system and exploration of the parameter space to calibrate ProDeGe for a highly accurate classification rate are provided in the Supplementary Material.

Figure 1. (a) Schematic overview of the ProDeGe engine. (b) Features of the data sets used to validate ProDeGe: SAGs from the Arabidopsis endophyte sequencing project and the MDM project, public data sets found in IMG but not sequenced at the JGI, and genomes from metagenomes.
All data and results can be found in Supplementary Table S3.

The performance of ProDeGe was evaluated using 182 manually screened SAGs (Figure 1b, Supplementary Table S1) from two studies whose data sets are publicly available within the IMG system: 107 SAGs from an Arabidopsis endophyte sequencing project and 75 SAGs from the Microbial Dark Matter (MDM) project (only 75 of the 201 MDM SAGs had a 1:1 mapping between contigs in the unscreened and manually screened versions, and hence only these were used; Rinke et al., 2013). Manual curation of these SAGs demonstrated that the use of ProDeGe prevented 5311 potentially contaminated contigs in these data sets from entering public databases. Figure 2a shows the sensitivity versus specificity of ProDeGe results for these data sets. Most of the data points in Figure 2a cluster in the top right of the plot, reflecting a median retention of 89% of the clean sequence (sensitivity) and a median rejection of 100% of the sequence of contaminant origin (specificity). In addition, on average, 84% of the bases of a data set are accurately classified. ProDeGe performs best when the target organism has sequenced homologs at the class level or deeper in its high-quality prokaryotic nucleotide reference database. If the target organism's taxonomy is unknown or not deeper than domain level, or there are few contigs with taxonomic assignments, a target bin cannot be established, and ProDeGe therefore removes contaminant contigs using sequence composition only. The few samples in Figure 2a that show a higher rate of false positives (lower specificity) and/or reduced sensitivity typically occur when the data set contains few contaminant contigs or when ProDeGe incorrectly assumes that the largest bin is the target bin. Some data sets contain a higher proportion of contamination than target sequence, and ProDeGe's performance can suffer under this condition. Under all other conditions, however, ProDeGe demonstrates high speed, specificity and sensitivity (Figure 2). In addition, ProDeGe classifies more accurately overall when performance is measured in nucleotides rather than in contigs, illustrating that longer contigs are more accurately classified (Supplementary Table S1).

Figure 2. ProDeGe accuracy and performance scatterplots for 182 manually curated single amplified genomes (SAGs), where each symbol represents one SAG data set. (a) Accuracy shown as sensitivity (proportion of bases confirmed 'Clean') versus specificity (proportion of bases confirmed 'Contaminant') for the Endophyte and Microbial Dark Matter (MDM) data sets. Symbol size reflects input data set size in megabases. Most points cluster in the top right of the plot, showing ProDeGe's high accuracy. Median and average overall results are shown in Supplementary Table S1. (b) ProDeGe completion time in central processing unit (CPU) core hours for the 182 SAGs. ProDeGe operates at an average rate of 0.30 CPU core hours per megabase of sequence. Principal components analysis (PCA) of a 9-mer frequency matrix is computationally more expensive than the PCA of a 5-mer frequency matrix used with blast-binning. The lack of known taxonomy for the MDM data sets prevents blast-binning, which is why they show longer completion times than the endophyte data sets, whose known taxonomy allows blast-binning.

All SAGs used in the evaluation of ProDeGe were assembled using SPAdes (Bankevich et al., 2012).
In-house testing has shown that reads from different strains, or even from slightly divergent species of the same genus, may be assembled by SPAdes into the same contig (personal communication, KT and Robert Bowers). Ideally, the DNA in a sequenced well belongs to a single cell. In the best case, contaminant sequences need to come from at least a different species to be recognized as such by the homology-based screening stage. In the absence of closely related sequenced organisms, contaminant sequences need to come from at least a different genus to be recognized as such by the composition-based screening stage (Supplementary Material). Thus, there is little risk of ProDeGe separating sequences from clonal populations or strains. We have found species- and genus-level contamination in MDA samples to be rare.

To evaluate the quality of publicly available uncultured genomes, ProDeGe was used to screen 185 SAGs and 14 GMGs (Figure 1b). Compared with CheckM (Parks et al., 2014), a tool that estimates genome sequence contamination using marker genes, ProDeGe generally marks a higher proportion of sequence as 'Contaminant' (Supplementary Table S2). This is because ProDeGe has been calibrated to perform at high specificity levels. The command line version of ProDeGe allows users to conduct their own calibration and specify a user-defined distance cutoff. Furthermore, CheckM only reports the proportion of contamination, whereas ProDeGe labels each contig as 'Clean' or 'Contaminant' during the process of automated removal.

The web application for ProDeGe allows users to export clean and contaminant contigs, examine contig gene calls with their corresponding taxonomies, and explore contig clusters in the first three principal components of the k-mer frequency space. Non-linear approaches for dimensionality reduction of k-mer vectors are gaining popularity (van der Maaten and Hinton, 2008), but we observed no systematic advantage of t-Distributed Stochastic Neighbor Embedding over PCA (Supplementary Figure S2).

ProDeGe is the first step towards establishing a standard for quality control of genomes from both cultured and uncultured microorganisms. It is valuable for preventing the dissemination of contaminated sequence data into public databases and for avoiding the misleading analyses that would result. The fully automated nature of the pipeline relieves scientists of hours of manual screening, produces reliably clean data sets and enables, for the first time, high-throughput screening of data sets. ProDeGe therefore represents a critical component of our toolkit in an era of next-generation DNA sequencing and cultivation-independent microbial genomics.
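To make the composition-based step concrete, the following is a minimal Python sketch of 5-mer frequency binning with PCA in the spirit of ProDeGe's composition module. It is not the authors' implementation: the helper names, the Euclidean distance-to-centroid rule and the 95th-percentile cutoff are illustrative assumptions standing in for ProDeGe's calibrated separation.

```python
# Illustrative sketch (not the ProDeGe code): classify 'Undecided' contigs by
# projecting 5-mer frequency profiles with PCA and measuring the distance to the
# centroid of contigs already labelled 'Clean' by the homology step.
from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=5)]
KMER_INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_profile(seq, k=5):
    """Normalized k-mer frequency vector of one contig."""
    counts = np.zeros(len(KMERS))
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        idx = KMER_INDEX.get(seq[i:i + k])
        if idx is not None:               # skip k-mers containing N, etc.
            counts[idx] += 1
    total = counts.sum()
    return counts / total if total else counts

def classify(clean_seqs, undecided_seqs, cutoff=None, n_components=3):
    """Label each undecided contig 'Clean' or 'Contaminant' (toy version)."""
    X = np.array([kmer_profile(s) for s in list(clean_seqs) + list(undecided_seqs)])
    X -= X.mean(axis=0)                   # centre the profiles before PCA
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ vt[:n_components].T         # project onto leading components
    clean_pcs, undec_pcs = pcs[:len(clean_seqs)], pcs[len(clean_seqs):]
    centroid = clean_pcs.mean(axis=0)
    clean_dist = np.linalg.norm(clean_pcs - centroid, axis=1)
    if cutoff is None:                    # crude stand-in for ProDeGe's calibration
        cutoff = np.percentile(clean_dist, 95)
    dist = np.linalg.norm(undec_pcs - centroid, axis=1)
    return ["Clean" if d <= cutoff else "Contaminant" for d in dist]
```

In ProDeGe itself, the separation is calibrated against the blast-labelled contigs to reach at least ~80% specificity (or a user-supplied cutoff), and 9-mer profiles replace 5-mers when no reliable taxonomic bin is available.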

2.
3.
The Canadian Light Source is a 2.9 GeV national synchrotron radiation facility located on the University of Saskatchewan campus in Saskatoon. The small-gap in-vacuum undulator beamline, 08ID-1, together with the bending-magnet beamline, 08B1-1, constitute the Canadian Macromolecular Crystallography Facility (CMCF). The CMCF provides service to more than 50 Principal Investigators in Canada and the United States. Up to 25% of the beam time is devoted to commercial users, and the general user program is guaranteed up to 55% of the useful beam time through a peer-review process. CMCF staff provide a "Mail-In" crystallography service to users with the highest-scored proposals. Both beamlines are equipped with very robust end-stations, including on-axis visualization systems, Rayonix 300-series CCD detectors and Stanford-type robotic sample auto-mounters. MxDC, an in-house-developed beamline control system, is integrated with a data processing module, AutoProcess, allowing full automation of data collection and data processing with minimal human intervention. Sample management and remote monitoring of experiments are enabled through interaction with a Laboratory Information Management System developed at the facility.

4.
Kidney stones are a common problem for which adequate prevention is lacking. We recruited ten recurrent kidney stone formers with documented calcium oxalate stones into a two-phase study to assess the safety and effectiveness of Cystone®, an herbal treatment, for the prevention of kidney stones. The first phase was a randomized, double-blinded, 12-week crossover study assessing the effect of Cystone® vs. placebo on urinary supersaturation. The second phase was an open-label one-year study of Cystone® to determine whether renal stone burden decreased, as assessed by quantitative and subjective evaluation of CT imaging. Results revealed no statistically significant effect of Cystone® on urinary composition in either the short (6 weeks) or the long (52 weeks) term. Average renal stone burden increased rather than decreased on Cystone®. Therefore, this study does not support the efficacy of Cystone® for treating calcium oxalate stone formers. Future studies will be needed to assess effects on stone passage or on other stone types.

5.
6.
A new method, based on a radiochemical enzyme assay at the single-cell level, is presented for investigating metabolic cooperation, a widely studied form of cellular communication. Here, metabolic cooperation between normal human fibroblasts and fibroblasts derived from a patient deficient in the enzyme hypoxanthine-guanine phosphoribosyl transferase has been studied. A mixture of equal numbers of both cell types was cultured in close physical contact; after trypsinisation, replating and culturing of the cells for several hours at high dilution, quantitative enzyme measurements were carried out on individual cells isolated from the mixture. From the distribution curve of the enzyme activities of the individual cells, it could be concluded that a macromolecule, either the enzyme itself or the DNA or mRNA coding for that enzyme, is transferred from normal to mutant cells.

7.
The ELISpot assay is used for the detection of T cell responses in clinical trials and vaccine evaluations. Standardization and reproducibility are necessary to compare results worldwide, with inter- and intra-assay variability being critical factors. To ensure operator safety as well as high-quality experimental performance, the ELISpot assay was implemented on an automated liquid handling platform, a Tecan Freedom EVO. After validation of the liquid handling, automated loading of plates with cells and reagents was investigated. With step-by-step implementation of the manual procedure and optimization of liquid dispensing on the robotic platform, a fully automated ELISpot assay was accomplished, with plates remaining in the system from the plate-blocking step to spot development. The mean delta difference amounted to a maximum of 6%, and the mean dispersion was smaller than in the manual assay. Taken together, this system provides not only lower personnel attendance but also higher throughput and a more precise, parallelized analysis. The platform has the potential to guarantee validated, safe, fast, reproducible and cost-efficient immunological and toxicological assays in the future.

8.
9.
The in vivo micronucleus assay working group of the International Workshop on Genotoxicity Testing (IWGT) discussed new aspects of the in vivo micronucleus (MN) test, including the regulatory acceptance of data derived from automated scoring, especially with regard to the use of flow cytometry, the suitability of rat peripheral blood reticulocytes to serve as the principal cell population for analysis, the establishment of in vivo MN assays in tissues other than bone marrow and blood (for example liver, skin, colon, germ cells), and the biological relevance of the single-dose-level test. Our group members agreed that flow cytometric systems for detecting the induction of micronucleated immature erythrocytes have advantages based on the presented data: for example, they give good reproducibility compared with manual scoring, are rapid, and require only small quantities of peripheral blood. Flow cytometric analysis of peripheral blood reticulocytes has the potential to allow monitoring of chromosome damage in rodents and other species as part of routine toxicology studies. It appears that it will be applicable to humans as well, although in this case the possible confounding effects of splenic activity will need to be considered closely. The consensus of the group was also that any system meeting the validation criteria recommended by the IWGT (2000) should be acceptable. A number of different flow cytometry-based micronucleus assays have been developed, but at present the validation data are most extensive for the flow cytometric method using anti-CD71 fluorescent staining, especially in terms of inter-laboratory collaborative data. Whichever method is chosen, it is desirable that each laboratory determine the minimum sample size required to ensure that scoring error is kept below the level of animal-to-animal variation. At the second IWGT, the potential to use rat peripheral blood reticulocytes as target cells for the micronucleus assay was discussed, but a consensus regarding acceptability for regulatory purposes could not be reached at that time. Subsequent validation efforts, combined with accumulated published data, demonstrate that blood-derived reticulocytes from rats as well as mice are acceptable when young reticulocytes are analyzed under a proper assay protocol and sample size. The working group reviewed the results of micronucleus assays using target cells/tissues other than hematopoietic cells. We also discussed the relevance of the liver micronucleus assay using young rats, and the importance of understanding the maturation of the enzyme systems involved in metabolic activation in the liver of young rats. Although the consensus of the group was that more information on the metabolic capabilities of young rats would be useful, the published literature shows that young rats have sufficient metabolic capacity for the purposes of this assay. The use of young rats as a model for detecting MN induction in the liver offers a good alternative to the use of partial hepatectomy or mitogenic stimulation. Additional data obtained from colon and skin MN models have been integrated into the databases, enhancing confidence in the utility of these models. A fourth topic discussed by the working group was the regulatory acceptance of the single-dose-level assay. There was no consensus regarding the acceptability of a single-dose-level protocol when dose-limiting toxicity occurs.
The use of a single dose level can lead to problems in data interpretation or to the loss of animals due to unexpected toxicity, making it necessary to repeat the study with additional doses. A limit test at a single dose level is currently accepted when toxicity is not dose-limiting.
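As a worked illustration of the sample-size recommendation above, here is a short sketch (with invented frequencies, not data from the workshop report) of how a laboratory might choose the minimum number of reticulocytes to score so that the binomial counting error stays below the animal-to-animal variation.

```python
# Hypothetical worked example of the sample-size rule discussed above:
# score enough reticulocytes per animal that the binomial (counting) error of
# the micronucleus frequency is smaller than the animal-to-animal variation.
# The frequency and SD below are invented for illustration.
import math

def min_cells_to_score(mn_frequency, between_animal_sd):
    """Smallest n with binomial SE sqrt(p(1-p)/n) below the inter-animal SD."""
    p = mn_frequency
    return math.ceil(p * (1 - p) / between_animal_sd ** 2)

# e.g. a spontaneous MN-reticulocyte frequency of 0.2% and an
# animal-to-animal SD of 0.05 percentage points:
n = min_cells_to_score(0.002, 0.0005)
print(f"score at least {n} reticulocytes per animal")   # roughly 8000 cells
```

Under these assumed numbers, several thousand reticulocytes per animal would be required, which illustrates why rapid flow cytometric scoring of large cell numbers is attractive.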

10.
11.
An anion-exchange–high-performance liquid chromatography (AE–HPLC) method for the quantification of adenovirus type 5 (Ad5) total particles was validated according to the performance criteria of precision, specificity, linearity of calibration and range, limit of detection, limit of quantification, accuracy and recovery. The viral particles were detected by absorbance at 260 nm using a photodiode array (PDA) detector. Cesium chloride (CsCl)-purified Ad5 and lysate samples were used for validation of the method. Relative standard deviations (RSDs) for inter-day and intra-day precision and reproducibility, for both the lysate and the Ad5 standard, were less than 10% for peak area and less than 2% for retention time. The method was specific for Ad5, which eluted at 8.0 min. The presence of DNA did not affect the recovery of Ad5 particles for accurate quantification. Based on a prediction error of less than 10%, the working range was established as 2×10¹⁰ to 7×10¹¹ VP/ml, with a correlation coefficient of 0.99975, a standard deviation of 6.14×10⁹ VP/ml and a slope of 3.04×10⁵ VP/ml. The recovery of the method varied between 88 and 106% in all of the lysate samples investigated, which is statistically indistinguishable from 100% recovery at the 95% confidence interval.
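For illustration, below is a minimal sketch of the linearity and recovery calculations described above. The calibration points, spiked-sample readings and function names are invented assumptions and do not reproduce the study's data.

```python
# Illustrative only: fitting a linear calibration curve (A260 peak area versus
# viral particle concentration) and computing percent recovery for a spiked
# lysate sample. All numbers are made up for demonstration purposes.
import numpy as np

# Calibration standards: nominal Ad5 concentrations (VP/ml) and measured peak areas
conc = np.array([2e10, 5e10, 1e11, 3e11, 7e11])
area = np.array([6.2e15, 1.55e16, 3.1e16, 9.2e16, 2.14e17])

slope, intercept = np.polyfit(conc, area, 1)    # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]               # correlation coefficient

def predicted_conc(peak_area):
    """Back-calculate a concentration from a measured peak area."""
    return (peak_area - intercept) / slope

# Recovery of a spiked lysate sample: measured / nominal x 100
nominal = 2.0e11
measured = predicted_conc(6.0e16)
recovery = 100.0 * measured / nominal

print(f"r = {r:.5f}, recovery = {recovery:.0f}%")
```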

12.
The submerged macrophytes of Lake Maarsseveen I were surveyed in 1983 using SCUBA diving techniques. Only 40% of the characeans and 75% of the angiosperms detected in 1977 remained. The area colonized by submerged macrophytes was 0.45% in 1983, compared with 25.10% in 1977. The observed decreases were largely attributable to a shift of the plant-colonized areas to shallower depths. By 1983, most of the earlier predominant vegetation types had disappeared and the biomass had decreased. The decline in submerged vegetation may be attributed to increasing eutrophication, fish populations and recreational activities.

13.
M. P. Harris & S. Wanless, Ibis, 1988, 130(2): 172–192
The breeding of Guillemots was studied in five areas of different breeding density and habitat type on the Isle of May in 1981–86. Prior to 1981, numbers were increasing at 5.6% per annum, but during the study the rate of increase slowed and from 1983 to 1986 numbers were fairly constant. Adult survival was high, with a mean minimum annual adult survival of 93.0% (s.e. = 0.3). Observations in 1986 suggested that the percentage return of colour-marked immature birds was low, with only 1.6% and 5.5% of second- and third-year birds being seen. We suggest that poor recruitment was responsible for the levelling off in numbers at the colony.
The timing of laying was constant from year to year in 1981–85 but was later in 1986. It was significantly and inversely related to sea temperature the previous March. There was a consistent ranking of median laying dates amongst the areas, with area 1 (the highest density of birds) always earliest. However, there was no significant difference in synchrony between the areas. Overall breeding success was high (0.71–0.82 young fledged per pair). There was no consistent ranking of breeding success with breeding density, habitat type or laying synchrony.
The only aspect of Guillemot biology that changed significantly was the daily food intake of chicks, which approximately halved during the study period. However, this reduction in food intake had no detectable effect on either the weight of chicks with wing lengths greater than 60 mm or the amount of time off-duty breeders spent at the site. Both of these parameters were still consistent with conditions being favourable in 1986.

14.
Many bird populations in temperate regions have advanced their timing of breeding in response to a warming climate in recent decades. However, long-term trends in temperature differ geographically and between seasons, and so do the responses of local breeding populations. Data on breeding bird phenology from subarctic and arctic passerine populations are scarce, and relatively little has been recorded for open-nesting species. We investigated the timing of breeding and its relationship to spring temperature in 14 mainly open-nesting passerine species in subarctic Swedish Lapland over a period of 32 years (1984–2015). We estimated the timing of breeding from the progress of post-juvenile moult in mist-netted birds, a new method exploiting the fact that the progress of post-juvenile moult correlates with age. Although there was a numerical tendency towards earlier breeding in most species (on average −0.09 days/year), changes were statistically significant in only three species (by −0.16 to −0.23 days/year). These figures are relatively low compared with those found in other long-term studies but are similar to a few other studies in subarctic areas. Generally, annual hatching dates were negatively correlated with mean temperature in May. This correlation was stronger in long-distance than in short-distance migrants. Although annual temperatures at high northern latitudes have increased over recent decades, there was no long-term increase in mean May temperature over the study period at this subarctic site. This is probably the main reason why there were only small long-term changes in hatching dates.

15.
16.
17.
Microfabricated devices are useful tools for manipulating and interrogating large numbers of single cells in a rapid and cost-effective manner, but connecting these systems to the existing platforms used in routine high-throughput screening of libraries of cells remains challenging. Methods to sort individual cells of interest from custom microscale devices to standardized culture dishes in an efficient and automated manner without affecting the viability of the cells are critical. Combining a commercially available instrument for colony picking (CellCelector, AVISO GmbH) and a customized software module, we have established an optimized process for the automated retrieval of individual antibody-producing cells, secreting desirable antibodies, from dense arrays of subnanoliter containers. The selection of cells for retrieval is guided by data obtained from a high-throughput, single-cell screening method called microengraving. Using this system, 100 clones from a mixed population of two cell lines secreting different antibodies (12CA5 and HYB099-01) were sorted with 100% accuracy (50 clones of each) in ~2 h, and the cells retained viability.

18.
Questions: (1) Is climate a strong driver of vegetation dynamics, including interannual variation, in a range margin steppic community? (2) Are there long-term trends in cover and species richness in this community, and are these consistent across species groups and species within groups? (3) Can long-term trends in plant community data be related to variation in local climate over the last three decades? Location: A range margin steppic grassland community in central Germany. Methods: Cover, number and size of all individuals of all plant species present in three permanent 1-m² plots were recorded in spring for 26 years (1980–2005). Climatic data for the study area were used to determine the best climatic predictor for each plant community, functional group and species variable (annual data and interannual variation) using best subsets regression. Results: April and autumn temperature showed the highest correlation with total cover and species richness and with interannual variations of cover and richness. However, key climate drivers differed between the five most abundant species. Similarly, total cover and number and cover of perennials significantly decreased over time, while no trend was found for the cover and number of annuals. However, within functional groups there were also contrasting species-specific responses. Long-term temperature increases and high interannual variability in both temperature and precipitation were strongly related to long-term trends and interannual variations in plant community data. Conclusions: Temporal trends in vegetation were strongly associated with temporal trends in climate at the study site, with key roles for autumn and spring temperature and precipitation. Dynamics of functional groups and species within groups and their relationships to changes in temperature and precipitation reveal complex long-term and interannual patterns that cannot be inferred from short-term studies with only one or a few individual species. Our results also highlight that responses detected at the functional group level may mask contrasting responses within functional groups. We discuss the implications of these findings for attempts to predict the future response of biodiversity to climate change.
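A minimal sketch of the best-subsets idea used above to identify the strongest climatic predictor is shown below. The predictor names, response variable and synthetic data are illustrative assumptions, not the authors' code or data.

```python
# Toy best-subsets regression: for every candidate set of climate variables,
# fit ordinary least squares and keep the subset with the lowest AIC.
# Predictor names and the random data are purely illustrative.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
n_years = 26
climate = {
    "april_temp": rng.normal(8, 2, n_years),
    "autumn_temp": rng.normal(10, 2, n_years),
    "summer_precip": rng.normal(180, 40, n_years),
}
cover = 40 + 1.5 * climate["april_temp"] + rng.normal(0, 3, n_years)  # response

def aic_ols(X, y):
    """AIC (up to a constant) of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    rss = float(resid @ resid)
    k = X1.shape[1]
    return len(y) * np.log(rss / len(y)) + 2 * k

best = None
for size in range(1, len(climate) + 1):
    for names in combinations(climate, size):
        X = np.column_stack([climate[name] for name in names])
        score = aic_ols(X, cover)
        if best is None or score < best[0]:
            best = (score, names)

print("best climatic predictor set:", best[1])
```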

19.
Cubera snapper Lutjanus cyanopterus aggregated to spawn at Gladden Spit, a salient sub-surface reef promontory seaward of the emergent reef and near the continental shelf edge of Belize. Their spawning aggregations typically formed 2 days before to 12 days after full moon from March to September 1998–2003 within a 45 000 m² reef area. Peak abundance of 4000 to 10 000 individuals was observed between April and July each year, while actual spawning was most frequently observed in May. Spawning was observed consistently from 40 min before to 10 min after sunset within a confined area ≤1000 m². Data suggested that cubera snapper consistently formed seasonal spawning aggregations in relation to location, photoperiod, water temperature and lunar cycle, and that spawning was cued by time of day but not tides. The cubera snapper aggregation site was included within the Gladden Spit Marine Reserve, a conditional no-take fishing zone.

20.
Ahn B, Kang D, Kim H, Wei Q. Molecules and Cells, 2004, 18(2): 249–255
DNA repair capacity in a cell can be measured by a host-cell reactivation (HCR) assay. Since the relationship between DNA repair and genetic susceptibility to cancer remains unclear, it is necessary to identify DNA repair defects in human cancer cells. To assess DNA repair in relation to breast cancer susceptibility, we developed a modified HCR assay using a plasmid containing a firefly luciferase gene damaged by mitomycin C (MMC), which forms interstrand cross-link (ICL) adducts. In particular, interstrand cross-links are thought to induce strand breaks that are repaired by homologous recombination. The MMC-ICLs were verified by electrophoresis. Damaged plasmids were transfected into apparently normal human lymphocytes and NER-deficient XP cell lines, and the DNA repair capacity of the cells was measured by quantifying the activity of the firefly luciferase. MMC lesions were repaired as efficiently as UV adducts in normal lymphocytes and the XPC cells. However, the XPA cells had a lower repair capacity for MMC lesions than the XPC cells, indicating that the XPA protein may be involved in the initial damage recognition of MMC-ICL adducts. Since several repair pathways, including NER and recombination, participate in MMC-ICL removal, this host-cell reactivation assay using MMC-ICLs can be used to explore DNA repair defects in human cancer cells.
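For illustration, here is a short sketch of how relative repair capacity is commonly computed in an HCR-type assay from luciferase readings. The normalization to a co-transfected control reporter, the function name and the numbers are assumptions for the example, not details taken from the paper.

```python
# Hypothetical illustration: computing relative DNA repair capacity (%) from
# host-cell reactivation luminescence readings. The normalization scheme and
# all readings below are assumptions, not data from the study.

def repair_capacity(damaged_firefly, damaged_control,
                    undamaged_firefly, undamaged_control):
    """Percent reactivation of a damaged reporter relative to the undamaged one.

    Each firefly reading is first normalized to a co-transfected control
    reporter to correct for differences in transfection efficiency.
    """
    damaged_norm = damaged_firefly / damaged_control
    undamaged_norm = undamaged_firefly / undamaged_control
    return 100.0 * damaged_norm / undamaged_norm

# Example with made-up readings (relative light units):
cells = {
    "normal lymphocytes": (8200, 1000, 20000, 1050),
    "XPC line":           (7800,  950, 21000, 1000),
    "XPA line":           (2500, 1000, 19500,  980),
}
for name, readings in cells.items():
    print(f"{name}: {repair_capacity(*readings):.1f}% reactivation")
```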
