Similar Literature
20 similar records found.
1.
DNA-microarray technology is a powerful tool for exploring functional genomics. A decade ago, tracing the molecular pathways involved in the pathogenesis of disease was difficult; today, DNA-microarray technology makes it possible to identify candidate genes implicated in the development and progression of disease. Because the technology is still young, performing DNA-microarray experiments in the laboratory remains a challenge. Every technical factor in a DNA-microarray experiment has a strong impact on the results. Therefore, each step of the protocol, in particular sample preparation (i.e., RNA isolation, RNA preamplification, and cDNA labeling), hybridization, washing, and scanner settings, must be checked and optimized to obtain reliable results.

2.
Current genotype-calling methods such as the Robust Linear Model with Mahalanobis Distance Classifier (RLMM) and the Corrected Robust Linear Model with Maximum Likelihood Classification (CRLMM) provide accurate calls for Affymetrix single nucleotide polymorphism (SNP) chips. However, these methods are computationally expensive, as they employ preprocessing procedures including chip-data normalization and other sophisticated statistical techniques, and in the small-sample case their accuracy can drop significantly. We developed a new genotype-calling method for Affymetrix 100K and 500K SNP chips. A two-stage classification scheme is proposed to obtain a fast calling algorithm: the first stage uses unsupervised classification to quickly discriminate genotypes with high accuracy for the majority of SNPs, and the second stage employs supervised classification to incorporate allele-frequency information, either from HapMap data or from a self-training scheme. A confidence score is provided for every genotype call. The overall performance is comparable to that of CRLMM, as verified against the gold-standard HapMap data, and is superior in small-sample cases. The new algorithm is computationally simple and standalone, in the sense that the self-training scheme requires no external training data. A package implementing the calling algorithm is freely available at http://www.sfs.ecnu.edu.cn/teachers/xuj_en.html.
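The two-stage idea (cluster first, then hand uncertain samples to a supervised model) is easy to prototype. Below is a minimal, hypothetical sketch in Python using scikit-learn; the one-dimensional allele contrast, the margin-based confidence score, and the 0.9 cutoff are illustrative choices of ours, not the published algorithm.

```python
# Hypothetical sketch of a two-stage genotype-calling scheme in the
# spirit of the abstract. All names, the 1-D allele contrast, and the
# confidence cutoff are illustrative, not the published algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def call_genotypes(log_ratio, confidence_cutoff=0.9):
    """log_ratio: per-sample contrast between A- and B-allele intensities."""
    x = np.asarray(log_ratio).reshape(-1, 1)
    # Stage 1: unsupervised clustering into AA / AB / BB.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(x)
    centers = km.cluster_centers_.ravel()
    order = np.argsort(centers)                  # AA < AB < BB on the contrast axis
    labels = np.argsort(order)[km.labels_]       # relabel clusters by center rank
    # Margin-based confidence: nearest vs second-nearest cluster center.
    d = np.abs(x - centers[order])
    d.sort(axis=1)
    conf = d[:, 1] / (d[:, 0] + d[:, 1])
    sure = conf >= confidence_cutoff
    # Stage 2: self-training; refit a supervised model on confident calls
    # and re-call the uncertain samples.
    if sure.any() and (~sure).any() and len(np.unique(labels[sure])) > 1:
        lda = LinearDiscriminantAnalysis().fit(x[sure], labels[sure])
        labels[~sure] = lda.predict(x[~sure])
    return labels, conf

rng = np.random.default_rng(0)                   # toy data: clusters at -2 / 0 / +2
lr = np.concatenate([rng.normal(m, 0.3, 100) for m in (-2.0, 0.0, 2.0)])
calls, conf = call_genotypes(lr)
print(np.bincount(calls), f"min confidence {conf.min():.2f}")
```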

3.
Peptide microarrays are useful tools for characterizing humoral responses against peptide antigens. Studying post-translational modifications requires printing appropriately modified peptides, whose synthesis can be time-consuming and expensive. We describe here a method named "chips from chips", which allows probing for antibodies directed toward modified peptide antigens starting from unmodified peptide microarrays. The chips-from-chips concept is based on modifying peptide microspots by simple chemical reactions: the starting peptide chip (parent chip) is covered with the reagent solution, allowing specific residues to be modified and yielding a modified peptide chip (daughter chip). Both parent and daughter chips can then be used for interaction studies. The method is illustrated using reductive methylation to convert lysines into dimethyllysines. The rate of methylation was studied using specific antibodies with fluorescence detection, or by surface-assisted laser desorption/ionization mass spectrometry; the latter technique unambiguously demonstrated efficient methylation of the peptide probes. The method was then used to study the humoral response against the Mycobacterium tuberculosis heparin-binding hemagglutinin, a methylated surface-associated virulence factor and a powerful diagnostic and protective antigen.

4.
Angiotensin I-converting enzyme (ACE), one of the central components of the renin-angiotensin system, is a key therapeutic target for the treatment of hypertension and cardiovascular disorders. Human somatic ACE (sACE) has two homologous domains (N and C) whose catalytic sites have different activities toward various substrates. Moreover, some of the undesirable side effects of the currently available and widely used ACE inhibitors may arise from their targeting both domains, leading to defects in other pathways. In addition, structural studies have shown that although the two domains have much in common at the inhibitor binding site, there are significant differences, and these are greater at the peptide binding sites than in regions distal to the active site. As a model system, we have used an ACE homologue from Drosophila melanogaster (AnCE, a single-domain protein with ACE activity) to study ACE inhibitor binding. In an extensive study, we present high-resolution structures of native AnCE and of its complexes with six known antihypertensive drugs, a novel C-domain sACE-specific inhibitor (lisW-S), and two sACE domain-specific phosphinic peptidyl inhibitors, RXPA380 and RXP407 (nine structures in all). These structures show the detailed binding features of the inhibitors and highlight subtle changes in the orientation of side chains at different binding pockets in the active site, in comparison with the active sites of the N- and C-domains of sACE. This study provides structure-activity information that could be utilized for designing new inhibitors with improved domain selectivity for sACE.

5.
The Illumina Infinium HumanMethylation450 BeadChip has emerged as one of the most popular platforms for genome-wide profiling of DNA methylation. While the technology is widespread, systematic technical biases are believed to be present in the data. For example, the array incorporates two different chemical assays (Type I and Type II probes), which exhibit different technical characteristics and potentially complicate the computational and statistical analysis. Several normalization methods have been introduced recently to adjust for possible biases, yet there is considerable debate within the field over which normalization procedure should be used, and indeed over whether normalization is necessary at all. Despite the importance of the question, there has been little comprehensive comparison of normalization methods. We systematically compared several popular normalization approaches using the Norwegian Mother and Child Cohort Study (MoBa) methylation data set and its technical replicates as a case study, assessing both the reproducibility between technical replicates following normalization and the effect of normalization on association analysis. The results indicate that the raw data are already highly reproducible; some normalization approaches slightly improve reproducibility, while others introduce additional variability. The results also suggest that differences in association analysis across normalizations are small when the signal is strong, but when the signal is more modest, different normalizations can yield very different numbers of findings at a weaker statistical significance threshold. Overall, our work provides a useful, objective assessment of the effectiveness of key normalization methods.

6.
Epigenetics 2013, 8(2):318-329

7.
Autoregulatory feedback loops, in which the protein expressed from a gene inhibits or activates its own expression, are common gene-network motifs within cells. In these networks, stochastic fluctuations in protein levels are attributed to two factors: intrinsic noise (the randomness associated with mRNA/protein expression and degradation) and extrinsic noise (the noise caused by fluctuations in cellular components such as enzyme levels and gene-copy numbers). We present results that predict the level of both intrinsic and extrinsic noise in protein numbers as a function of quantities that can be experimentally determined and/or manipulated, such as the response time of the protein and the feedback strength. In particular, we show that for a fixed average number of protein molecules, decreasing the response time attenuates both intrinsic and extrinsic protein noise, with extrinsic noise being the more sensitive to changes in response time. We further show that for autoregulatory networks with negative feedback, protein noise can be minimal at an optimal feedback strength; for such cases, we provide an analytical expression for the maximal noise suppression and the feedback strength that achieves it. These theoretical results are consistent with, and explain, recent experimental observations. Finally, we illustrate how measuring changes in protein noise as the feedback strength is manipulated can be used to determine the level of extrinsic noise in these gene networks.
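A toy stochastic simulation makes the noise-attenuation claim concrete. The sketch below (Python, assuming NumPy) runs a Gillespie-style birth-death process with Hill-type negative feedback, rescaling the maximal production rate so the mean protein level stays fixed while the feedback strength varies. The model form, rates, and parameter values are all illustrative assumptions, not the authors'; in particular, this toy includes only intrinsic noise, so it shows monotone attenuation rather than the optimal-feedback minimum, which in the paper arises once extrinsic fluctuations are included.

```python
# Toy Gillespie simulation of a negatively autoregulated gene; rates and
# the Hill feedback form are illustrative assumptions, NOT the authors'
# model. Intrinsic noise only: stronger feedback at a fixed mean lowers
# the protein noise (the optimum in the paper needs extrinsic noise too).
import numpy as np

rng = np.random.default_rng(0)

def simulate(k_max, K, gamma=1.0, hill=2, t_end=1000.0, burn_in=100.0):
    """Birth rate k_max/(1+(p/K)^hill), death rate gamma*p; returns (mean, var)."""
    p, t = 50, 0.0
    w_sum = w1 = w2 = 0.0                       # time-weighted moments of p
    while t < t_end:
        birth = k_max / (1.0 + (p / K) ** hill)
        death = gamma * p
        total = birth + death
        dwell = rng.exponential(1.0 / total)
        if t > burn_in:
            w_sum += dwell
            w1 += dwell * p
            w2 += dwell * p * p
        t += dwell
        p += 1 if rng.random() < birth / total else -1
    mean = w1 / w_sum
    return mean, w2 / w_sum - mean * mean

# Smaller K = stronger repression; rescale k_max so the mean stays ~50,
# mirroring the fixed-mean comparison described in the abstract.
for K in (5, 20, 80, 320):
    k_max = 50.0 * (1.0 + (50.0 / K) ** 2)
    mean, var = simulate(k_max, K)
    print(f"K={K:3d}  mean={mean:5.1f}  CV^2={var / mean ** 2:.4f}")
```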

8.
High-density SNP microarrays ("SNP chips") are a rapid, accurate and efficient method for genotyping several hundred thousand polymorphisms in large numbers of individuals. While SNP chips are routinely used in human genetics and in animal and plant breeding, they are less widely used in evolutionary and ecological research. In this article, we describe the development and application of a high-density Affymetrix Axiom chip with around 500,000 SNPs, designed for genomic studies of great tit (Parus major) populations. We demonstrate that the per-SNP genotype error rate is well below 1% and that the chip can also be used to identify structural or copy number variation. The chip is used to explore the genetic architecture of exploration behaviour (EB), a personality trait that has been widely studied in great tits and other species. No SNPs reached genome-wide significance, including at DRD4, a candidate gene; however, EB is heritable and appears to have a polygenic architecture. Researchers developing similar SNP chips may note: (i) SNPs previously typed on alternative platforms are more likely to convert to working assays; (ii) detecting SNPs with more than one pipeline, and in independent data sets, ensures a high proportion of working assays; (iii) allele-frequency ascertainment bias is minimized by performing SNP discovery in individuals from multiple populations; and (iv) samples with the lowest call rates tend also to have the greatest genotyping error rates.

9.
10.
We developed a rapid and simple method to identify single-nucleotide polymorphisms (SNPs) in the human mitochondrial tRNA genes. The method is based on a universal, functionalized, self-assembled monolayer, the XNA on Gold chip platform. A set of probes sharing a given allele-specific sequence, each with a single base substitution near the middle of the sequence, was immobilized on chips, and the chips were then hybridized with fluorescence-labeled reference and test targets produced by asymmetric polymerase chain reaction from patient DNA. The ratio of the hybridization signals from the reference and test targets was then calculated for each probe: a ratio above 3 indicates a wild-type sequence, and a ratio below 0.3 indicates a mutant sequence. We tested the sensitivity of the chip for known mutations in the tRNA(Leu(UUR)) and tRNA(Lys) genes and found that it can also discriminate multiple mutations and heteroplasmy, two typical features of human mitochondrial DNA. The XNA on Gold biochip method is a simple microarray method that can rapidly and reliably test any SNP in the mitochondrial genome or elsewhere, and it will be particularly useful for detecting SNPs associated with human diseases.
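The stated call rule reduces to a few lines of code. The following Python sketch applies the thresholds quoted in the abstract; treating intermediate ratios as indeterminate (possibly heteroplasmic) is our assumption, not a rule the authors state.

```python
# Minimal implementation of the call rule quoted in the abstract: a
# reference/test signal ratio above 3 reads as wild type, below 0.3 as
# mutant. The "indeterminate" branch for intermediate ratios is our
# assumption (possible heteroplasmy), not a rule stated by the authors.
def call_site(reference_signal: float, test_signal: float) -> str:
    ratio = reference_signal / test_signal
    if ratio > 3.0:
        return "wild-type"
    if ratio < 0.3:
        return "mutant"
    return "indeterminate (possible heteroplasmy)"

print(call_site(4200.0, 950.0))   # ratio ~4.4 -> wild-type
print(call_site(300.0, 2100.0))   # ratio ~0.14 -> mutant
```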

11.
Well-defined relationships between oligonucleotide properties and hybridization signal intensities (HSI) can aid chip design, data normalization and true biological knowledge discovery. We clarify these relationships using data from two microarray experiments comprising over three million probes on 48 high-density chips. We find that melting temperature (Tm) has the most significant effect on HSI, whereas length, for the long oligonucleotides studied, has very little effect. Analysis of positional effects using a linear model provides evidence that the protruding ends of probes contribute more to HSI than the tethered ends, which is further validated by specifically designed match-fragment sliding and extension experiments. The impact of sequence similarity (SeqS) on HSI is not significant in comparison with other oligonucleotide properties. Using regression and regression-tree analysis, we prioritize these oligonucleotide properties according to their effects on HSI, and we discuss the implications of our findings for the design of unbiased oligonucleotides. We propose that designing isothermal probes by varying their length is a viable strategy to reduce sequence bias, though imposing selection constraints on other oligonucleotide properties is also essential.
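The regression step is conceptually simple: model log HSI as a linear function of probe properties and compare effect sizes. Here is a self-contained Python sketch on synthetic data; the coefficients are invented to mirror the abstract's ranking (strong Tm effect, negligible length effect, weak SeqS effect) and carry no information about the real fitted values.

```python
# Illustrative regression of log HSI on probe properties, on synthetic
# data. The coefficients below are invented to mirror the abstract's
# ranking (strong Tm effect, tiny length effect, weak SeqS effect) and
# say nothing about the real fitted values.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
tm = rng.normal(76.0, 4.0, n)                    # melting temperature (deg C)
length = rng.integers(45, 71, n).astype(float)   # long-oligo lengths (nt)
seqs = rng.uniform(0.0, 0.6, n)                  # similarity to best off-target hit
log_hsi = 0.12 * tm + 0.002 * length + 0.30 * seqs + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), tm, length, seqs])
coef, *_ = np.linalg.lstsq(X, log_hsi, rcond=None)
for name, c in zip(("intercept", "Tm", "length", "SeqS"), coef):
    print(f"{name:9s} {c:+.4f}")                 # per-property effect on log HSI
```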

12.
DNA methylation plays an important role in disease etiology. The Illumina Infinium HumanMethylation450 (450K) BeadChip is a widely used platform in large-scale epidemiologic studies that can efficiently and simultaneously measure methylation levels at ∼480,000 CpG sites in the human genome across multiple study samples. Because the chip design incorporates two types of chemistry probes, data normalization or preprocessing is a critical step to consider before data analysis. Numerous methods and pipelines have been developed for this purpose, and some studies have evaluated them; however, validation studies have often been limited to a small number of CpG sites to reduce the variability in technical replicates. In this study, we measured methylation on a set of samples using both whole-genome bisulfite sequencing (WGBS) and 450K chips, and used the WGBS data as a gold standard of the true methylation state of cells to compare the performance of 8 normalization methods for 450K data on a genome-wide scale. Analyses of our dataset indicate that the most effective methods are peak-based correction (PBC) and quantile normalization plus β-mixture quantile normalization (QN.BMIQ). To our knowledge, this is the first study to systematically compare existing normalization methods for Illumina 450K data against WGBS data. Our results provide a benchmark reference for the analysis of DNA methylation chip data, particularly in white blood cells.
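Benchmarking against a gold standard comes down to scoring each method's normalized beta values against the WGBS betas over shared CpGs. The Python sketch below illustrates the scoring loop on synthetic data; the two "methods" are stand-ins, since the real candidates (PBC, BMIQ and others) are available in R packages (e.g., minfi or wateRmelon).

```python
# Illustrative scoring loop for benchmarking normalization methods
# against WGBS beta values. The data and both "methods" are synthetic
# stand-ins; real candidates (PBC, BMIQ, etc.) live in R packages such
# as wateRmelon and minfi.
import numpy as np

def score(chip_beta: np.ndarray, wgbs_beta: np.ndarray) -> dict:
    """Agreement between chip betas and WGBS betas over shared CpGs."""
    diff = chip_beta - wgbs_beta
    return {
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "pearson_r": float(np.corrcoef(chip_beta, wgbs_beta)[0, 1]),
    }

rng = np.random.default_rng(2)
wgbs = rng.beta(0.5, 0.5, 10_000)                # bimodal methylation levels
raw = np.clip(wgbs + rng.normal(0.0, 0.08, wgbs.size), 0.0, 1.0)
candidates = {"raw": raw, "toy_rescaled": np.clip(0.5 + 0.9 * (raw - 0.5), 0.0, 1.0)}
for name, beta in candidates.items():
    print(name, score(beta, wgbs))
```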

13.
Oligonucleotide microarrays, or oDNA chips, are effective decoding and analytical tools for genomic sequences and are useful for a broad range of applications. It is therefore desirable to have chip synthesis methods that are highly flexible in sequence design, of high quality, and generally adoptable. We report herein DNA microarray synthesis based on a flexible biochip method. Our method simply uses photogenerated acid (PGA) in solution to trigger deprotection of the 5′-OH group of conventional nucleotide phosphoramidite monomers (PGA-gated deprotection), with the rest of the reactions in the synthesis cycle the same as those used in routine oligonucleotide synthesis. The complete chip synthesis is accomplished on a regular DNA synthesizer coupled with a UV-VIS projection display unit that performs digital photolithography. Using this method, oDNA chips containing probes for newly discovered genes can be synthesized quickly and easily, at high yield, in a conventional laboratory setting. Furthermore, the PGA-gated chemistry should be applicable to microarray synthesis of a variety of combinatorial molecules, such as peptides and other organic molecules.

14.

Background

Genotype imputation from low-density (LD) to high-density single nucleotide polymorphism (SNP) chips is an important step before applying genomic selection, since denser chips tend to provide more reliable genomic predictions. Imputation methods rely partially on linkage disequilibrium between markers to infer unobserved genotypes. Bos indicus cattle (e.g., the Nelore breed) are generally characterized by lower levels of linkage disequilibrium between genetic markers at short distances compared to taurine breeds. Thus, it is important to evaluate imputation accuracy to better define which imputation method and chip are most appropriate for genomic applications in indicine breeds.

Methods

The accuracy of genotype imputation in Nelore cattle was evaluated using different LD chips, imputation software and sets of animals. Twelve commercial and customized LD chips with densities ranging from 7K to 75K were tested. Customized LD chips were designed virtually, taking into account minor allele frequency, linkage disequilibrium and distance between markers. The software programs FImpute and BEAGLE were used to impute genotypes. Of the 995 bulls and 1247 cows genotyped with the Illumina® BovineHD chip (HD), 793 sires composed the reference set; the remaining 202 younger sires and all the cows composed two separate validation sets, for which genotypes were masked except for the SNPs on the LD chip being tested.

Results

Imputation accuracy increased with the SNP density of the LD chip. However, the gain in accuracy with LD chips of more than 15K SNPs was relatively small, because accuracy was already high at that density. Commercial and customized LD chips of equivalent density gave similar results. FImpute outperformed BEAGLE for all LD chips and validation sets. Regardless of the imputation software used, accuracy tended to increase with the relatedness between imputed and reference animals, especially for the 7K chip.

Conclusions

If the Illumina® BovineHD is considered the target chip for genomic applications in the Nelore breed, cost-effectiveness can be improved by genotyping part of the animals with a chip containing around 15K informative SNPs and imputing the missing high-density genotypes with FImpute.

Electronic supplementary material

The online version of this article (doi:10.1186/s12711-014-0069-1) contains supplementary material, which is available to authorized users.
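The validation design (mask the HD genotypes down to the LD panel, impute, then score) is straightforward to express in code. The Python sketch below shows the scoring step on synthetic 0/1/2 genotype matrices; the arrays and the planted 3% error rate are placeholders for real FImpute or BEAGLE output, not results from the paper.

```python
# Sketch of the mask-impute-score validation step. Genotypes are coded
# 0/1/2; the matrices and the planted 3% error rate are placeholders
# for real FImpute/BEAGLE output, not results from the paper.
import numpy as np

def imputation_accuracy(true_geno, imputed_geno, masked):
    """Concordance rate over the masked (non-LD-chip) genotype calls."""
    return float((true_geno[masked] == imputed_geno[masked]).mean())

rng = np.random.default_rng(3)
true_geno = rng.integers(0, 3, size=(200, 5000))        # 200 animals x 5000 SNPs
masked = rng.random(true_geno.shape) < 0.8              # hide ~80% of HD genotypes
imputed = true_geno.copy()
errors = masked & (rng.random(true_geno.shape) < 0.03)  # pretend 3% of imputations fail
imputed[errors] = (imputed[errors] + 1) % 3
print(f"concordance on masked genotypes: {imputation_accuracy(true_geno, imputed, masked):.3f}")
```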

15.
Membrane proteins remain refractory to standard protein chip analysis: they are typically expressed at low densities in distinct subcellular compartments, and their biological activity can depend on assembly into macromolecular complexes in a specific lipid environment. We report here a real-time, label-free method for analyzing membrane proteins inserted in isolated native synaptic vesicles. Using surface plasmon resonance-based biomolecular interaction analysis (Biacore), we monitored organelle capture from minute quantities (1-10 μg) of 10,000 g brain supernatant. Immunological and morphological characterization indicated that pure, intact synaptic vesicles were immobilized on the sensor chips. Vesicle chips were stable for days, allowing repeated use with multiple analytes. This method provides an efficient way to characterize organelle membrane components in their native context. Organelle chips allow a broad range of measurements, including interactions of exogenous ligands with the organelle surface (kinetics, Kd) and protein profiling.

16.
Microfluidic chips have been widely used to probe the mechanical properties of cells, which are recognized as a promising label-free biomarker for some diseases. In our previous work (Ye et al., 2018), we studied the relationship between the transit time and the mechanical properties of a cell flowing through a microchannel with a single constriction, which potentially forms the basis of a microfluidic chip for measuring a cell's mechanical properties. Here, we investigate this chip design and examine its potential performance. We first develop the simultaneous dependence of the transit time on both the shear and bending moduli of a cell, and then examine the chip's sensitivity to these mechanical properties when single constrictions are serialized along the flow direction. After that, we study the effect of the flow velocity on the transit time and test the chip's ability to identify heterogeneous cells with different mechanical properties. The results show that the designed chip can identify heterogeneous cells even when only one unhealthy cell is included. Serializing the constrictions greatly increases the chip's sensitivity to the mechanical properties of cells. A higher flow velocity helps not only to raise the chip's throughput but also to provide more accurate transit-time measurements, because the cell deforms more symmetrically at high velocity.

17.
This study aimed to characterize the linguistic interference that occurs during speech-in-speech comprehension by combining offline and online measures: an intelligibility task (at a −5 dB signal-to-noise ratio, SNR) and two lexical decision tasks (at −5 dB and 0 dB SNR), all performed with spoken French target words. In these three experiments, we compared the masking effects of speech backgrounds (4-talker babble) produced in the target language (French) or in languages unknown to the listeners (Irish and Italian) with the masking effects of corresponding non-speech backgrounds (speech-derived fluctuating noise), which contained spectro-temporal information similar to babble but lacked linguistic information. At −5 dB SNR, both tasks revealed significantly divergent results between the unknown languages: Italian and French hindered French target-word identification to a similar extent, whereas Irish led to significantly better performance. By comparing performance with speech and with fluctuating-noise backgrounds, we evaluated the effect of each language. The intelligibility task showed a significant difference between babble and fluctuating noise for French, Irish and Italian, suggesting both acoustic and linguistic effects for each language. However, the lexical decision task, which reduces post-lexical interference, appeared to be more accurate, as it revealed a linguistic effect only for French. Thus, although French and Italian had equivalent masking effects on French word identification, the nature of their interference differed: the difference observed between the masking effects of Italian and Irish can be explained at an acoustic level but not at a linguistic level.

18.
Genome-wide association studies are revolutionizing the search for the genes underlying human complex diseases. The main decisions to be made at the design stage of these studies are the choice of commercial genotyping chip and the numbers of case and control samples to be genotyped. The most common way of comparing chips is a measure of coverage, but this fails to properly account for the effects of sample size, the genetic model of the disease, and linkage disequilibrium (LD) between SNPs. In this paper, we argue that the statistical power to detect a causative variant should be the major criterion in study design. Because of the complicated pattern of LD in the human genome, power cannot be calculated analytically and must instead be assessed by simulation. We describe in detail a method of simulating case-control samples at a set of linked SNPs that replicates the patterns of LD in human populations, and we use it to assess power for a comprehensive set of available genotyping chips. Our results allow us to compare the performance of the chips in detecting variants with different effect sizes and allele frequencies, to examine how power changes with sample size in different populations or when using multi-marker tags and genotype-imputation approaches, and to compare performance against a hypothetical chip containing every SNP in HapMap. A main conclusion of this study is that marked differences in genome coverage may not translate into appreciable differences in power and that, when budgetary considerations are taken into account, the most powerful design may not correspond to the chip with the highest coverage. We also show that genotype imputation can boost the power of many chips up to the level obtained from a hypothetical “complete” chip containing all the SNPs in HapMap. Our results have been encapsulated in an R software package that allows users to design future association studies, and our methods provide a framework with which new chip sets can be evaluated.
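Power-by-simulation for a single causal SNP takes only a few lines, as the bare-bones Python sketch below shows (the paper's own method additionally simulates LD-preserving haplotypes and is shipped as an R package). The multiplicative disease model, the genome-wide alpha of 5e-8, and all sample sizes here are illustrative assumptions.

```python
# Bare-bones power-by-simulation for one case-control SNP test. Unlike
# the paper's method, no LD-preserving haplotypes are simulated here;
# the disease model, alpha, and sample sizes are illustrative.
import numpy as np
from scipy import stats

def power(maf, odds_ratio, n_cases, n_controls, alpha=5e-8, n_sims=2000, seed=4):
    rng = np.random.default_rng(seed)
    p = np.array([(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2])  # HWE frequencies
    w = p * odds_ratio ** np.arange(3)           # multiplicative per-allele risk
    hits = 0
    for _ in range(n_sims):
        g_cases = rng.choice(3, size=n_cases, p=w / w.sum())
        g_controls = rng.choice(3, size=n_controls, p=p)
        # Allelic 1-df chi-square on the 2x2 allele-count table.
        table = np.array([[g_cases.sum(), 2 * n_cases - g_cases.sum()],
                          [g_controls.sum(), 2 * n_controls - g_controls.sum()]])
        pval = stats.chi2_contingency(table, correction=False)[1]
        hits += pval < alpha
    return hits / n_sims

print(power(maf=0.3, odds_ratio=1.3, n_cases=2000, n_controls=2000))
```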

19.
Piepho HP, Genetics 2005, 171(1):359-364
Heterosis is defined as the superiority of a hybrid cross over its two parents. Plant and animal breeders have long exploited heterosis, but its causes are as yet only partly understood. Recently, chip technology has opened up the opportunity to study heterosis at the gene-expression level. This article considers cDNA chip technology, which allows two genotypes to be assayed simultaneously on the same chip. Heterosis involves the response of at least three genotypes (two parents and their hybrid), so a chip or microarray constitutes an incomplete block, which raises a design problem specific to heterosis studies: how should genotype pairs be allocated to chips? We address this design problem for two types of heterosis, midparent heterosis and better-parent heterosis. The general picture emerging from our results is that most of the resources should be allocated to parent-hybrid pairs, while chips with parent-parent pairs or hybrid-reciprocal pairs should be used sparingly or not at all.

20.
It has been hypothesized that continuously releasing drug molecules into the tumor over an extended period may significantly improve chemotherapeutic efficacy by overcoming the physical transport limitations of conventional bolus treatment. In this paper, we present a generalized space- and time-dependent mathematical model of drug transport and drug-cell interactions to quantitatively formulate this hypothesis. Model parameters describe perfusion and tissue architecture (blood volume fraction and blood vessel radius), the diffusion penetration distance of the drug (a function of tissue compactness and drug uptake rates by tumor cells), and cell death rates (a function of the history of drug uptake). We performed preliminary testing and validation of the model using in vivo experiments with different drug delivery methods in a breast cancer mouse model. The experimental data demonstrated a 3-fold increase in response with nano-vectored versus free drug delivery, in excellent quantitative agreement with the model predictions. Our model results suggest that therapeutically targeting the blood volume fraction, e.g., through vascular normalization, would achieve a better outcome through enhanced drug delivery.

Author Summary

Cancer treatment efficacy can be significantly enhanced through the elution of drug from nano-carriers that temporarily reside in the tumor vasculature. Here we present a relatively simple yet powerful mathematical model that accounts for both spatial and temporal heterogeneity of drug dosing to help explain, examine, and prove this concept. We find that delivering systemic chemotherapy through a certain form of nano-carrier would have enhanced tumor kill by a factor of 2 to 4 over the standard therapy the patients actually received. We also find that targeting the blood volume fraction (a parameter of the model) through vascular normalization can achieve more effective drug delivery and tumor kill. Importantly, the model requires only a limited number of parameters, all of which can be readily assessed from standard clinical diagnostic measurements (e.g., histopathology and CT). This addresses an important challenge in current translational research and justifies further development of the model toward clinical translation.
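A back-of-the-envelope calculation shows why sustained release can beat a bolus of the same total dose once cell kill saturates at high drug concentration. The Python sketch below compares the two regimens with an Emax-type kill term; it is a deliberately simplified stand-in for the authors' space- and time-dependent model, and every rate constant is invented for illustration.

```python
# Toy comparison of a bolus versus slow elution of the same total dose,
# with an Emax-type (saturable) kill term so that high peaks waste drug.
# A deliberately simplified stand-in for the authors' model; every rate
# constant here is invented for illustration.
import numpy as np

t = np.linspace(0.0, 48.0, 4801)                  # hours
dt = t[1] - t[0]
dose, k_elim, c50 = 1.0, 0.5, 0.2                 # arbitrary units

bolus = dose * np.exp(-k_elim * t)                # instant input, first-order clearance

T_rel = 24.0                                      # carrier elutes the dose over 24 h
sustained = np.zeros_like(t)
for i in range(1, t.size):                        # Euler step for dC/dt = r(t) - k_elim*C
    r = dose / T_rel if t[i - 1] <= T_rel else 0.0
    sustained[i] = sustained[i - 1] + dt * (r - k_elim * sustained[i - 1])

def kill_index(conc):
    """Time integral of the saturable kill rate C/(C + c50)."""
    return float(np.sum(conc / (conc + c50)) * dt)

print(f"bolus     kill index: {kill_index(bolus):.2f}")      # ~3.6
print(f"sustained kill index: {kill_index(sustained):.2f}")  # ~6.9, roughly 2x the bolus
```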
