Similar Documents
 Found 20 similar documents (search time: 19 ms)
1.
DNA microarray experiments have become a widely used tool for studying gene expression. An important, but difficult, part of these experiments is deciding on the appropriate number of biological replicates to use. Often, researchers will want a number of replicates that give sufficient power to recognize regulated genes while controlling the false discovery rate (FDR) at an acceptable level. Recent advances in statistical methodology can now help to resolve this issue. Before using such methods it is helpful to understand the reasoning behind them. In this Research Focus article we explain, in an intuitive way, the effect sample size has on the FDR and power, and then briefly survey some recently proposed methods in this field of research and provide an example of use.
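The interplay of replicate number, power and FDR that this article explains can be illustrated with a small simulation (hypothetical settings: 2000 genes, 10% regulated with a fixed shift, gene-wise t-tests, Benjamini-Hochberg control; none of these numbers come from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n_reps, n_genes=2000, frac_de=0.1, effect=1.5, alpha=0.05):
    """Estimate power and realized FDR for a two-group experiment with
    n_reps biological replicates per group.  Illustrative model only:
    unit-variance Gaussian noise, a fixed shift for regulated genes."""
    n_de = int(n_genes * frac_de)
    is_de = np.zeros(n_genes, dtype=bool)
    is_de[:n_de] = True
    g1 = rng.normal(0.0, 1.0, (n_genes, n_reps))
    g2 = rng.normal(np.where(is_de, effect, 0.0)[:, None], 1.0,
                    (n_genes, n_reps))
    _, p = stats.ttest_ind(g1, g2, axis=1)
    # Benjamini-Hochberg step-up: largest k with p_(k) <= alpha * k / m
    order = np.argsort(p)
    ok = np.nonzero(p[order] * n_genes / np.arange(1, n_genes + 1) <= alpha)[0]
    rejected = np.zeros(n_genes, dtype=bool)
    if ok.size:
        rejected[order[: ok[-1] + 1]] = True
    power = rejected[is_de].mean()
    fdr = rejected[~is_de].sum() / max(rejected.sum(), 1)
    return power, fdr

for n in (2, 4, 8):
    power, fdr = simulate(n)
    print(f"{n} replicates: power={power:.2f}, realized FDR={fdr:.2f}")
```

Under this toy model, power rises sharply with the number of replicates while the BH procedure keeps the realized FDR near its nominal level, which is the qualitative behavior the article describes.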

2.
Increased phenotyping accuracy and throughput are necessary to improve our understanding of quantitative variation and to be able to deconstruct complex traits such as those involved in growth responses to the environment. Still, only a few facilities are known to handle individual plants of small stature for non‐destructive, real‐time phenotype acquisition from plants grown in precisely adjusted and variable experimental conditions. Here, we describe Phenoscope, a high‐throughput phenotyping platform that has the unique feature of continuously rotating 735 individual pots over a table. It automatically adjusts watering and is equipped with a zenithal imaging system to monitor rosette size and expansion rate during the vegetative stage, with automatic image analysis allowing manual correction. When applied to Arabidopsis thaliana, we show that rotating the pots strongly reduced micro‐environmental disparity: heterogeneity in evaporation was cut by a factor of 2.5 and the number of replicates needed to detect a specific mild genotypic effect was reduced by a factor of 3. In addition, by controlling a large proportion of the micro‐environmental variance, other tangible sources of variance become noticeable. Overall, Phenoscope makes it possible to perform large‐scale experiments that would not be possible or reproducible by hand. When applied to a typical quantitative trait loci (QTL) mapping experiment, we show that mapping power is more limited by genetic complexity than phenotyping accuracy. This will help to draw a more general picture as to how genetic diversity shapes phenotypic variation.

3.
A sensory panel is often used to profile the same type of product with the same set of attributes for many years. We are interested in characterizing the evolution of the performance of such a panel (and its panelists) over time. This article presents a methodology based on a mixed‐model approach that takes into account the evolution of both panel and panelist in the same model. At the panel level, linear and quadratic evolutions of the performance are tested. At the panelist level, the method allows detection of whether some panelists perform better than others, and whether this difference remains the same or evolves over time. This mixed‐model approach is followed by a graphical representation using a control chart method to identify occasional outliers. Data used to illustrate this methodology are eight sensory profiling data sets collected on ready‐made frozen meals between 1997 and 2001 (every 6 months). The performance index chosen as an example in this study is the individual repeatability measured by standard deviation over replicates.

4.
Conventional statistical methods for interpreting microarray data require large numbers of replicates in order to provide sufficient levels of sensitivity. We recently described a method for identifying differentially-expressed genes in one-channel microarray data [1]. Based on the idea that the variance structure of microarray data can itself be a reliable measure of noise, this method allows statistically sound interpretation of as few as two replicates per treatment condition. Unlike the one-channel array, the two-channel platform simultaneously compares gene expression in two RNA samples. This leads to covariation of the measured signals. Hence, by accounting for covariation in the variance model, we can significantly increase the power of the statistical test. We believe that this approach has the potential to overcome limitations of existing methods. We present here a novel approach for the analysis of microarray data that involves modeling the variance structure of paired expression data in the context of a Bayesian framework. We also describe a novel statistical test that can be used to identify differentially-expressed genes. This method, bivariate microarray analysis (BMA), demonstrates dramatically improved sensitivity over existing approaches. We show that with only two array replicates, it is possible to detect gene expression changes that are at best detected with six array replicates by other methods. Further, we show that combining results from BMA with Gene Ontology annotation yields biologically significant results in a ligand-treated macrophage cell system.

5.
6.
The analysis of glycoproteins in body fluids represents a central task in the study of vital processes. Herein, we assessed the combined use of Concanavalin A and Wheat Germ Agglutinin as ligands to fractionate and enrich glycoproteins from oviductal fluid (OF), which is a source of molecules involved in fertilization. First, the selectivity was corroborated by a gel‐based approach using glycoprotein staining and enzymatic deglycosylation. Nanoliquid chromatography‐tandem mass spectrometry (nLC‐ESI‐MS/MS) further allowed the reliable identification of 134 nonbound as well as 130 lectin‐bound OF proteins. Enrichment analysis revealed that 77% of the annotated proteins in the lectin‐bound fraction were known glycoproteins (p‐value [FDR] = 1.45E‐31). The low variance of the number of peptide spectrum matches for each protein within replicates indicated a consistent reproducibility of the whole workflow (median CV 17.3% for technical replicates and 20.7% for biological replicates). Taken together, this study highlights the applicability of a lectin‐based workflow for the comprehensive analysis of OF proteins and gives for the first time an insight into the broad glycoprotein content of OF.

7.
MOTIVATION: Due to advances in experimental technologies, such as microarray, mass spectrometry and nuclear magnetic resonance, it is feasible to obtain large-scale data sets, in which measurements for a large number of features can be simultaneously collected. However, the sample sizes of these data sets are usually small due to their relatively high costs, which leads to the issue of concordance among different data sets collected for the same study: features should have consistent behavior in different data sets. There is a lack of rigorous statistical methods for evaluating this concordance or discordance. METHODS: Based on a three-component normal-mixture model, we propose two likelihood ratio tests for evaluating the concordance and discordance between two large-scale data sets with two sample groups. The parameter estimation is achieved through the expectation-maximization (E-M) algorithm. A normal-distribution-quantile-based method is used for data transformation. RESULTS: To evaluate the proposed tests, we conducted some simulation studies, which suggested their satisfactory performances. As applications, the proposed tests were applied to three SELDI-MS data sets with replicates. One data set has replicates from different platforms and the other two have replicates from the same platform. We found that data generated by SELDI-MS showed satisfactory concordance between replicates from the same platform but unsatisfactory concordance between replicates from different platforms. AVAILABILITY: The R codes are freely available at http://home.gwu.edu/~ylai/research/Concordance.
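The estimation step described in METHODS — an EM fit of a normal-mixture model — can be sketched as below. This is a generic EM for a 1-D k-component mixture on simulated data, not the authors' code or their specific concordance parameterization; the returned log-likelihood is the ingredient a likelihood ratio test would compare between constrained and unconstrained fits.

```python
import numpy as np

def em_normal_mixture(x, k=3, n_iter=300):
    """Fit a k-component 1-D normal mixture by EM (quantile-based
    initialization).  Returns weights, means, sds, log-likelihood."""
    n = x.size
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread-out starts
    sigma = np.full(k, x.std() / k)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        dens = pi / (sigma * np.sqrt(2 * np.pi)) \
               * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of weights, means, sds
        nk = resp.sum(axis=0)
        pi, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        sigma = np.maximum(sigma, 1e-6)        # guard against collapse
    dens = pi / (sigma * np.sqrt(2 * np.pi)) \
           * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
    return pi, mu, sigma, np.log(dens.sum(axis=1)).sum()

# Toy data in the spirit of the model: down / unchanged / up components
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 300),
                    rng.normal(0, 1, 400),
                    rng.normal(3, 1, 300)])
pi, mu, sigma, loglik = em_normal_mixture(x, k=3)
print(np.sort(mu).round(2))
```

A concordance-style LRT would fit the mixture twice (with and without the cross-data-set constraints) and refer twice the log-likelihood difference to a chi-squared reference.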

8.
Variance stabilization is a step in the preprocessing of microarray data that can greatly benefit the performance of subsequent statistical modeling and inference. Due to the often limited number of technical replicates for Affymetrix and cDNA arrays, achieving variance stabilization can be difficult. Although the Illumina microarray platform provides a larger number of technical replicates on each array (usually over 30 randomly distributed beads per probe), these replicates have not been leveraged in the current log2 data transformation process. We devised a variance-stabilizing transformation (VST) method that takes advantage of the technical replicates available on an Illumina microarray. We have compared VST with log2 and variance-stabilizing normalization (VSN) by using the Kruglyak bead-level data (2006) and Barnes titration data (2005). The results of the Kruglyak data suggest that VST stabilizes variances of bead-replicates within an array. The results of the Barnes data show that VST can improve the detection of differentially expressed genes and reduce false-positive identifications. We conclude that although both VST and VSN are built upon the same model of measurement noise, VST stabilizes the variance better and more efficiently for the Illumina platform by leveraging the availability of a larger number of within-array replicates. The algorithms and Supplementary Data are included in the lumi package of Bioconductor, available at: www.bioconductor.org.

9.
Three sampling methods for estimating abundance and size of blue cod Parapercis colias were compared inside and outside Kapiti Marine Reserve, New Zealand (40° 49′ 31·77′′ S; 174° 55′ 02·87′′ E). Two baited methods, baited underwater video (BUV) and experimental angling (EA), were more efficient and had lower levels of estimate variation than diver‐based underwater visual census (UVC). The BUV and EA recorded more fish and of greater size ranges than UVC, and also had fewer zero count replicates. The BUV and EA methodologies revealed highly significant differences in abundance and size of fish between sites (reserve v. non‐reserve), whereas UVC revealed no such differences. These results indicate that BUV is likely to be the most accurate, cost‐effective and easy to use methodology for the surveying of carnivorous temperate reef fishes for future monitoring. It is noted, however, that new data acquired using the BUV methodology may need to be compared over a calibration period to data acquired using the UVC methodology to ensure that historical data sets derived from UVC still have validity and application for future monitoring activity.

10.
Formalin‐fixed paraffin‐embedded (FFPE) tissue is a rich source of clinically relevant material that can yield important translational biomarker discovery using proteomic analysis. Protocols for analyzing FFPE tissue by LC‐MS/MS exist, but standardization of procedures and critical analysis of data quality is limited. This study compared and characterized data obtained from FFPE tissue using two methods: a urea in‐solution digestion method (UISD) versus a commercially available Qproteome FFPE Tissue Kit method (Qkit). Each method was performed independently three times on serial sections of homogenous FFPE tissue to minimize pre‐analytical variations and analyzed with three technical replicates by LC‐MS/MS. Data were evaluated for reproducibility and physiochemical distribution, which highlighted differences in the ability of each method to identify proteins of different molecular weights and isoelectric points. Each method replicate resulted in a significant number of new protein identifications, and both methods identified significantly more proteins using three technical replicates as compared to only two. UISD was cheaper, required less time, and introduced significant protein modifications as compared to the Qkit method, which provided more precise and higher protein yields. These data highlight significant variability among method replicates and type of method used, despite minimizing pre‐analytical variability. Utilization of only one method or too few replicates (both method and technical) may limit the subset of proteomic information obtained.

11.
Molecular techniques have become an important tool to empirically assess feeding interactions. The increased usage of next‐generation sequencing approaches has stressed the need of fast DNA extraction that does not compromise DNA quality. Dietary samples here pose a particular challenge, as these demand high‐quality DNA extraction procedures for obtaining the minute quantities of short‐fragmented food DNA. Automatic high‐throughput procedures significantly decrease time and costs and allow for standardization of extracting total DNA. However, these approaches have not yet been evaluated for dietary samples. We tested the efficiency of an automatic DNA extraction platform and a traditional CTAB protocol, employing a variety of dietary samples including invertebrate whole‐body extracts as well as invertebrate and vertebrate gut content samples and feces. Extraction efficacy was quantified using the proportions of successful PCR amplifications of both total and prey DNA, and cost was estimated in terms of time and material expense. For extraction of total DNA, the automated platform performed better for both invertebrate and vertebrate samples. This was also true for prey detection in vertebrate samples. For the dietary analysis in invertebrates, there is still room for improvement when using the high‐throughput system for optimal DNA yields. Overall, the automated DNA extraction system turned out to be a promising alternative to labor‐intensive, low‐throughput manual extraction methods such as CTAB, opening up the opportunity for extensive use of this cost‐efficient and innovative methodology, at low contamination risk, in trophic ecology.

12.
For molecular insect identification, amplicon sequencing methods are recommended because they offer a cost‐effective approach for targeting small sets of informative genes from multiple samples. In this context, high‐throughput multilocus amplicon sequencing has been achieved using the MiSeq Illumina sequencing platform. However, this approach generates short gene fragments of <500 bp, which then have to be overlapped using bioinformatics to achieve longer sequence lengths. This increases the risk of generating chimeric sequences or leads to the formation of incomplete loci. Here, we propose a modified nested amplicon sequencing method for targeting multiple loci from pinned insect specimens using the MiSeq Illumina platform. The modification consists of using a three‐step nested PCR approach targeting near full‐length loci in the initial PCR and subsequently amplifying short fragments of between 300 and 350 bp for high‐throughput sequencing using Illumina chemistry. Using this method, we generated 407 sequences of three loci from 86% of all the specimens sequenced. Out of 103 pinned bee specimens of replicated species, 71% passed the 95% sequence similarity threshold between species replicates. This method worked best for pinned specimens aged between 0 and 5 years, with a limit of 10 years for pinned and 14 years for ethanol‐preserved specimens. Hence, our method overcomes some of the challenges of amplicon sequencing using short read next generation sequencing and improves the possibility of creating high‐quality multilocus barcodes from insect collections.

13.
Chen MH, Ibrahim JG, Lam P, Yu A, Zhang Y. Biometrics 2011, 67(3): 1163–1170
Summary: We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation‐based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
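A stripped-down, simulation-based flavor of Bayesian SSD for a noninferiority trial can be sketched as follows, replacing the paper's hierarchical and power-prior machinery with the simplest possible choices (flat analysis prior, point-mass sampling prior, normal endpoint; every numerical setting is illustrative, not the paper's):

```python
import numpy as np
from scipy.stats import norm

def ni_power(n, margin=1.0, true_diff=0.0, sigma=4.0, thresh=0.975,
             nsim=4000, seed=0):
    """Monte-Carlo sketch of simulation-based Bayesian SSD: the
    fraction of trials generated under the sampling prior (here a
    point mass at `true_diff`) in which the posterior probability of
    diff > -margin exceeds `thresh`.  With a flat analysis prior the
    posterior of the treatment difference is N(dhat, 2*sigma^2/n)."""
    rng = np.random.default_rng(seed)
    se = sigma * np.sqrt(2.0 / n)            # se of the estimated difference
    dhat = rng.normal(true_diff, se, nsim)   # simulated trial estimates
    post_prob = norm.sf(-margin, loc=dhat, scale=se)
    return (post_prob > thresh).mean()

# Scan per-arm sample sizes for the smallest n reaching a power target
for n in (100, 200, 400):
    print(n, round(ni_power(n), 3))
```

SSD then amounts to choosing the smallest n whose simulated "Bayesian power" clears the target (e.g. 0.8); type I error would be checked the same way with the sampling prior placed at the noninferiority boundary.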

14.
During biomanufacturing cell line development, the generation and screening for single‐cell derived subclones using methods that enable assurance of clonal derivation can be resource‐ and time‐intensive. High‐throughput miniaturization, automation, and analytic strategies are often employed to reduce such bottlenecks. The Beacon platform from Berkeley Lights offers a strategy to eliminate these limitations through culturing, manipulating, and characterizing cells on custom nanofluidic chips via software‐controlled operations. However, explicit demonstration of this technology to provide high assurance of a single cell progenitor has not been reported. Here, a methodology that utilizes the Beacon instrument to ensure high levels of clonality is described. It is demonstrated that the Beacon platform can efficiently generate production cell lines with a superior clonality data package, detailed tracking, and minimal resources. A stringent in‐process quality control strategy is established to enable rapid verification of clonal origin, and the workflow is validated using representative Chinese hamster ovary‐derived cell lines stably expressing either green or red fluorescence protein. Under these conditions, a >99% assurance of clonal origin is achieved, which is comparable to existing imaging‐coupled fluorescence‐activated cell sorting seeding methods.

15.
The main objective of this work was to develop and validate a robust and reliable “from‐benchtop‐to‐desktop” metabarcoding workflow to investigate the diet of invertebrate‐eaters. We applied our workflow to faecal DNA samples of an invertebrate‐eating fish species. A fragment of the cytochrome c oxidase I (COI) gene was amplified by combining two minibarcoding primer sets to maximize the taxonomic coverage. Amplicons were sequenced by an Illumina MiSeq platform. We developed a filtering approach based on a series of nonarbitrary thresholds established from control samples and from molecular replicates to address the elimination of cross‐contamination, PCR/sequencing errors and mistagging artefacts. This resulted in a conservative and informative metabarcoding data set. We developed a taxonomic assignment procedure that combines different approaches and that allowed the identification of ~75% of invertebrate COI variants to the species level. Moreover, based on the diversity of the variants, we introduced a semiquantitative statistic in our diet study, the minimum number of individuals, which is based on the number of distinct variants in each sample. The metabarcoding approach described in this article may guide future diet studies that aim to produce robust data sets associated with a fine and accurate identification of prey items.
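The minimum-number-of-individuals statistic, as defined above (the count of distinct sequence variants per sample), can be computed directly once reads are assigned; the record layout and all names below are hypothetical, not the authors' pipeline:

```python
from collections import defaultdict

def minimum_number_of_individuals(assignments):
    """`assignments`: iterable of (sample, taxon, variant) records.
    MNI per (sample, taxon) = number of distinct sequence variants,
    each distinct variant implying at least one prey individual
    (sketched directly from the statistic's definition)."""
    variants = defaultdict(set)
    for sample, taxon, variant in assignments:
        variants[(sample, taxon)].add(variant)
    return {key: len(v) for key, v in variants.items()}

# Hypothetical filtered read assignments
records = [("fish1", "Chironomidae", "v1"),
           ("fish1", "Chironomidae", "v2"),
           ("fish1", "Chironomidae", "v1"),   # duplicate read, not counted
           ("fish1", "Baetidae", "v7"),
           ("fish2", "Chironomidae", "v3")]
mni_result = minimum_number_of_individuals(records)
print(mni_result)
```

Because duplicates collapse into a set, read depth does not inflate the count; only variant diversity does, which is what makes the statistic semiquantitative.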

16.
Many studies in molecular ecology rely upon the genotyping of large numbers of low‐quantity DNA extracts derived from noninvasive or museum specimens. To overcome low amplification success rates and avoid genotyping errors such as allelic dropout and false alleles, multiple polymerase chain reaction (PCR) replicates for each sample are typically used. Recently, two‐step multiplex procedures have been introduced which drastically increase the success rate and efficiency of genotyping. However, controversy still exists concerning the amount of replication needed for suitable control of error. Here we describe the use of a two‐step multiplex PCR procedure that allows rapid genotyping using at least 19 different microsatellite loci. We applied this approach to quantified amounts of noninvasive DNAs from western chimpanzee, western gorilla, mountain gorilla and black and white colobus faecal samples, as well as to DNA from ~100‐year‐old gorilla teeth from museums. Analysis of over 45 000 PCRs revealed average success rates of > 90% using faecal DNAs and 74% using museum specimen DNAs. Average allelic dropout rates were substantially reduced compared to those obtained using conventional singleplex PCR protocols, and reliable genotyping using low (< 25 pg) amounts of template DNA was possible. However, four to five replicates of apparently homozygous results are needed to avoid allelic dropout when using the lowest concentration DNAs (< 50 pg/reaction), suggesting that use of protocols allowing routine acceptance of homozygous genotypes after as few as three replicates may lead to unanticipated errors when applied to low‐concentration DNAs.
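The replication logic behind the four-to-five-replicate recommendation can be made concrete with a back-of-the-envelope model (a simple symmetric dropout model, not the multiple-tubes formula of any particular paper): if each PCR of a true heterozygote independently shows only one of the two alleles with total probability d, then k replicates that all agree on a single allele occur with probability 2(d/2)^k.

```python
def false_homozygote_prob(dropout, k):
    """Probability that a true heterozygote is scored homozygous after
    k replicates all showing the same single allele, assuming each PCR
    independently drops one of the two alleles with total probability
    `dropout` (symmetric, back-of-the-envelope model)."""
    return 2.0 * (dropout / 2.0) ** k

for d in (0.1, 0.3, 0.6):          # 0.6 ~ very low-template DNA
    print(d, [f"{false_homozygote_prob(d, k):.1e}" for k in (2, 3, 4, 5)])
```

Under this toy model, at a high per-PCR dropout rate (d = 0.6) three concordant replicates still leave a misclassification probability of about 5%, while five replicates push it below 1%, consistent with the abstract's caution about accepting homozygotes after only three replicates at low DNA concentrations.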

17.
In vitro models of the human colon have been used extensively in understanding the human gut microbiome (GM) and evaluating how internal and external factors affect the residing bacterial populations. Such models have been shown to be highly predictive of in vivo outcomes and have a number of advantages over animal models. The complexity required by in vitro models to closely mimic the physiology of the colon poses practical limits on their scalability. The scalable Mini Gut (MiGut) platform presented in this paper allows considerable expansion of model replicates and enables complex study design, without compromising on in vivo reflectiveness as is often the case with other model systems. MiGut has been benchmarked against a validated gut model in a demanding 9-week study. MiGut showed excellent repeatability between model replicates and results were consistent with those of the benchmark system. The novel technology presented in this paper makes it conceivable that tens of models could be run simultaneously, allowing complex microbiome-xenobiotic interactions to be explored in far greater detail, with minimal added resources or complexity. This platform expands the capacity to generate clinically relevant data to support our understanding of the cause-effect relationships that govern the GM.

18.
Chirality 2017, 29(11): 684–707
S‐(+)‐Methyl 2‐(2‐chlorophenyl)‐2‐(6,7‐dihydrothieno[3,2‐c]pyridin‐5(4H)‐yl)acetate, also known as (S)‐clopidogrel, is marketed under the trade names Plavix and Iscover. It is a potent thienopyridine‐class antithrombotic and antiplatelet drug (antiaggregant). Of the two stereoisomers of clopidogrel, only the S‐enantiomer is pharmaceutically useful: the R‐enantiomer shows no antithrombotic activity and causes convulsions in animal experiments. Worldwide sales of Plavix amounted to $6.4 billion yearly, ranking second worldwide. The increased demand for (S)‐clopidogrel has prompted the synthetic community to devise facile synthetic approaches. This review aims to summarize the synthetic methods for (S)‐clopidogrel reported in the literature and discusses the pros and cons of each synthetic methodology, which would be beneficial to the scientific community for further developments in synthetic methodologies for (S)‐clopidogrel. In addition, compiling the literature‐reported synthetic strategies for (S)‐clopidogrel on one platform is advantageous, supportive, and crucial for the synthetic community in electing the best synthetic methodology for (S)‐clopidogrel and creating new synthesis ideas.

19.
Conventional fuel cells are based on rigid electrodes, limiting their applications in wearable and implantable electronics. Here, it is demonstrated that enokitake‐like vertically‐aligned standing gold nanowires (v‐AuNWs) can also serve as a powerful platform for stretchable fuel cells, using ethanol as a model system. Unlike traditional fuel cell electrodes, the v‐AuNWs have a “Janus morphology” on both sides of the film and are also highly stretchable. The comparative studies demonstrate that tail‐side‐exposed v‐AuNW stretchable electrodes outperform the head‐side‐exposed v‐AuNWs toward the electro‐oxidation of ethanol due to the direct exposure of high‐surface‐area nanowires to the fuels. Therefore, a stretchable fuel cell is fabricated utilizing tail‐side‐based interdigitated electrodes, where v‐AuNWs and Pt‐black‐modified v‐AuNWs serve as the anode and cathode, respectively. The as‐prepared stretchable fuel cell exhibits good overall performance, including high power density, current density, open‐circuit voltage, stretchability, and durability. Most importantly, a wearable fuel cell is also achieved by integrating tattoo‐like interdigitated electrodes with a thin layer of sponge as a fuel container, exhibiting good performance under various deformations (compression, stretching, and twisting). Such attractive performance in conjunction with skin‐like in‐plane design indicates its great potential to power the next generation of wearable and implantable devices.

20.
A critical component of an in vitro production process for baculovirus biopesticides is a growth medium that is efficacious, robust, and inexpensive. An in‐house low‐cost serum‐free medium, VPM3, has been shown to be very promising in supporting Helicoverpa armigera nucleopolyhedrovirus (HaSNPV) production in H. zea insect cell suspension cultures, for use as a biopesticide against the Heliothine pest complex. However, VPM3 is composed of a significant number of undefined components, including five different protein hydrolysates, which introduce a challenging lot‐to‐lot variability to the production process. In this study, an intensive statistical optimization routine was employed to reduce the number of protein hydrolysates in VPM3 medium. Nearly 300 runs (including replicates) were conducted with great efficiency by using 50 mL TubeSpin® bioreactors to propagate insect cell suspension cultures. Fractional factorial experiments were first used to determine the most important of the five default protein hydrolysates, and to screen for seven potential substitutes for the default meat peptone, Primatone RL. Validation studies informed by the screening tests showed that promising alternative media could be formulated based on just two protein hydrolysates, in particular the YST‐AMP (Yeast Extract and Amyl Meat Peptone) and YST‐POT (Yeast Extract and Lucratone Potato Peptone) combinations. The YST‐AMP (meat‐based) and YST‐POT (meat‐free) variants of VPM3 were optimized using response surface methodology, and were shown to be just as good as the default VPM3 and the commercial Sf‐900 II media in supporting baculovirus yields, hence providing a means toward a more reproducible and scalable production process for HaSNPV biopesticides. © 2012 American Institute of Chemical Engineers Biotechnol. Prog., 2012
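The screening step described above rests on fractional factorial designs, which halve (or better) the number of runs by aliasing high-order interactions. A 2^(5-1) design for five hydrolysate factors (coded −1 = omit, +1 = include) can be generated by setting the fifth factor to the product of the other four; the factor names are illustrative, not the study's actual formulation variables.

```python
import itertools
import numpy as np

# Full 2^4 factorial in the first four factors, then alias the
# fifth to their product (E = ABCD), giving a half fraction.
factors = ["Primatone", "YST", "AMP", "POT", "Hydrolysate5"]
base = np.array(list(itertools.product([-1, 1], repeat=4)))
design = np.column_stack([base, base.prod(axis=1)])
print(design.shape)   # 16 runs instead of the 32 of a full factorial
```

Because E = ABCD, the defining relation is I = ABCDE, so main effects are aliased only with four-factor interactions: typically a safe trade for a first screening pass before response-surface optimization.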
