10 results found (9 subscription full text, 1 free). By year: 2021 (1), 2015 (2), 2014 (2), 2011 (1), 2010 (1), 2007 (1), 2005 (1), 1999 (1).
1.
2.
Ecological studies require key decisions regarding the appropriate size and number of sampling units. No methods currently exist to measure precision for multivariate assemblage data when dissimilarity‐based analyses are intended to follow. Here, we propose a pseudo multivariate dissimilarity‐based standard error (MultSE) as a useful quantity for assessing sample‐size adequacy in studies of ecological communities. Based on sums of squared dissimilarities, MultSE measures variability in the position of the centroid in the space of a chosen dissimilarity measure under repeated sampling for a given sample size. We describe a novel double resampling method to quantify uncertainty in MultSE values with increasing sample size. For more complex designs, values of MultSE can be calculated from the pseudo residual mean square of a PERMANOVA model, with the double resampling done within appropriate cells in the design. R code functions for implementing these techniques, along with ecological examples, are provided.
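The MultSE quantity described in this abstract can be sketched for the simple one-sample case. This is an illustrative reconstruction from the abstract's definitions (sum of squared dissimilarities, pseudo variance, standard error of the centroid), not the authors' published R code, and the exact formulas in the paper may differ.

```python
import itertools
import math

def mult_se(dissim, n):
    """Pseudo multivariate dissimilarity-based standard error (MultSE) for
    n samples, given their pairwise dissimilarity matrix `dissim`.

    SS is the sum of squared pairwise dissimilarities divided by n (the
    usual PERMANOVA identity), V = SS/(n-1) is a pseudo variance, and
    MultSE = sqrt(V/n) measures variability in the position of the
    centroid under repeated sampling at sample size n."""
    ss = sum(dissim[i][j] ** 2
             for i, j in itertools.combinations(range(n), 2)) / n
    v = ss / (n - 1)
    return math.sqrt(v / n)

# Toy example: 4 samples whose pairwise dissimilarities are all 0.5.
D = [[0.0 if i == j else 0.5 for j in range(4)] for i in range(4)]
print(round(mult_se(D, 4), 4))
```

For equal pairwise dissimilarities d among n samples this reduces to d / sqrt(2n), which makes the toy case easy to check by hand.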
3.
Levin Y. Proteomics 2011, 11(12): 2565–2567
Designing an experiment for quantitative proteomic analysis is not a trivial task. One of the key factors influencing the success of such studies is the number of biological replicates included in the analysis. This, along with the measured variation, will determine the statistical power of the analysis. Presented is a simple yet powerful analysis to determine the appropriate sample size required for reliable and reproducible results, based on the total variation (technical and biological). This approach can also be applied retrospectively for the interpretation of results, as it takes into account both significance (p value) and quantitative difference (fold change) of the results.
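A power calculation of the kind the abstract describes can be sketched with the standard two-sample normal approximation, using the total coefficient of variation and a target fold change. This is a generic textbook formula, not necessarily the exact procedure in Levin's paper.

```python
import math
from statistics import NormalDist

def replicates_needed(cv_total, fold_change, alpha=0.05, power=0.8):
    """Approximate biological replicates per group for detecting a given
    fold change in a two-group comparison.

    cv_total    : total coefficient of variation (technical + biological)
    fold_change : smallest fold change to detect reliably
    """
    z = NormalDist()
    sigma = math.sqrt(math.log(1 + cv_total ** 2))  # SD on the log scale
    delta = math.log(fold_change)                   # effect size, same scale
    za = z.inv_cdf(1 - alpha / 2)                   # two-sided significance
    zb = z.inv_cdf(power)                           # desired power
    return math.ceil(2 * ((za + zb) * sigma / delta) ** 2)

# e.g. 25% total CV, 1.5-fold change, alpha = 0.05, power = 0.8
print(replicates_needed(0.25, 1.5))
```

As expected, larger fold changes or lower total variation require fewer replicates.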
4.
Microarray analysis makes it possible to determine the relative expression of thousands of genes simultaneously. It has gained popularity at a rapid rate, but many caveats remain. In an effort to establish reliable microarray protocols for sweetpotato [Ipomoea batatas (L.) Lam.], we compared the effect of replication number and image analysis software with results obtained by quantitative real-time PCR (Q-RT-PCR). Sweetpotato storage root development is the most economically important process in sweetpotato. In order to identify genes that may play a role in this process, RNA for microarray analysis was extracted from sweetpotato fibrous and storage roots. Four data sets, Spot4, Spot6, Finder4 and Finder6, were created using 4 or 6 replications, with either UCSF Spot or TIGR Spotfinder used as the image analysis software for spot detection and quantification. The ability of these methods to identify significant differential expression between treatments was investigated. The data sets with 6 replications were better at identifying genes with significant differential expression than those with 4 replications. Furthermore, when using 6 replicates, UCSF Spot was superior to TIGR Spotfinder in identifying differentially expressed genes (18 out of 19) based on Q-RT-PCR. Our study shows the importance of proper replication number and image analysis for microarray studies.
5.
6.
7.
Restriction site‐associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single‐nucleotide polymorphisms. As an empirical example, we use a double‐digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high‐altitude mountains in Mexico.
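Quantifying genotyping error from sample replicates, as described above, reduces at the locus level to counting mismatched calls at loci genotyped in both copies of a replicate pair. A minimal sketch of that idea (the paper also treats allele- and SNP-level rates, which are not shown here):

```python
def genotype_error_rate(rep_a, rep_b):
    """Locus-level genotyping error between two replicate genotype calls.

    rep_a, rep_b : lists of genotype calls at the same loci, with None
                   marking a missing call. Only loci called in both
                   replicates are compared; any mismatch is one error,
                   since replicates are expected to be identical."""
    compared = [(a, b) for a, b in zip(rep_a, rep_b)
                if a is not None and b is not None]
    if not compared:
        raise ValueError("no loci called in both replicates")
    mismatches = sum(a != b for a, b in compared)
    return mismatches / len(compared)

# One mismatch among four loci called in both replicates:
rep_a = ["AA", "AT", None, "TT", "GG"]
rep_b = ["AA", "AA", "GG", "TT", "GG"]
print(genotype_error_rate(rep_a, rep_b))
```

In an assembly-parameter sweep, this rate could be computed per parameter set and minimized jointly with the number of informative loci retained.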
8.
Environmental DNA (eDNA) metabarcoding is an increasingly popular tool for measuring and cataloguing biodiversity. Because the environments and substrates in which DNA is preserved differ considerably, eDNA research often requires bespoke approaches to generating eDNA data. Here, we explore how two experimental choices in eDNA study design—the number of PCR replicates and the depth of sequencing of PCR replicates—influence the composition and consistency of taxa recovered from eDNA extracts. We perform 24 PCR replicates from each of six soil samples using two of the most common metabarcodes for Fungi and Viridiplantae (ITS1 and ITS2), and sequence each replicate to an average depth of ~84,000 reads. We find that PCR replicates are broadly consistent in composition and relative abundance of dominant taxa, but that low abundance taxa are often unique to one or a few PCR replicates. Taxa observed in one out of 24 PCR replicates make up 21–29% of the total taxa detected. We also observe that sequencing depth or rarefaction influences alpha diversity and beta diversity estimates. Read sampling depth influences local contribution to beta diversity, placement in ordinations, and beta dispersion in ordinations. Our results suggest that, because common taxa drive some alpha diversity estimates, few PCR replicates and low read sampling depths may be sufficient for many biological applications of eDNA metabarcoding. However, because rare taxa are recovered stochastically, eDNA metabarcoding may never fully recover the true amplifiable alpha diversity in an eDNA extract. Rare taxa drive PCR replicate outliers of alpha and beta diversity and lead to dispersion differences at different read sampling depths. We conclude that researchers should consider the complexity and unevenness of a community when choosing analytical approaches, read sampling depths, and filtering thresholds to arrive at stable estimates.
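The replicate-consistency summary above (the fraction of taxa observed in exactly one PCR replicate) is straightforward to compute from per-replicate taxon sets. A hypothetical sketch, not the authors' analysis pipeline:

```python
from collections import Counter

def singleton_fraction(replicates):
    """replicates: list of sets, each the taxa detected in one PCR replicate.

    Returns (total distinct taxa across replicates,
             fraction of taxa detected in exactly one replicate)."""
    counts = Counter(taxon for rep in replicates for taxon in rep)
    total = len(counts)
    singletons = sum(1 for c in counts.values() if c == 1)
    return total, singletons / total

# Dominant taxa recur across replicates; rare taxa show up in only one.
reps = [{"a", "b", "c"}, {"a", "b", "d"}, {"a", "b"}]
print(singleton_fraction(reps))
```

Applied to the study's 24 replicates per soil sample, this kind of summary would yield the 21–29% singleton figure reported in the abstract.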
9.
There are many sources of error on the path from field sample acquisition to subsample analysis. This paper examines one potential source, the subsampling of a processed field sample. Five archived ground field samples were subsampled to determine the optimal number of increments to construct a 10-g subsample. Bulk samples ranged from 338 g to 2150 g. The analytes were energetic compounds: crystalline, easy-to-grind explosives and difficult-to-grind propellants in a nitrocellulose matrix. A two-phase study was conducted with moderately high concentration samples and low concentration samples of each type of analyte. All samples were ground with a puck mill according to EPA method 8330B and analyzed on liquid chromatography instrumentation. Up to 40 increments were used to build each subsample, and seven replicates were executed for each test. Results demonstrate that for a well-ground and mixed sample, a single 10-g subsample is sufficient. For triplicate subsamples, however, 20 to 40 increments will give a result much closer to the concentration of the bulk sample. To minimize overall error due to incomplete mixing, improper grinding, or very low concentrations, we recommend about 30 increments be taken over the complete sample to construct the subsample.
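The effect of increment count on subsampling error can be illustrated with a toy simulation: pooling more increments from a heterogeneous bulk sample shrinks the spread of subsample concentrations roughly as 1/sqrt(n). This illustrates the statistical principle only; it is not the paper's laboratory procedure.

```python
import random
import statistics

def subsample_concentrations(bulk, n_increments, n_reps, seed=0):
    """Simulate n_reps subsamples, each pooled from n_increments equal-mass
    increments drawn at random positions in the bulk sample (`bulk` is a
    list of local analyte concentrations). Returns the pooled
    concentration of each simulated subsample."""
    rng = random.Random(seed)
    return [statistics.fmean(rng.choice(bulk) for _ in range(n_increments))
            for _ in range(n_reps)]

# A poorly mixed bulk sample: half the material carries all the analyte.
bulk = [0.0] * 50 + [10.0] * 50
spread_1 = statistics.pstdev(subsample_concentrations(bulk, 1, 200))
spread_30 = statistics.pstdev(subsample_concentrations(bulk, 30, 200))
print(spread_1, spread_30)  # 30-increment subsamples cluster far tighter
```

The single-increment subsamples scatter over the full range of local concentrations, while 30-increment subsamples track the bulk mean, matching the paper's recommendation of about 30 increments.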
10.
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号