Similar literature
20 similar records found.
1.
Conservation and population genetic studies are sometimes hampered by insufficient quantities of high quality DNA. One potential way to overcome this problem is through the use of whole genome amplification (WGA) kits. We performed rolling circle WGA on DNA obtained from matched hair and tissue samples of North American red squirrels (Tamiasciurus hudsonicus). Following polymerase chain reaction (PCR) at four microsatellite loci, we compared genotyping success for DNA from different source tissues, both pre- and post-WGA. Genotypes obtained with tissue were robust, whether or not DNA had been subjected to WGA. DNA extracted from hair produced results that were largely concordant with matched tissue samples, although amplification success was reduced and some allelic dropout was observed. WGA of hair samples resulted in a low genotyping success rate and an unacceptably high rate of allelic dropout and genotyping error. The problem was not rectified by conducting PCR of WGA hair samples in triplicate. Therefore, we conclude that WGA is only an effective method of enhancing template DNA quantity when the initial sample is from high-yield material.

2.
Summary. Noninvasive DNA sampling has advantages for identifying animals in applications, such as mark–recapture modeling, that require unique identification of the animals in samples. Although it is possible to generate large amounts of data from noninvasive sources of DNA, a challenge is overcoming genotyping errors that can lead to incorrect identification of individuals. A major source of error is allelic dropout, which is failure of DNA amplification at one or more loci. This has the effect of heterozygous individuals being scored as homozygotes at those loci, as only one allele is detected. If errors go undetected and the genotypes are naively used in mark–recapture models, significant overestimates of population size can occur. To avoid this, it is common to reject low-quality samples, but this may lead to the elimination of large amounts of data. It is preferable to retain these low-quality samples, as they still contain usable information in the form of partial genotypes. Rather than trying to minimize error or discarding error-prone samples, we model dropout in our analysis. We describe a method based on data augmentation that allows us to model data from samples that include uncertain genotypes. Application is illustrated using data from the European badger (Meles meles).
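The practical consequence described above, allelic dropout inflating the apparent number of individuals in noninvasive samples, can be illustrated with a small simulation. The sketch below is not the authors' data-augmentation model; it only shows how false homozygotes make repeated samples of the same animals look like extra individuals. All parameter values are illustrative assumptions.

import random

random.seed(1)

N_INDIVIDUALS = 50      # true number of animals sampled
SAMPLES_PER_IND = 3     # repeated noninvasive samples per animal
N_LOCI = 6
N_ALLELES = 8
DROPOUT = 0.2           # per-locus probability that a heterozygote loses one allele

# assign true multilocus genotypes (alleles coded as integers)
truth = [
    tuple(tuple(sorted(random.choices(range(N_ALLELES), k=2))) for _ in range(N_LOCI))
    for _ in range(N_INDIVIDUALS)
]

def observe(genotype):
    """Return an observed genotype in which heterozygous loci may drop one allele."""
    obs = []
    for a, b in genotype:
        if a != b and random.random() < DROPOUT:
            kept = random.choice((a, b))
            obs.append((kept, kept))     # scored as a false homozygote
        else:
            obs.append((a, b))
    return tuple(obs)

observed_genotypes = {observe(g) for g in truth for _ in range(SAMPLES_PER_IND)}
print(f"true individuals: {N_INDIVIDUALS}, apparent individuals: {len(observed_genotypes)}")

Because every false homozygote looks like a new multilocus genotype, the apparent number of individuals exceeds the true number, which is exactly why naive use of such genotypes inflates mark–recapture estimates.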

3.
4.
Improvements in the determination of individual genotypes from samples with low DNA quantity and quality are of prime importance in molecular ecology and conservation for reliable genetic identification of individuals (molecular tagging using microsatellite loci). Errors (e.g. allelic dropout and false alleles) arising during sample genotyping must therefore be monitored and eliminated as far as possible. The multiple-tubes approach is a very effective but costly and time-consuming solution. In this paper, we present simulation software that allows evaluation of the effect of genotyping errors on the genetic identification of individuals and of the effectiveness of a multiple-tubes approach in correcting these errors.
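As a rough illustration of what such a simulation does (this is not the published software), the toy script below applies assumed per-PCR dropout and false-allele rates to a single heterozygous locus and shows how the proportion of correct consensus genotypes rises with the number of replicate PCRs.

import random

random.seed(2)

DROPOUT = 0.2          # assumed probability a heterozygote loses one allele per PCR
FALSE_ALLELE = 0.05    # assumed probability a spurious allele is scored per PCR
TRUE_GENOTYPE = ("A", "B")   # one heterozygous microsatellite locus

def one_pcr():
    alleles = set(TRUE_GENOTYPE)
    if random.random() < DROPOUT:
        alleles.discard(random.choice(TRUE_GENOTYPE))
    if random.random() < FALSE_ALLELE:
        alleles.add("X")                 # artefactual allele
    return alleles

def consensus(n_replicates, min_obs=2):
    """Accept an allele only if it appears in at least `min_obs` replicate PCRs."""
    counts = {}
    for _ in range(n_replicates):
        for allele in one_pcr():
            counts[allele] = counts.get(allele, 0) + 1
    return {a for a, c in counts.items() if c >= min_obs}

TRIALS = 10_000
for r in (2, 4, 6, 8):
    correct = sum(consensus(r) == set(TRUE_GENOTYPE) for _ in range(TRIALS))
    print(f"{r} replicate PCRs: {correct / TRIALS:.1%} correct consensus genotypes")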

5.
Objective: Identifying client factors that predict dropout is critical for the development of effective weight-loss programs. Although demographic predictors are studied, there are few consistent findings. The purpose of this study was to identify predictors of dropout in a large clinic-based weight-loss program using readily attainable demographic variables. Research Methods and Procedures: All 866 weight-loss patients in a clinic-based weight-loss program enrolled during 1998 to 1999 were followed. Attrition and retention rates were measured at 8 and 16 weeks. Six variables (sex, race, marital status, age, BMI, and treatment protocol) were evaluated using bivariate and multivariable statistics for relative association with dropout. Results: The overall attrition rate for the 16-week program was 31%. The retention rate was 69%. Significant risk for dropout, measured as bivariate relative risk (95% confidence interval), was found among patients who were: females, 1.32 (1.01 to 1.73); divorced, 1.54 (1.13 to 2.09); African Americans, 1.68 (1.26 to 2.23); age < 40, 1.66 (1.27 to 2.18); and ages 40 to 50, 1.33 (1.01 to 1.76). There were no significant differences in retention rates by BMI group or program protocol. After logistic regression analysis to control for all variables, age < 50 years was the only variable significantly associated with dropout [odds ratio = 1.39 (1.02 to 1.90)]. Discussion: Multivariable modeling was helpful for prioritizing risk factors for program dropout. These findings have important implications for improving weight-loss program effectiveness and reducing attrition. By knowing the groups at risk for dropout, we can improve or target program treatments to these populations.
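The bivariate relative risks quoted above follow the standard risk-ratio calculation with a log-normal confidence interval. A minimal sketch is given below; the counts in the example are hypothetical and are not the study's raw data.

import math

def relative_risk(dropout_exposed, n_exposed, dropout_unexposed, n_unexposed, z=1.96):
    """Risk ratio with a 95% CI on the log scale."""
    r1 = dropout_exposed / n_exposed
    r0 = dropout_unexposed / n_unexposed
    rr = r1 / r0
    # standard error of log(RR)
    se = math.sqrt(1 / dropout_exposed - 1 / n_exposed
                   + 1 / dropout_unexposed - 1 / n_unexposed)
    lo, hi = math.exp(math.log(rr) - z * se), math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical example: 120 of 300 exposed patients drop out vs 90 of 350 unexposed
rr, lo, hi = relative_risk(120, 300, 90, 350)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")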

6.
Non-invasive DNA genotyping using hair samples has become a common method in population surveys of Asiatic black bears (Ursus thibetanus) in Japan; however, the accuracy of the genotyping data has rarely been discussed in empirical studies. Therefore, we conducted a large-scale pilot study to examine genotyping accuracy and sought an efficient way of error-checking hair-trapping data. We collected 2,067 hair samples, successfully determined the genotypes of 1,245 samples, and identified 295 individuals. The genotyping data were further divided into 3 subsets of data according to the number of hairs used for DNA extraction in each sample (1–4, 5–9, and ≥10 hairs), and the error rates of allelic dropout and false alleles were estimated for each subset using a maximum likelihood method. The genotyping error rates in the samples with ≥10 hairs were found to be lower than those in the samples with 1–4 and 5–9 hairs. The presence of erroneous genotypes among the identified individuals was further checked using a post hoc goodness-of-fit test that determined the match between the expected and observed frequencies of individual homozygotes at 0–6 loci. The results indicated the presence of erroneous genotypes, possibly as a result of allelic dropout, in the samples. Therefore, for improved accuracy, it is recommended that samples containing ≥10 hairs should be used for genotyping and a post hoc goodness-of-fit test should be performed to exclude erroneous genotypes before proceeding with downstream analysis such as capture-mark-recapture estimation.
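A simplified version of such a post hoc goodness-of-fit check can be sketched as follows (this is not the authors' exact test): compare the observed distribution of the number of homozygous loci per individual with the distribution expected from an assumed per-locus homozygosity, treating loci as independent. All counts are invented for illustration, and scipy is assumed to be available.

from math import comb
from scipy.stats import chisquare

N_LOCI = 6
HOMOZYGOSITY = 0.35    # assumed average expected homozygosity per locus

# observed: number of identified individuals homozygous at exactly k of the 6 loci
observed = [20, 70, 95, 65, 28, 12, 5]            # k = 0 .. 6, hypothetical counts
n = sum(observed)

# expected counts under a Binomial(6, HOMOZYGOSITY) model of independent loci
expected = [n * comb(N_LOCI, k) * HOMOZYGOSITY**k * (1 - HOMOZYGOSITY)**(N_LOCI - k)
            for k in range(N_LOCI + 1)]

stat, p = chisquare(observed, expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("excess of multi-locus homozygotes: some genotypes may carry allelic dropout")
else:
    print("no evidence of excess homozygosity")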

7.
SNP arrays are widely used in genetic research and agricultural genomics applications, and the quality of SNP genotyping data is of paramount importance. In the present study, SNP genotyping concordance and discordance were evaluated for commercial bovine SNP arrays based on two types of quality assurance (QA) samples provided by Neogen GeneSeek. The genotyping discordance rates (GDRs) between chips were on average between 0.06% and 0.37% based on the QA type I data and between 0.05% and 0.15% based on the QA type II data. The average genotyping error rate (GER) pertaining to single SNP chips, based on the QA type II data, varied between 0.02% and 0.08% per SNP and between 0.01% and 0.06% per sample. These results indicate that the genotyping concordance rate was high (i.e. from 99.63% to 99.99%). Nevertheless, mitochondrial and Y chromosome SNPs had considerably elevated GDRs and GERs compared to the SNPs on the 29 autosomes and the X chromosome. The majority of genotyping errors resulted from single-allele typing errors, which include instances of allele 'dropout' (i.e. from AB to AA or BB). Simultaneous errors on both alleles (e.g. mistaking AA for BB or vice versa) were relatively rare. Finally, a list of SNPs with a GER greater than 1% is provided. Interpretation of association effects of these SNPs, for example in genome-wide association studies, should be treated with caution. The genotyping concordance information should be considered in the optimal design of future bovine SNP arrays.
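Concordance checks of this kind reduce to counting disagreements between two genotype calls of the same sample. A minimal sketch follows; the genotype coding and the tiny example are assumptions, not the QA data format used in the study.

def discordance_rate(calls_a, calls_b, missing="--"):
    """calls_a / calls_b: dicts of {snp_id: genotype call such as 'AA', 'AB', 'BB'}."""
    compared = discordant = 0
    for snp, g1 in calls_a.items():
        g2 = calls_b.get(snp, missing)
        if missing in (g1, g2):
            continue                  # skip no-calls instead of counting them as errors
        compared += 1
        discordant += g1 != g2
    return discordant / compared if compared else float("nan")

run1 = {"snp1": "AA", "snp2": "AB", "snp3": "BB", "snp4": "--", "snp5": "AB"}
run2 = {"snp1": "AA", "snp2": "AA", "snp3": "BB", "snp4": "AB", "snp5": "AB"}
print(f"genotyping discordance rate = {discordance_rate(run1, run2):.1%}")  # 1 of 4 comparable SNPs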

8.
Many studies in molecular ecology rely upon the genotyping of large numbers of low-quantity DNA extracts derived from noninvasively collected samples or museum specimens. To overcome low amplification success rates and avoid genotyping errors such as allelic dropout and false alleles, multiple polymerase chain reaction (PCR) replicates for each sample are typically used. Recently, two-step multiplex procedures have been introduced which drastically increase the success rate and efficiency of genotyping. However, controversy still exists concerning the amount of replication needed for suitable control of error. Here we describe the use of a two-step multiplex PCR procedure that allows rapid genotyping using at least 19 different microsatellite loci. We applied this approach to quantified amounts of noninvasive DNAs from western chimpanzee, western gorilla, mountain gorilla and black and white colobus faecal samples, as well as to DNA from ~100-year-old gorilla teeth from museums. Analysis of over 45 000 PCRs revealed average success rates of > 90% using faecal DNAs and 74% using museum specimen DNAs. Average allelic dropout rates were substantially reduced compared to those obtained using conventional singleplex PCR protocols, and reliable genotyping using low (< 25 pg) amounts of template DNA was possible. However, four to five replicates of apparently homozygous results are needed to avoid allelic dropout when using the lowest concentration DNAs (< 50 pg/reaction), suggesting that use of protocols allowing routine acceptance of homozygous genotypes after as few as three replicates may lead to unanticipated errors when applied to low-concentration DNAs.

9.
In the context of a study of wild chimpanzees, Pan troglodytes verus, we found that genotypes based on single PCR amplifications of microsatellite loci from single shed hairs have a high error rate. We quantified error rates using the comparable results of 791 single shed hair PCR amplifications of 11 microsatellite loci from 18 known individuals. The most frequent error was the amplification of only one of the two alleles present at a heterozygous locus. This phenomenon, called allelic dropout, produced false homozygotes in 31% of single-hair amplifications. There was no difference in the probability of preferential amplification between longer and shorter alleles. The probability of scoring false homozygotes can be reduced to below 0.05 by three separate amplifications from single hairs of the same individual or by pooling hair samples from the same individual. In this study an additional 5.6% of the amplifications gave wrong genotypes because of contamination, labelling and loading errors, and possibly amplification artefacts. In contrast, amplifications from plucked hair taken from four dead individuals gave consistent results (error rate < 0.01%, n = 120). Allelic dropout becomes a problem when the DNA concentration in the template falls below 0.05 ng/10 μL, as it can with shed hair and with extracts from faeces and masticated plant matter.
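The three-amplification rule quoted above can be checked with a line of arithmetic: if a single amplification of a heterozygote yields a false homozygote with probability about 0.31, and replicate amplifications are treated as independent, the residual risk after k concordant homozygous replicates is roughly 0.31^k. The snippet below simply finds the smallest k that pushes this below 0.05.

p, threshold = 0.31, 0.05
k = 1
while p ** k >= threshold:
    k += 1
print(f"replicates needed: {k}; residual risk = {p ** k:.3f}")
# prints: replicates needed: 3; residual risk = 0.030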

10.
Double-digested RADseq (ddRADseq) is an NGS methodology that generates reads from thousands of loci targeted by restriction enzyme cut sites, across multiple individuals. To be statistically sound and economically optimal, a ddRADseq experiment has a preliminary design stage that needs to consider issues related to the selection of enzymes, particular features of the genome of the focal species, possible modifications to the library construction protocol, the coverage needed to minimize missing data, and the potential sources of error that may impact upon the coverage. We present ddradseqtools, a software package to help with ddRADseq experimental design by (i) the generation of in silico double-digested fragments; (ii) the construction of modified ddRADseq libraries using adapters with either one or two indexes and degenerate base regions (DBRs) to quantify PCR duplicates; and (iii) the initial steps of the bioinformatics preprocessing of reads. ddradseqtools generates single-end (SE) or paired-end (PE) reads that may bear SNPs and/or indels. The effect of allele dropout and PCR duplicates on coverage is also simulated. The resulting output files can be submitted to alignment and variant-calling pipelines to allow the fine-tuning of parameters. The software was validated with specific tests of its correct operation. The correspondence between in silico settings and parameters from ddRADseq in vitro experiments was assessed to provide guidelines for the reliable performance of the software. ddradseqtools is cost-efficient in terms of execution time and can be run on computers with standard CPU and RAM configurations.
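The in silico double digest at the heart of ddRADseq design can be illustrated with a toy script (this is not the ddradseqtools implementation): cut a random sequence at two recognition sites and keep fragments flanked by different enzymes whose length falls inside an assumed size-selection window. The enzyme sites, window and random genome are placeholder assumptions, and cut positions are approximated by the start of each recognition site.

import re
import random

random.seed(3)
genome = "".join(random.choice("ACGT") for _ in range(200_000))

ENZ1 = "GAATTC"   # EcoRI recognition site
ENZ2 = "TTAA"     # MseI recognition site
MIN_LEN, MAX_LEN = 250, 450   # assumed size-selection window

# collect all cut positions, labelled by enzyme
cuts = sorted(
    [(m.start(), "E1") for m in re.finditer(ENZ1, genome)] +
    [(m.start(), "E2") for m in re.finditer(ENZ2, genome)]
)

kept = 0
for (pos_a, enz_a), (pos_b, enz_b) in zip(cuts, cuts[1:]):
    length = pos_b - pos_a
    if enz_a != enz_b and MIN_LEN <= length <= MAX_LEN:
        kept += 1          # a fragment flanked by the two different enzymes

print(f"{len(cuts)} cut sites, {kept} candidate ddRAD fragments in the window")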

11.
Summary. Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause a bias due to the problem of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
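A hedged sketch of a two-stage regression-calibration analysis in this spirit is given below (it is not the authors' exact estimator and ignores the informative-dropout correction they discuss): stage one fits a random-intercept/random-slope mixed model to simulated longitudinal data with statsmodels, and stage two uses each subject's estimated random effects as covariates in a lifelines Cox model. The data, effect sizes and package choices are all assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n_subjects, n_visits = 200, 5

# simulate subject-specific intercepts/slopes and noisy longitudinal biomarker values
intercepts = rng.normal(0.0, 1.0, n_subjects)
slopes = rng.normal(0.5, 0.3, n_subjects)
long_df = pd.DataFrame(
    [{"id": i, "time": t, "y": intercepts[i] + slopes[i] * t + rng.normal(0, 0.5)}
     for i in range(n_subjects) for t in range(n_visits)]
)

# event times depend on the true slope; subjects may be censored first
event_time = rng.exponential(1.0 / np.exp(0.8 * slopes))
censor_time = rng.uniform(0.5, 3.0, n_subjects)
surv_df = pd.DataFrame({
    "T": np.minimum(event_time, censor_time),
    "E": (event_time <= censor_time).astype(int),
})

# stage 1: mixed model fitted without regard to the time-to-event data
mm = smf.mixedlm("y ~ time", long_df, groups=long_df["id"], re_formula="~time").fit()
re = pd.DataFrame(mm.random_effects).T.sort_index()   # one row of random effects per subject
re.columns = ["re_intercept", "re_slope"]

# stage 2: Cox model with the estimated random effects as covariates
cox_df = pd.concat([surv_df, re.reset_index(drop=True)], axis=1)
CoxPHFitter().fit(cox_df, duration_col="T", event_col="E").print_summary()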

12.
Population size information is critical for managing endangered or harvested populations. Population size can now be estimated from non-invasive genetic sampling. However, pitfalls remain, such as genotyping errors (allelic dropout and false alleles at microsatellite loci). To evaluate the feasibility of non-invasive sampling (e.g., for population size estimation), a pilot study is required. Here, we present a pilot study consisting of (i) a genetic step to test locus amplification and to estimate allele frequencies and genotyping error rates when using faecal DNA, and (ii) a simulation step to quantify and minimise the effects of errors on estimates of population size. The pilot study was conducted on a population of red deer in a fenced natural area of 5440 ha in France. Twelve microsatellite loci were tested for amplification and genotyping errors. The genotyping error rates for the microsatellite loci were 0–0.83 (mean = 0.2) for allelic dropout and 0–0.14 (mean = 0.02) for false alleles, comparable to rates encountered in other non-invasive studies. Simulation results suggest we must conduct 6 PCR amplifications per sample (per locus) to achieve approximately 97% correct genotypes. The 3% error rate appears to have little influence on the accuracy and precision of population size estimation. This paper illustrates the importance of conducting a pilot study (including genotyping and simulations) when using non-invasive sampling to study threatened or managed populations.

13.
Historical and other poor-quality samples are often necessary for population genetics, conservation, and forensics studies. Although there is a long history of using mtDNA from such samples, obtaining and genotyping nuclear loci have been considered difficult and error-prone at best, and impossible at worst. The primary issues are the amount of nuclear DNA available for genotyping and the degradation of the DNA into small fragments. Single nucleotide polymorphisms offer potential advantages for assaying nuclear variation in historical and poor-quality samples, because the amplified fragments can be very small, varying little or not at all in size between alleles, and can be amplified efficiently by polymerase chain reaction (PCR). We present a method for highly multiplexed PCR of SNP loci, followed by dual-fluorescence genotyping, that is very effective for genotyping poor-quality samples and can potentially use very little template DNA, regardless of the number of loci to be genotyped. We genotyped 19 SNP loci from DNA extracted from modern and historical bowhead whale tissue, bone and baleen samples. The PCR failure rate was < 1.5%, and the genotyping error rate was 0.1% when DNA samples contained > 10 copies/µL of a 51-bp nuclear sequence. Samples with ≤ 10 copies/µL of DNA could still be genotyped confidently with appropriate levels of replication from independent multiplex PCRs.

14.
Biodiversity has suffered a dramatic global decline during the past decades, and monitoring tools that provide data for the development and evaluation of conservation efforts, at both the species and the genetic level, are urgently needed. However, in wild species, the assessment of genetic diversity is often hampered by the lack of suitable genetic markers. In this article, we present Random Amplicon Sequencing (RAMseq), a novel approach for fast and cost-effective detection of single nucleotide polymorphisms (SNPs) in nonmodel species by semideep sequencing of random amplicons. By applying RAMseq to the Eurasian otter (Lutra lutra), we identified 238 putative SNPs after quality filtering of all candidate loci and were able to validate 32 of 77 loci tested. In a second step, we evaluated the genotyping performance of these SNP loci in noninvasive samples, one of the most challenging genotyping applications, by comparing it with the genotyping results of the same faecal samples at microsatellite markers. We compared (i) polymerase chain reaction (PCR) success rate, (ii) genotyping errors and (iii) Mendelian inheritance (population parameters). SNPs produced a significantly higher PCR success rate (75.5% vs. 65.1%) and a lower mean allelic error rate (8.8% vs. 13.3%) than microsatellites, but showed a higher allelic dropout rate (29.7% vs. 19.8%). Genotyping results showed no deviations from Mendelian inheritance at any of the SNP loci. Hence, RAMseq appears to be a valuable tool for the detection of genetic markers in nonmodel species, which is a common challenge in conservation genetic studies.

15.
Remarkably little attention has been paid to using the high resolution provided by genotyping-by-sequencing (i.e., RADseq and similar methods) for assessing relatedness in wildlife populations. A major hurdle is the genotyping error, especially allelic dropout, often found in this type of data, which could lead to downward-biased, yet precise, estimates of relatedness. Here, we assess the applicability of genotyping-by-sequencing for relatedness inferences given its relatively high genotyping error rate. Individuals of known relatedness were simulated under genotyping error, allelic dropout and missing data scenarios based on an empirical ddRAD data set, and their true relatedness was compared to that estimated by seven relatedness estimators. We found that an estimator chosen through such analyses can circumvent the influence of genotyping error, with the estimator of Ritland (Genetics Research, 67, 175) shown to be unaffected by allelic dropout and to be the most accurate when there is genotyping error. We also found that the choice of estimator should not rely solely on the strength of correlation between estimated and true relatedness, as a strong correlation does not necessarily mean estimates are close to true relatedness. We also demonstrated how even a large SNP data set with genotyping error (allelic dropout or otherwise) or missing data still performs better than a perfectly genotyped microsatellite data set of tens of markers. The simulation-based approach used here can be easily implemented by others on their own genotyping-by-sequencing data sets to confirm the most appropriate and powerful estimator for their data.
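The warning above, that a strong correlation between estimated and true relatedness does not guarantee accuracy, is easy to demonstrate numerically. In the assumed example below, an estimator biased downward (as allelic dropout tends to make estimators) still correlates almost perfectly with the truth while remaining far from it.

import numpy as np

rng = np.random.default_rng(42)
true_r = rng.choice([0.0, 0.25, 0.5], size=500)        # unrelated, half sibs, full sibs
noise = rng.normal(0, 0.03, size=500)
biased_estimate = 0.6 * true_r + noise                  # systematic underestimation

corr = np.corrcoef(true_r, biased_estimate)[0, 1]
rmse = np.sqrt(np.mean((biased_estimate - true_r) ** 2))
print(f"correlation = {corr:.2f}, but RMSE from the true values = {rmse:.2f}")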

16.
Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. Their application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentification, leading both to 'true non-recaptures' being falsely accepted as recaptures (Type I errors) and to 'true recaptures' going undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I errors and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method tolerates error rates of up to 5% per locus and can theoretically work in datasets with as few as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals of known capture histories that is suitable for downstream analysis with traditional mark-recapture methods.
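A toy per-pair score in the spirit of this likelihood approach (not the authors' algorithm) is sketched below: for each locus typed in both samples it weighs "same individual allowing a genotyping error" against "different individuals drawn from the population", while missing loci are treated as uninformative. The error rate, genotype frequencies and example genotypes are all assumptions.

import math

ERROR = 0.05  # assumed per-locus probability that a repeat genotype mismatches

def locus_lr(g1, g2, genotype_freq):
    """Likelihood ratio (same individual vs. different individuals) for one locus."""
    if g1 is None or g2 is None:          # missing data: the locus is uninformative
        return 1.0
    p_same = (1 - ERROR) if g1 == g2 else ERROR
    p_diff = genotype_freq[g2]            # chance an unrelated animal shows g2
    return p_same / p_diff

def match_log_lr(sample1, sample2, genotype_freq):
    return sum(math.log(locus_lr(a, b, genotype_freq)) for a, b in zip(sample1, sample2))

# hypothetical genotype frequencies (the same table is reused at every locus)
freqs = {"AA": 0.25, "AB": 0.50, "BB": 0.25}
s1 = ["AB", "AA", None]        # None marks a locus that failed to amplify
s2 = ["AB", "AA", "BB"]
s3 = ["BB", "AB", "AB"]

print("s1 vs s2 log-LR:", round(match_log_lr(s1, s2, freqs), 2))  # positive: likely a recapture
print("s1 vs s3 log-LR:", round(match_log_lr(s1, s3, freqs), 2))  # negative: likely two animals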

17.
Recent advances in high-throughput sequencing library preparation and subgenomic enrichment methods have opened new avenues for population genetics and phylogenetics of nonmodel organisms. To multiplex large numbers of indexed samples while sequencing predominantly orthologous, targeted regions of the genome, we propose modifications to an existing, in-solution capture that utilizes PCR products as target probes to enrich library pools for the genomic subset of interest. The sequence capture using PCR-generated probes (SCPP) protocol requires no specialized equipment, is highly flexible and significantly reduces experimental costs for projects where a modest scale of genetic data is optimal (25–100 genomic loci). Our alterations enable application of this method across a wider phylogenetic range of taxa and result in higher capture efficiencies and coverage at each locus. Efficient and consistent capture over multiple SCPP experiments and at various phylogenetic distances is demonstrated, extending the utility of this method to both phylogeographic and phylogenomic studies.

18.
Allelic dropouts are an important source of genotyping error, particularly in studies using non-invasive sampling techniques. This has important implications for conservation biology, as an increasing number of studies are now using non-invasive techniques to study rare species or endangered populations. Previously, allelic dropout has typically been associated with PCR amplification of low quality/quantity template DNA. However, in this study we recorded high levels of allelic dropout (21–57%) at specific loci amplified from a high-quality DNA source (63.1 ± 7.8 ng/μl) in the red fox (Vulpes vulpes). We designed a series of experiments to identify the sources of error. Whilst we were able to show that the best method to identify allelic dropout was the dilution of template DNA prior to PCR amplification, our data also showed two specific patterns: (1) allelic dropouts occurred at specific loci; (2) allelic dropouts occurred at specific pair-wise combinations of alleles. These patterns suggest that mechanisms other than low quantity of template DNA are responsible for allelic dropout. Further research on the causes of these patterns, in this and other studies, would further our understanding of genotyping errors and would aid future studies where allelic dropout may be a serious issue.

19.
N. Izadi‐Mood, S. Sarmadi and S. Sanii
Quality control in cervicovaginal cytology by cytohistological correlation
Objective: Frequent studies attest to the correlation of cytological interpretations with defined histopathological entities. Nevertheless, as part of quality control, cytology laboratories are required to compare Papanicolaou smear reports with those of cervical biopsies to search for discrepancies. We have attempted to determine and categorize the causes of existing discrepancies in our laboratory in order to clarify the source of errors. Methods: We reviewed 670 cervical smears that were paired with subsequent punch biopsy or endocervical curettage samples, obtained within 2 months of the cytology, and found that 60 smear-biopsy pairs were discrepant regarding the diagnosis. These cases were categorized into four error groups after careful re-evaluation of the original smear and biopsy slides. Results: In 51 (85%) of 60 cervical smear-biopsy pairs with reports that disagreed, the initial diagnoses of both cervical smear and biopsy were confirmed by the review opinion; in these cases, cytology and biopsy 'sampling errors' were responsible for 40 and 11 instances of discrepancy, respectively. Seven cases (11.1%) were discrepant due to 'smear interpretation errors' and consisted of five cases with initial under-diagnosis and two cases with initial over-diagnosis. One case (1.7%) was due to 'screener error'. In another case, discordance was due to cervical 'biopsy interpretation error', with initial over-diagnosis as squamous intraepithelial lesion. Conclusion: In this retrospective study, we determined the causes of cytohistological discrepancies in cervical samples. The main explanation for discrepancy was 'sampling error'.

20.
Farmland birds are in steep decline, and agri-environment schemes (AES) to counteract these biodiversity losses are expensive and inefficient. Here we test a novel AES, 'Birdfields', designed using detailed ecological knowledge of the target species, Montagu's Harrier Circus pygargus. Current AES, such as field margins, that aim to improve foraging conditions (i.e. vole densities) for harriers are inefficient, as prey are difficult to capture in tall set-aside habitat. 'Birdfields' combines strips of set-aside to boost vole numbers with strips of alfalfa, as voles are accessible after the alfalfa has been harvested. We found that vole numbers were generally highest in set-aside. Montagu's Harriers fitted with GPS-loggers used 'Birdfields' intensively after mowing, preferring mown to unmown strips. Thus, prey availability appeared more important than prey abundance. As a targeted AES for Montagu's Harriers, 'Birdfields' is therefore more effective than previous AES owing to increased prey accessibility. An additional advantage of 'Birdfields' is that it is considerably cheaper, due to the harvest of alfalfa. We advocate that AES should always include monitoring and research activities, aiming at a more adaptive conservation approach.
