Similar Documents
20 similar documents found.
1.
Two Aeromonas salmonicida-specific polymerase chain reaction (PCR) tests and one A. salmonicida subsp. salmonicida-specific PCR test were used to screen salmonid populations that were either overtly or covertly infected with A. salmonicida subsp. salmonicida. It was demonstrated that these PCR assays could be used to replace the biochemical testing currently employed to confirm the identity of A. salmonicida isolates cultured from infected fish. The AP and PAAS PCR assays were also capable of direct detection of A. salmonicida in overtly infected fish, with mucus, gill and kidney samples most likely to yield a positive result. Culture was a more reliable method for the direct detection of A. salmonicida in covertly infected salmonids than was the direct PCR testing of tissue samples, with the AP and PAAS PCRs having a lower detection limit (LDL) of approximately 4 × 10^5 colony-forming units (CFU) g^-1 of sample.

2.
Information on statistical power is critical when planning investigations and evaluating empirical data, but actual power estimates are rarely presented in population genetic studies. We used computer simulations to assess and evaluate power when testing for genetic differentiation at multiple loci through combining test statistics or P values obtained by four different statistical approaches, viz. Pearson's chi-square, the log-likelihood ratio G-test, Fisher's exact test, and an F_ST-based permutation test. Factors considered in the comparisons include the number of samples, their size, and the number and type of genetic marker loci. It is shown that power for detecting divergence may be substantial for frequently used sample sizes and sets of markers, also at quite low levels of differentiation. The choice of statistical method may be critical, though. For multi-allelic loci such as microsatellites, combining exact P values using Fisher's method is robust and generally provides a high resolving power. In contrast, for few-allele loci (e.g. allozymes and single nucleotide polymorphisms) and when making pairwise sample comparisons, this approach may yield a remarkably low power. In such situations chi-square typically represents a better alternative. The G-test without Williams's correction frequently tends to provide an unduly high proportion of false significances, and results from this test should be interpreted with great care. Our results are not confined to population genetic analyses but applicable to contingency testing in general.
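
As a minimal sketch of the combination step discussed above, the snippet below pools per-locus P values with Fisher's method, both by hand and via SciPy's built-in combiner; the P values are hypothetical, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical per-locus P values from separate tests of differentiation.
p_values = np.array([0.04, 0.20, 0.008, 0.55, 0.11])

# Fisher's method: -2 * sum(ln p) ~ chi-square with 2k df under the joint null.
fisher_stat = -2.0 * np.sum(np.log(p_values))
fisher_p = stats.chi2.sf(fisher_stat, df=2 * len(p_values))

# Equivalent call via SciPy's built-in combiner.
stat, p = stats.combine_pvalues(p_values, method="fisher")

print(f"manual: X2={fisher_stat:.3f}, p={fisher_p:.4f}")
print(f"scipy:  X2={stat:.3f}, p={p:.4f}")
```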

3.
The comparison of parasite numbers or intensities between different samples of hosts is a common and important question in most parasitological studies. The main question is whether the values in one sample tend to be higher (or lower) than the values of the other sample. We argue that it is more appropriate to test a null hypothesis about the probability that an individual host from one sample has a higher value than individual hosts from a second sample rather than testing hypotheses about means or medians. We present a recently proposed statistical test especially designed to test hypotheses about that probability. This novel test is more appropriate than other statistical tests, such as Student's t-test, the Mann-Whitney U-test, or a bootstrap test based on Welch's t-statistic, regularly used by parasitologists.
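
The abstract does not name its test here. As one concrete illustration of testing a hypothesis about the probability that a host from one sample has the higher value, the sketch below uses the Brunner-Munzel test from SciPy, a deliberate stand-in rather than necessarily the authors' procedure, on hypothetical parasite counts.

```python
import numpy as np
from scipy import stats

# Hypothetical parasite intensities in two host samples.
hosts_a = np.array([3, 0, 7, 12, 5, 9, 1, 4])
hosts_b = np.array([1, 0, 2, 6, 3, 0, 2])

# Brunner-Munzel tests H0: P(X < Y) + 0.5*P(X = Y) = 0.5, i.e. that a host
# from one sample is equally likely to carry the higher burden.
res = stats.brunnermunzel(hosts_a, hosts_b)
print(f"statistic={res.statistic:.3f}, p={res.pvalue:.4f}")
```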

4.
When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
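
For orientation, the sketch below implements only the standard permutation baseline that the abstract critiques: the maximum |t|-type statistic is recomputed under permutations of the sample units. The proposed conditioning on histogram spread is not reproduced, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: m features measured on two groups of sample units.
m, n_per_group = 200, 10
data = rng.normal(size=(m, 2 * n_per_group))
labels = np.array([True] * n_per_group + [False] * n_per_group)

def max_abs_t(data, labels):
    """Largest |t|-like statistic across all m features for a labeling."""
    x, y = data[:, labels], data[:, ~labels]
    se = np.sqrt(x.var(axis=1, ddof=1) / x.shape[1]
                 + y.var(axis=1, ddof=1) / y.shape[1])
    return np.max(np.abs((x.mean(axis=1) - y.mean(axis=1)) / se))

observed = max_abs_t(data, labels)

# Permute sample units (columns) to simulate the null distribution of the max.
null_max = np.array([max_abs_t(data, rng.permutation(labels))
                     for _ in range(2000)])
print(f"observed max |t| = {observed:.3f}, "
      f"permutation p = {np.mean(null_max >= observed):.4f}")
```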

5.
Microarray technology is rapidly emerging for genome-wide screening of differentially expressed genes between clinical subtypes or different conditions of human diseases. Traditional statistical testing approaches, such as the two-sample t-test or Wilcoxon test, are frequently used for evaluating statistical significance of informative expressions but require adjustment for large-scale multiplicity. Due to its simplicity, Bonferroni adjustment has been widely used to circumvent this problem. It is well known, however, that the standard Bonferroni test is often very conservative. In the present paper, we compare three multiple testing procedures in the microarray context: the original Bonferroni method, a Bonferroni-type improved single-step method and a step-down method. The latter two methods are based on nonparametric resampling, by which the null distribution can be derived with the dependency structure among gene expressions preserved and the family-wise error rate accurately controlled at the desired level. We also present a sample size calculation method for designing microarray studies. Through simulations and data analyses, we find that the proposed methods for testing and sample size calculation are computationally fast and control error and power precisely.
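
As a rough illustration of single-step versus step-down adjustment, the sketch below contrasts Bonferroni with Holm's step-down method on hypothetical P values. Note that the paper's step-down procedure is resampling-based and preserves the dependence structure among genes, which Holm does not attempt.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Hypothetical per-gene P values: 5 strong signals among 1000 tests.
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(0, 1e-5, 5), rng.uniform(0, 1, 995)])

for method in ("bonferroni", "holm"):  # single-step vs step-down
    reject, p_adj, _, _ = multipletests(p, alpha=0.05, method=method)
    print(f"{method:11s}: {reject.sum()} genes rejected")
```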

6.
A simple cryogenic holder for tensile testing of soft biological tissues
To overcome the difficulty of gripping soft biological materials for tensile testing, a simple, inexpensive cryogenic holder was developed that allows rapid (3 min) preparation of samples. It is made of six parts built from bakelite cloth, an excellent thermal insulator, and is used with rectangular samples (8 × 10^-2 m × 10^-2 m × 10^-2 m). The holder with the sample inside is completely immersed in liquid nitrogen for 50 s. This duration freezes the sample ends over a 10^-2 m length and gives a very flat freezing surface throughout the sample cross section. The 6 × 10^-2 m central part of the sample remains at ambient temperature. Two parts of the holder help maintain the sample until its ends are vertically gripped in the tensile machine, thus avoiding any sample deformation during this step. No pressure is applied on the frozen part of the sample by the grips of the tensile machine, which avoids breaks in this region. The sample is fixed by adhesion forces (>1 kN) between its frozen parts and two pieces of the holder. The procedure has been successfully tested with bovine and salmon muscle samples, and results show tensile breaks randomly distributed in the unfrozen region of the samples. Particular attention was paid to obtaining a very flat freezing surface so that the axial strain is equal throughout the sample and therefore any strain-related mechanical parameters can be accurately determined. The dimensions of the holder can easily be modified to fit other sample geometries, and the holder can be used with other biological materials.

7.
Ryman N, Jorde PE. Molecular Ecology 2001, 10(10): 2361-2373.
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g. the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test if the heterogeneity observed at any particular locus can be regarded significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach generally is associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing. Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.
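
A minimal sketch of the chi-square summation strategy described above, on hypothetical per-locus statistics: the summed statistic is referred to a chi-square distribution with the summed degrees of freedom.

```python
import numpy as np
from scipy import stats

# Hypothetical per-locus contingency chi-square statistics and their df.
chi2_stats = np.array([5.1, 2.3, 8.7, 1.2])
dfs = np.array([2, 1, 3, 1])

# Summation approach: the total is chi-square with summed df under the joint null.
total, total_df = chi2_stats.sum(), dfs.sum()
p_joint = stats.chi2.sf(total, df=total_df)
print(f"summed X2 = {total:.1f} on {total_df} df, joint p = {p_joint:.4f}")
```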

8.
A method of analysis for comparing the variability of two samples drawn from two populations has been developed. The method is also suitable for nonnumeric data. A test based on ordered observations is given for testing the null hypothesis of equality of two variances. The test statistic is a function of the sum of ranks assigned to the smaller sample. The ranking procedure has been modified so that the sum of ranks reflects the variability in the data. The null distribution of the test statistic has been worked out for small samples and converges to a chi-square distribution for large samples. The analytical procedure is illustrated with a numerical example on the productivity and production of rice and wheat in India from 1950–51 to 1983–84.
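
The paper's specific ranking scheme is not reproduced here. As a related, well-known rank-based test for equality of dispersion, the sketch below applies the Ansari-Bradley test from SciPy to hypothetical yield data; this is an analogy, not the authors' procedure.

```python
from scipy import stats

# Hypothetical yield figures for two crops (nonnumeric data would first be ranked).
rice = [31.2, 28.4, 35.0, 30.1, 33.7, 29.8]
wheat = [22.5, 41.3, 18.9, 44.0, 25.2, 39.6]

# Ansari-Bradley: a classical rank test for equality of scale (dispersion),
# in the same spirit as the rank-sum variability test described above.
stat, p = stats.ansari(rice, wheat)
print(f"AB statistic = {stat}, p = {p:.4f}")
```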

9.
Group testing, also known as pooled sample testing, was first proposed by Robert Dorfman in 1943. While sample pooling has been widely practiced in blood-banking, it is traditionally seen as anathema for clinical laboratories. However, the ongoing COVID-19 pandemic has re-ignited interest in group testing among clinical laboratories to mitigate supply shortages. We propose five criteria to assess the suitability of an analyte for pooled sample testing in general and outline a practical approach that a clinical laboratory may use to implement pooled testing for SARS-CoV-2 PCR testing. The five criteria we propose are: (1) the analyte concentrations in diseased persons should be at least one order of magnitude (10 times) higher than in healthy persons; (2) sample dilution should not overly reduce clinical sensitivity; (3) the current prevalence must be sufficiently low for the number of samples pooled in the specific protocol; (4) there is no requirement for a fast turnaround time; and (5) there is an imperative need for resource rationing to maximise public health outcomes. The five key steps we suggest for a successful implementation are: (1) determination of when pooling takes place (pre-pre-analytical, pre-analytical, analytical); (2) validation of the pooling protocol; (3) ensuring an adequate infrastructure and archival system; (4) configuration of the laboratory information system; and (5) staff training. While pooled testing is not a panacea for reagent shortage, it may allow broader access to testing, at the cost of reduced sensitivity and increased turnaround time.
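
A small sketch of the arithmetic behind criterion (3): under Dorfman two-stage pooling, the expected number of tests per specimen is 1/n + 1 - (1 - p)^n for prevalence p and pool size n (dilution and sensitivity effects ignored).

```python
def expected_tests_per_specimen(prevalence: float, pool_size: int) -> float:
    """Dorfman two-stage pooling: one pooled test per n specimens, plus
    individual retests whenever the pool is positive (prob. 1 - (1-p)^n)."""
    p, n = prevalence, pool_size
    return 1.0 / n + (1.0 - (1.0 - p) ** n)

# At low prevalence, pooling cuts the number of tests substantially.
for n in (2, 5, 10, 20):
    e = expected_tests_per_specimen(prevalence=0.01, pool_size=n)
    print(f"pool size {n:2d}: {e:.3f} tests per specimen ({1 - e:.0%} saved)")
```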

10.
The binomial approximation of the UMPU (uniformly most powerful unbiased) test for the equality of 2 binomial proportions is shown to be a highly accurate and easily applied method for testing the hypothesis that a given mouse specific-locus mutation frequency is not higher than the spontaneous mutation frequency (43 mutations in 801,406 offspring, for males). Critical sample sizes have been calculated that show at a glance whether P < 0.05.

The first hypothesis, that the mutation frequency (induced + spontaneous) of treated mice is not higher than the spontaneous mutation frequency, is combined with the second hypothesis, that the induced mutation frequency of treated mice is no less than 4 times the historical-control mutation frequency, to produce a multiple decision procedure with 4 possible decisions: inconclusive result, negative result, positive result, and weak mutagen. Critical sample sizes for the second hypothesis, also with P < 0.05, are combined with those for the first hypothesis into a grid that permits rapid evaluation of data according to these criteria. The justification for using these criteria in reaching decisions, assuming a high level of exposure has been given, is the practical necessity of rapidly determining which chemicals are potent mutagens.

Positive results can become apparent in relatively small samples. Larger samples, of at least 11,166 offspring, are required to obtain a negative result. If samples of 18,000 are routinely collected (unless positive results are found earlier), 75% of tests of chemicals that are non-mutagens will give a negative result. If the question being asked is not whether a chemical induces gene mutations but, rather, whether the exposure received by humans causes any important risk from gene mutations, a much smaller sample size may be acceptable, under certain conditions.

A comparison of the relative efficiencies of the specific-locus test (for gene mutations and small deficiencies) and the heritable-translocation test (for transmissible chromosome rearrangements), in detecting the same proportional increases over the spontaneous frequencies of their respective types of genetic damage, shows that less work is involved in reaching a conclusive result in the specific-locus test. Proposed specific-locus tests using biochemical markers are at a considerable statistical disadvantage compared with the standard test (using 7 visible markers), for which a very large historical control showing a very low mutation rate is available.
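
As a simplified, hedged version of the first hypothesis test, the sketch below treats the historical spontaneous frequency (43/801,406) as a fixed binomial proportion and applies a one-sided exact binomial test. The paper's UMPU test compares two binomial samples, so this is only an approximation, and the treated-group counts are invented.

```python
from scipy.stats import binomtest

# Historical control: 43 spontaneous mutations in 801,406 offspring (males).
p_spont = 43 / 801_406

# Hypothetical treated group: 9 mutations among 18,000 offspring.
result = binomtest(k=9, n=18_000, p=p_spont, alternative="greater")
print(f"one-sided p = {result.pvalue:.5f}")  # small p -> frequency elevated
```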

11.
AIM: To develop and validate high throughput methods for the direct enumeration of viable and culturable Salmonella and Escherichia coli O157:H7 in ground beef, carcass, hide and faecal (GCHF) samples from cattle. METHODS AND RESULTS: The hydrophobic grid membrane filtration (HGMF) method and the spiral plate count method (SPCM) were evaluated as rapid tools for the estimation of pathogen load using GCHF samples spiked with known levels of Salmonella serotype Typhimurium. Validation studies showed that for a single determination of each sample type the low end of the detection limits was approx. 2.0 × 10^0 CFU g^-1 for ground beef, 5.0 × 10^-1 CFU (100 cm^2)^-1 for Salmonella and 8.0 × 10^-1 CFU (100 cm^2)^-1 for E. coli O157:H7 on carcasses, 4.0 × 10^1 CFU (100 cm^2)^-1 for hide and 2.0 × 10^2 CFU g^-1 for faecal samples. In addition, ground beef (n = 609), carcass (n = 1520) and hide (n = 3038) samples were collected from beef-processing plants and faecal samples (n = 3190) were collected from feed-lot cattle, and these samples were tested for the presence of Salmonella and E. coli O157:H7 by enrichment and enumeration methods. CONCLUSIONS: The direct enumeration methods described here are amenable to high throughput sample processing and were found to be cost-effective alternatives to other enumeration methods for the estimation of Salmonella and E. coli O157:H7 in samples collected during cattle production and beef processing. SIGNIFICANCE AND IMPACT OF THE STUDY: Use of the methods described here would allow for more routine testing and quantification data collection, providing useful information about the effectiveness of beef processing intervention strategies.

12.
Ethyl glucuronide (EtG) has been shown to be a suitable marker of excessive alcohol consumption. Determination of EtG in hair samples may help to differentiate social drinkers from alcoholics, and this testing can be widely used in forensic science, treatment programs, workplaces and military bases, as well as in driving-ability testing, to provide legal proof of drinking. A method for determination of EtG in hair samples using large volume injection-gas chromatography-tandem mass spectrometry (LVI-GC/MS/MS) was developed and validated. Hair samples (in 1 mL deionized water) were ultrasonicated for 1 h and incubated overnight; these samples were then deproteinated to remove impurities and derivatized with 15 μL of pyridine and 30 μL of BSTFA. EtG was detected using GC/MS/MS in multiple-reaction monitoring mode. The method exhibited good linearity: y = 0.0036x + 0.0437, R^2 = 0.9993; the limit of detection and the limit of quantification were 5 pg/mg and 10 pg/mg, respectively. The extraction recoveries were more than 60%, and the inter-day and intra-day relative standard deviations (RSD) were less than 15%. The method has been applied to the analysis of EtG in hair samples from 21 Chinese subjects. The results for all samples obtained from teetotallers were negative, and the results for the other 15 samples ranged from 10 to 78 pg/mg, except for one negative sample. These data provide a basis for the interpretation of alcohol abuse.
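
A trivial sketch of how a concentration would be read off the reported calibration line y = 0.0036x + 0.0437; the response ratios fed in are hypothetical.

```python
# Calibration line from the validation: y = 0.0036*x + 0.0437 (x in pg/mg).
slope, intercept = 0.0036, 0.0437

def etg_concentration(peak_ratio: float) -> float:
    """Invert the calibration line to estimate EtG in pg/mg hair."""
    return (peak_ratio - intercept) / slope

# Hypothetical measured response ratios for three hair samples.
for y in (0.08, 0.15, 0.32):
    print(f"response {y:.2f} -> {etg_concentration(y):6.1f} pg/mg")
```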

13.
The currently dominating hypothetico-deductive research paradigm for ecology has statistical hypothesis testing as a basic element. Classic statistical hypothesis testing does, however, present the ecologist with two fundamental dilemmas when field data are to be analyzed: (1) that the statistically motivated demand for a random and representative sample and the ecologically motivated demand for representation of variation in the study area cannot be fully met at the same time; and (2) that the statistically motivated demand for independence of errors calls for sampling distances that exceed the scales of relevant pattern-generating processes, so that samples with statistically desirable properties will be ecologically irrelevant. Reasons for these dilemmas are explained by consideration of the classic statistical Neyman-Pearson test procedure, properties of ecological variables, properties of sampling designs, interactions between properties of the ecological variables and properties of sampling designs, and specific assumptions of the statistical methods. Analytic solutions to problems underlying the dilemmas are briefly reviewed. I conclude that several important research objectives cannot be approached without subjective elements in sampling designs. I argue that a research strategy entirely based on rigorous statistical testing of hypotheses is insufficient for field ecological data and that inductive and deductive approaches are complementary in the process of building ecological knowledge. I recommend that great care be taken when statistical tests are applied to ecological field data. Use of less formal modelling approaches is recommended for cases when formal testing is not strictly needed. Sets of recommendations, “Guidelines for wise use of statistical tools”, are proposed both for testing and for modelling. Important elements of wise-use guidelines are parallel use of methods that preferably belong to different methodologies, selection of methods with few and less rigorous assumptions, conservative interpretation of results, and abandonment of definitive decisions based on a predefined significance level.

14.
A novel approach for the statistical analysis of comet assay data (i.e., tail moment) is proposed, employing public-domain statistical software, the R system. The analytical strategy takes into account that the distribution of comet assay data, such as the tail moment, is usually skewed and does not follow a normal distribution. Probability distributions used to model comet assay data included the Weibull, exponential, logistic, normal, log-normal and log-logistic distributions. The approach also treats the heterogeneity observed among experimental units as a random feature of the comet assay data. The statistical model is characterized by a location parameter m_ij, a scale parameter r and a between-experimental-unit variability parameter theta. On the logarithmic scale, the parameter m_ij depends additively on treatment and random effects, as follows: log(m_ij) = a_0 + a_1 x_ij + b_i, where exp(a_0) represents approximately the mean value of the control group, exp(a_1) can be interpreted as the relative risk of damage with respect to the control group, x_ij is an indicator of experimental group, and exp(b_i) is an individual risk effect assumed to follow a Gamma distribution with mean 1 and variance theta. Model selection is based on Akaike's information criterion (AIC). Real data from comet analysis of blood samples taken from the flounder Paralichtys orbignyanus (Teleostei: Paralichtyidae) and from cell suspensions obtained from the estuarine polychaete Laeonereis acuta (Nereididae) were employed. This statistical approach shows that comet assay data should be analyzed within a modelling framework that takes the important features of these measurements into account. Model selection and heterogeneity between experimental units are central to the analysis of these data.
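
The sketch below reproduces only the AIC-based model-selection step among candidate distributions, on simulated tail-moment data; the paper's random between-unit effect b_i (the Gamma term) requires a mixed-model fit that is omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
tail_moments = rng.lognormal(mean=1.0, sigma=0.8, size=120)  # hypothetical comet data

# Fit several candidate distributions by maximum likelihood and rank by AIC.
candidates = {"lognormal": stats.lognorm, "weibull": stats.weibull_min,
              "exponential": stats.expon, "normal": stats.norm}
for name, dist in candidates.items():
    params = dist.fit(tail_moments)
    loglik = np.sum(dist.logpdf(tail_moments, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:12s} AIC = {aic:8.1f}")  # lowest AIC is preferred
```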

15.
The assessment of overall homogeneity of time‐to‐event curves is a key element in survival analysis. The currently commonly used methods, e.g., log‐rank and Wilcoxon tests, may have a significant loss of statistical testing power under certain circumstances. In this paper a new statistical testing approach is developed to compare the overall homogeneity of survival curves. The proposed new method has greater power than the commonly used tests to detect overall differences between crossing survival curves. The small‐sample performance of the new test is investigated under a variety of situations by means of Monte Carlo simulations. Furthermore, the applicability of the proposed testing approach is illustrated by a real data example from a kidney dialysis trial.
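
For context, the sketch below runs the standard log-rank comparison that the paper takes as its baseline, using the third-party lifelines package on simulated dialysis-like data; the paper's new statistic for crossing curves is not reproduced.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)

# Hypothetical data: survival times (months) and event indicators (1 = event).
t_a, e_a = rng.exponential(24, 40), rng.integers(0, 2, 40)
t_b, e_b = rng.exponential(30, 40), rng.integers(0, 2, 40)

# Standard log-rank test of overall homogeneity of the two survival curves.
res = logrank_test(t_a, t_b, event_observed_A=e_a, event_observed_B=e_b)
print(f"log-rank X2 = {res.test_statistic:.3f}, p = {res.p_value:.4f}")
```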

16.

Background  

Liquid chromatography coupled to mass spectrometry (LC-MS) has become a prominent tool for the analysis of complex proteomics and metabolomics samples. In many applications multiple LC-MS measurements need to be compared, e.g. to improve reliability or to combine results from different samples in a statistical comparative analysis. As in all physical experiments, LC-MS data are affected by uncertainties, and variability of retention time is encountered in all data sets. It is therefore necessary to estimate and correct the underlying distortions of the retention time axis to search for corresponding compounds in different samples. To this end, a variety of so-called LC-MS map alignment algorithms have been developed during the last four years. Most of these approaches are well documented, but they are usually evaluated on very specific samples only. So far, no publication has assessed different alignment algorithms using a standard LC-MS sample along with commonly used quality criteria.

17.
The Salmonella assay has been in use for almost 15 years and can be regarded as a routine test for mutagenicity and for predicting potential carcinogenicity. It detects the majority of animal carcinogens and consequently plays an important role in safety assessment. The test is also routinely used as the frontline screen for environmental samples (complex mixtures) isolated from air, water and food. This role will continue to be an area of growth because sample volumes in these testing areas are generally very limited and more extensive testing is often impossible. While this test, like all others, has some limitations, it is recommended that it be regularly included in all genetic testing batteries.

18.
The introduction of routine testing to detect viral genomes in donated blood was originally driven by requirements for plasma fractionation in relation to exclusion of hepatitis C virus (HCV) RNA. Nevertheless, it was obvious from the outset that a dual standard for fractionated products and individual blood components would be untenable. In many countries, therefore, planning for the introduction of nucleic acid testing (NAT) of blood incorporated progression to release of HCV RNA-tested components. HCV was singled out because of its long seronegative 'window period', relatively high prevalence and incidence in blood donors, rapid burst time and high genome copy number during seroconversion. The latter properties made HCV particularly suitable for detection in pools of samples. If HCV RNA testing is required for release of labile components such as platelets, rapid provision of NAT results is vital because of the short shelf life of platelets and the delays incurred when resolving the infectious unit in a reactive pool. For NAT release of labile components, smaller sample pool sizes allow faster resolution of RNA-positive units. Smaller pools involve higher test throughput, the likely need for more testing laboratories and ensuing increased costs. Single-sample testing is the ultimate extrapolation of reducing sample pool size. With reduced pool sizes or single-sample testing, the option of testing for other viruses (e.g. HIV or HBV), singly or in multiplex, also arises. The cost-benefit and incremental yield of such strategies in the light of 'combo' assays for HIV Ag/Ab and the recently described HCV Ag assay will require careful and objective assessment, together with re-appraisal of anti-HBc screening for detection of HBV-infected donors at the "tail-end" of carriage.

19.
A method utilizing NMR spectroscopy has been developed to confirm the identity of bacterial polysaccharides used to formulate a polyvalent pneumococcal polysaccharide vaccine. The method is based on 600 MHz proton NMR spectra of individual serotype-specific polysaccharides. A portion of the anomeric region of each spectrum (5.89 to 4.64 ppm) is compared to spectra generated for designated reference samples for each polysaccharide of interest. The selected region offers a spectral window that is unique to a given polysaccharide and is sensitive to any structural alteration of the repeating units. The similarity of any two spectral profiles is evaluated using a correlation coefficient (rho), where rho ≥ 0.95 between a sample and reference profile indicates a positive identification of the sample polysaccharide. This method has been shown to be extremely selective in its ability to discriminate between serotype-specific polysaccharides, some of which differ by no more than a single glycosidic linkage. Furthermore, the method is rapid and does not require extensive sample manipulations or pretreatments. The method was validated as a qualitative identity assay and will be incorporated into routine quality control testing of polysaccharide powders to be used in preparation of the polyvalent pneumococcal vaccine PNEUMOVAX 23. The specificity and reproducibility of the NMR-based identity assay is superior to the currently used colorimetric assays and can be readily adapted for use with other bacterial polysaccharide preparations as well.
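
A minimal sketch of the identity criterion described above: compute Pearson's correlation between digitized sample and reference profiles on a common chemical-shift grid and compare against the 0.95 threshold. The spectra here are synthetic placeholders, not real NMR data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical digitized anomeric-region profiles (5.89-4.64 ppm) on a
# common chemical-shift grid: a reference spectrum and a sample spectrum.
reference = np.abs(rng.normal(size=500)).cumsum()
sample = reference + rng.normal(scale=0.5, size=500)  # same serotype + noise

# Identity criterion from the abstract: Pearson rho >= 0.95 vs the reference.
rho = np.corrcoef(sample, reference)[0, 1]
print(f"rho = {rho:.4f} -> {'identity confirmed' if rho >= 0.95 else 'fails'}")
```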

20.
The chi-squared test has been a popular approach to the analysis of a 2 × 2 table when the sample sizes for the four cells are large. When the large-sample assumption does not hold, however, we need an exact testing method such as Fisher's test. When the study population is heterogeneous, we often partition the subjects into multiple strata, so that each stratum consists of homogeneous subjects and hence the stratified analysis has improved testing power. While the Mantel–Haenszel test has been widely used as an extension of the chi-squared test to stratified 2 × 2 tables with a large-sample approximation, an extension of Fisher's test for stratified exact testing has been lacking. In this paper, we discuss an exact testing method for stratified 2 × 2 tables that simplifies to the standard Fisher's test in the single 2 × 2 table case, and propose a sample size calculation method that can be useful for designing a study with rare cell frequencies.
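
As a hedged illustration, the sketch below runs Fisher's exact test per stratum (the single-table special case the paper reduces to) and the large-sample Mantel–Haenszel test via statsmodels; the paper's exact stratified method itself is not reproduced here, and the tables are invented.

```python
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical stratified 2x2 tables (rows: exposed/unexposed, cols: event/no event).
strata = [np.array([[3, 17], [1, 19]]),
          np.array([[5, 45], [2, 48]]),
          np.array([[2, 8], [0, 10]])]

# Per-stratum exact tests (the single-table special case of the proposed method).
for i, tab in enumerate(strata, 1):
    _, p = fisher_exact(tab)
    print(f"stratum {i}: Fisher exact p = {p:.4f}")

# Large-sample stratified analysis: Mantel-Haenszel test of a common odds ratio.
st = StratifiedTable(strata)
res = st.test_null_odds(correction=True)
print(f"Mantel-Haenszel X2 = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```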
