Similar Documents
20 similar documents found (search time: 343 ms)
1.

Background  

Genomics and proteomics analyses regularly involve the simultaneous testing of hundreds of hypotheses, on either numerical or categorical data. To control the occurrence of false positives, validation procedures based on multiple-testing correction, such as Bonferroni and Benjamini–Hochberg, and on re-sampling, such as permutation tests, are frequently used. Despite the known power of permutation-based tests, most available tools offer such tests only for the t-test or ANOVA. Less attention has been given to tests for categorical data, such as the chi-square. This project takes a first step by developing an open-source software tool, Ptest, that addresses the need for public software incorporating these and other statistical tests, with options for correcting for multiple hypotheses.
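As a concrete illustration of the kind of analysis such a tool supports (this is a generic sketch, not the Ptest implementation), the snippet below runs a permutation-based chi-square test of association between two categorical variables and applies Benjamini–Hochberg correction to a collection of p-values; the function names and data are hypothetical, and only numpy is assumed.

```python
import numpy as np

def crosstab(a, b):
    """Contingency table of two categorical vectors."""
    _, ia = np.unique(a, return_inverse=True)
    _, ib = np.unique(b, return_inverse=True)
    table = np.zeros((ia.max() + 1, ib.max() + 1))
    np.add.at(table, (ia, ib), 1)
    return table

def chi2_stat(table):
    """Pearson chi-square statistic for a contingency table."""
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

def permutation_chi2(labels, categories, n_perm=10_000, seed=0):
    """Permutation p-value: shuffle one variable, recompute the chi-square statistic."""
    rng = np.random.default_rng(seed)
    observed = chi2_stat(crosstab(labels, categories))
    exceed = sum(
        chi2_stat(crosstab(rng.permutation(labels), categories)) >= observed
        for _ in range(n_perm)
    )
    return (exceed + 1) / (n_perm + 1)   # add-one correction avoids p = 0

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, dtype=float)
    m, order = len(p), np.argsort(p)
    raw = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(raw[::-1])[::-1]       # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out

# Hypothetical usage: one permutation test per gene, then BH correction across genes.
genotype = np.repeat(["wt", "mut"], 50)
phenotype = np.random.default_rng(1).choice(["low", "high"], size=100)
p = permutation_chi2(genotype, phenotype)
print("permutation p:", p, "BH-adjusted:", benjamini_hochberg([p, 0.001, 0.04, 0.2]))
```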

2.
Many authors apply statistical tests to sets of relevés obtained using non-random methods to investigate phytosociological and ecological relationships. Frequently applied tests include the t-test, ANOVA, the Mann-Whitney test, the Kruskal-Wallis test, the chi-square test (of independence, goodness-of-fit, and homogeneity), the Kolmogorov-Smirnov test, concentration analysis, tests of linear correlation and of the Spearman rank correlation coefficient, computer-intensive methods (such as randomization and re-sampling), and others. I examined the reliability of the results of such tests applied to non-random data by checking the tests' requirements against statistical theory. I conclude that, when used for such data, these statistical tests do not provide reliable support for the inferences made, because the non-randomness of the samples violates the requirement that observations be independent, and because different parts of the investigated communities did not have an equal chance of being represented in the sample. Additional requirements, e.g. normality and homoscedasticity, were also neglected in several cases. The importance of data satisfying the basic requirements of statistical tests is stressed.

3.
Various tests for trend in toxicological studies are reviewed, and the advantages and disadvantages of such procedures are discussed. New tests are introduced to overcome difficulties common to many current procedures, such as downturns at higher doses and limited availability of numerical data.

4.
Summary. Regression models are often used to test for cause-effect relationships from data collected in randomized trials or experiments. This practice has deservedly come under heavy scrutiny, because commonly used models such as linear and logistic regression will often not capture the actual relationships between variables, and incorrectly specified models potentially lead to incorrect conclusions. In this article, we focus on hypothesis tests of whether the treatment given in a randomized trial has any effect on the mean of the primary outcome, within strata of baseline variables such as age, sex, and health status. Our primary concern is ensuring that such hypothesis tests have correct type I error for large samples. Our main result is that for a surprisingly large class of commonly used regression models, standard regression-based hypothesis tests (but using robust variance estimators) are guaranteed to have correct type I error for large samples, even when the models are incorrectly specified. To the best of our knowledge, this robustness of such model-based hypothesis tests to incorrectly specified models was previously unknown for Poisson regression models and for other commonly used models we consider. Our results have practical implications for understanding the reliability of commonly used, model-based tests for analyzing randomized trials.
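For context, a hedged sketch of the kind of model-based test discussed above: a Poisson regression of a count outcome on treatment and baseline covariates, with the Wald test of the treatment coefficient computed from a heteroskedasticity-robust (sandwich) covariance. The data, variable names, and use of statsmodels are illustrative assumptions, not taken from the article.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical randomized-trial data: binary treatment, baseline covariates, count outcome.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n),
    "sex": rng.integers(0, 2, n),
})
df["y"] = rng.poisson(np.exp(0.2 + 0.01 * df["age"]))   # no true treatment effect

# The Poisson model may be misspecified, but the treatment test uses a robust
# (sandwich) variance estimator, which is the safeguard the article analyzes.
fit = smf.glm("y ~ treat + age + sex", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print("robust p-value for treatment:", fit.pvalues["treat"])
```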

5.
Most biologists agree that at each succeeding level of biological organization new properties appear that would not have been evident even by the most intense and careful examination of lower levels of organization. These levels might be crudely characterized as subcellular, cellular, organ, organism, population, multispecies, community, and ecosystem. The field of ecology developed because even the most meticulous study of single species could not accurately predict how several such species might interact competitively or in predator-prey interactions and the like. Moreover, interactions of biotic and abiotic materials at the level of organization called ecosystem are so complex that they could not be predicted from a detailed examination of isolated component parts. This preamble may seem platitudinous to most biologists who have heard this many times before. This makes it all the more remarkable that in the field of toxicity testing an assumption is made that responses at levels of biological organization above single species can be reliably predicted with single species toxicity tests. Unfortunately, this assumption is rarely explicitly stated and, therefore, often passes unchallenged. When the assumption is challenged, a response is that single species tests have been used for years and no adverse ecosystem or multispecies effects were noted. This could be because single species tests are overly protective when coupled with an enormous application factor, or because such effects were simply not detected because there were no systematic, scientifically sound studies carried out to detect them. Probably both of these possibilities occur. However, the important factor is that no scientifically justifiable evidence exists to indicate the degree of reliability with which one may use single species tests to predict responses at higher levels of biological organization. One might speculate that the absence of such information is due to the paucity of reliable tests at higher levels of organization. This situation certainly exists but does not explain the lack of pressure to develop such tests. The most pressing need in the field of toxicity testing is not further perfection of single species tests, but rather the development of parallel tests at higher levels of organization. These need not be inordinately expensive, time consuming, or require any more skilled professionals than single species tests. Higher level tests merely require a different type of biological background. Theoretical ecologists have been notoriously reluctant to contribute to this effort, and, as a consequence, such tests must be developed by associations of professional biologists and other organizations with similar interests.

6.
The most common tests for types and antitypes in configural frequency analysis are normal approximations of exact tests. In this paper, such statistics are discussed under the complete independence model and under the fixed margins model. It turns out that these test statistics are not acceptable when the number of simultaneously performed tests is large or when the expected frequencies are small. In these cases, the use of exact tests is advocated, and some existing computer programs for such tests are indicated. A normal approximation based on the strong version of the De Moivre-Laplace limit theorem is also discussed. Empirical examples are given from longitudinal data describing the psychological development of boys.
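To make the contrast concrete, here is a small sketch (not taken from the paper) comparing an exact binomial tail probability with its normal approximation for a single configural frequency analysis cell; the cell count, sample size, and expected probability under independence are hypothetical.

```python
import math

def exact_p_upper(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p): one-sided test for a 'type'."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_approx_p_upper(k, n, p, continuity=True):
    """Normal approximation to P(X >= k), optionally continuity-corrected."""
    mean, sd = n * p, math.sqrt(n * p * (1 - p))
    z = ((k - 0.5) if continuity else k) - mean
    return 0.5 * math.erfc(z / sd / math.sqrt(2))   # upper-tail standard normal

# Hypothetical cell: 12 observed configurations where independence predicts
# probability 0.01 in a sample of 400 (expected count 4, i.e. a small expectation).
k, n, p = 12, 400, 0.01
print("exact       :", exact_p_upper(k, n, p))
print("approximate :", normal_approx_p_upper(k, n, p))
```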

7.
For nonnormal data we suggest a test of location based on a broader family of distributions than normality. Such a test will, in a sense, fall between the standard parametric and nonparametric tests. We see that the Wald tests based on this family of distributions have some advantages over the score tests and that they perform well in comparison to standard parametric and nonparametric tests in a variety of situations. We also consider when and how to apply such tests in practice.

8.
Limited access to diagnostic services and the poor performance of current tests result in a failure to detect millions of tuberculosis cases each year. An accurate test that could be used at the point of care to allow faster initiation of treatment would decrease death rates and could reduce disease transmission. Previous attempts to develop such a test have failed, and success will require the marriage of biomarkers that are highly predictive for the disease with innovative technology that is reliable and affordable. Here, we review the status of research into point-of-care tests for active tuberculosis and discuss barriers to the development of such diagnostic tests.

9.
W. W. Piegorsch, Biometrics, 1990, 46(2): 309-316
Dichotomous response models are common in many experimental settings. Often, concomitant explanatory variables are recorded, and a generalized linear model, such as a logit model, is fit. In some cases, interest in specific model parameters is directed only at one-sided departures from some null effect. In these cases, procedures can be developed for testing the null effect against a one-sided alternative. These include Bonferroni-type adjustments of univariate Wald tests, and likelihood ratio tests that employ inequality-constrained multivariate theory. This paper examines such tests of significance. Monte Carlo evaluations are undertaken to examine the small-sample properties of the various procedures. The procedures are seen to perform fairly well, generally achieving their nominal sizes at total sample sizes near 100 experimental units. Extensions to the problem of one-sided tests against a control or standard are also considered.
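A minimal sketch in the spirit of the procedures examined (illustrative only, not the paper's code): fit a logit model and form one-sided Wald p-values for the dose coefficients against the alternative of a positive effect, then apply a Bonferroni adjustment. The simulated data, variable names, and use of statsmodels/scipy are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

# Hypothetical dichotomous-response data with two dose covariates.
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({"dose1": rng.uniform(0, 1, n), "dose2": rng.uniform(0, 1, n)})
eta = -0.5 + 0.8 * df["dose1"] + 0.2 * df["dose2"]
df["response"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

fit = smf.logit("response ~ dose1 + dose2", data=df).fit(disp=0)

# One-sided Wald tests of H0: beta = 0 against H1: beta > 0, Bonferroni-adjusted.
params = ["dose1", "dose2"]
z = (fit.params[params] / fit.bse[params]).to_numpy()
p_one_sided = norm.sf(z)                               # upper-tail p-values
p_bonf = np.minimum(p_one_sided * len(params), 1.0)
print(pd.DataFrame({"z": z, "p_one_sided": p_one_sided, "p_bonf": p_bonf}, index=params))
```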

10.
S. Senn, Biometrics, 2007, 63(1): 296-298
A proposal to improve trend tests by using noninteger scores is examined. It is concluded that, despite improved power, such tests are usually inferior to the simpler integer-scored approach.
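For context, a small sketch (not from the note) of a Cochran–Armitage-type trend statistic evaluated once with simple integer scores and once with an alternative noninteger scoring; the dose-group counts are hypothetical.

```python
import math
import numpy as np

def trend_z(events, totals, scores):
    """Cochran-Armitage-type trend statistic for binomial counts across ordered groups."""
    events, totals, scores = (np.asarray(x, float) for x in (events, totals, scores))
    N, R = totals.sum(), events.sum()
    p_bar = R / N
    t = np.sum(scores * (events - totals * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(totals * scores**2)
                                 - np.sum(totals * scores)**2 / N)
    return t / math.sqrt(var)

# Hypothetical dose-response data: events out of totals in four ordered dose groups.
events, totals = [2, 4, 7, 12], [50, 50, 50, 50]
print("integer scores    z =", trend_z(events, totals, [0, 1, 2, 3]))
print("noninteger scores z =", trend_z(events, totals, [0, 1, 2.5, 4.5]))
```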

11.
A total of 387 yeasts from the contents of the digestive tracts of domestic animals and poultry were identified by slide agglutination tests using factor antisera and urease tests. The results of this serological test were very satisfactory with respect to accuracy and rapidity, particularly when performed in combination with concomitant physiological tests only for assimilation of inositol and potassium nitrate. It may be concluded that such a combination of serological and biological tests is very useful for identifying yeast strains from various sources.

12.
The past year has seen remarkable translation of cellular and gene therapies, with U.S. Food and Drug Administration (FDA) approval of three chimeric antigen receptor (CAR) T-cell products, multiple gene therapy products, and the initiation of countless other pivotal clinical trials. What makes these new drugs most remarkable is their path to commercialization: they have unique requirements compared with traditional pharmaceutical drugs and require different potency assays, critical quality attributes and parameters, pharmacological and toxicological data, and in vivo efficacy testing. What's more, each biologic requires its own unique set of tests and parameters. Here we describe the unique tests associated with ex vivo–expanded tumor-associated antigen T cells (TAA-T). These tests include functional assays to determine potency, specificity, and identity; tests for pathogenic contaminants, such as bacteria and fungi, as well as other contaminants such as Mycoplasma and endotoxin; tests for product characterization; tests to evaluate T-cell persistence and product efficacy; and finally, recommendations for critical quality attributes and parameters associated with the expansion of TAA-Ts.

13.
Medical tests are essential to diagnosis and therapeutic management. Treating physicians may consider immediate results valuable, but obtaining such results requires time management and the availability of specific tests. Reliability studies and cost-benefit analyses make it possible to assess the appropriateness of these practices.

14.
Amplified DNA technologies such as the polymerase chain reaction (PCR) and the ligase chain reaction (LCR) are new techniques for the diagnosis of genital chlamydial infections in both men and women. These tests are highly sensitive and specific in detecting chlamydial genes in different specimen types, such as genital samples as well as non-invasive specimens such as urine and vulval smears. Because these techniques remain highly reliable even when performed on non-invasive specimen types, amplification tests allow chlamydial screening, especially of high-risk persons, as the basis of chlamydia control programs.

15.
Perspective: detecting adaptive molecular polymorphism: lessons from the MHC
Abstract. In the 1960s, when population geneticists first began to collect data on the amount of genetic variation in natural populations, balancing selection was invoked as a possible explanation for how such high levels of molecular variation are maintained. However, the predictions of the neutral theory of molecular evolution have since become the standard by which cases of balancing selection may be inferred. Here we review the evidence for balancing selection acting on the major histocompatibility complex (MHC) of vertebrates, a genetic system that defies many of the predictions of neutrality. We apply many widely used tests of neutrality to MHC data as a benchmark for assessing the power of these tests. These tests can be categorized as detecting selection in the current generation, over the history of populations, or over the histories of species. We find that selection is not detectable in MHC datasets in every generation, population, or every evolutionary lineage. This suggests either that selection on the MHC is heterogeneous or that many of the current neutrality tests lack sufficient power to detect the selection consistently. Additionally, we identify a potential inference problem associated with several tests of neutrality. We demonstrate that the signals of selection may be generated in a relatively short period of microevolutionary time, yet these signals may take exceptionally long periods of time to be erased in the absence of selection. This is especially true for the neutrality test based on the ratio of nonsynonymous to synonymous substitutions. Inference of the nature of the selection events that create such signals should be approached with caution. However, a combination of tests on different time scales may overcome such problems.

16.
The methodological analysis of the main problems of serological diagnosis has made it possible to pick out the tests of the agglutination type, especially those made with the use of sensitized erythrocytes, as most suitable for mass use. The comparison made between different agglutination tests has confirmed the fact that the use of sensitized erythrocytes in such tests is highly effective. The comparison of the diagnostic possibilities of agglutination tests involving the use of erythrocytic reagents with those offered by enzyme immunoassays has demonstrated that tests based on the phenomenon of passive hemagglutination have great possibilities and hold considerable promise not only for mass immunological surveys, but also for research work.

17.
L. Wu, P. B. Gilbert, Biometrics, 2002, 58(4): 997-1004
At the present time, many AIDS clinical trials compare drug therapies by a time-to-event primary endpoint that measures the durability of suppression of HIV replication. For such studies, survival differences tend to occur early and/or late in the follow-up period due to drug differences in initial potency and/or durability of efficacy, and detecting these differences is of primary interest. We propose a weighted log-rank statistic that emphasizes early and/or late survival differences. We also consider some versatile tests that also emphasize these differences but are sensitive to a wider range of alternatives. The performances of the new tests are evaluated in numerical studies. For the alternatives of interest, the new tests show greater power and flexibility than commonly used weighted log-rank tests and related versatile tests. When the main interest is in detecting early and/or late survival differences, these tests may be preferable to the other versatile and weighted log-rank tests that have been studied.
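A hedged numpy sketch of a weighted log-rank statistic of the general kind described: Fleming–Harrington G(ρ, γ) weights built from the pooled Kaplan–Meier estimate, where ρ > 0 emphasizes early differences and γ > 0 emphasizes late ones. This is a generic illustration rather than the authors' proposed statistic, and the survival data are hypothetical.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=0.0, gamma=1.0):
    """Fleming-Harrington G(rho, gamma) weighted log-rank Z statistic for two groups."""
    time, event, group = map(np.asarray, (time, event, group))
    num = var = 0.0
    surv = 1.0                                  # pooled Kaplan-Meier just before t
    for t in np.unique(time[event == 1]):       # distinct event times, ascending
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()

        w = surv**rho * (1.0 - surv)**gamma     # weight based on S(t-)
        e1 = d * n1 / n                         # expected events in group 1
        v = d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1) if n > 1 else 0.0
        num += w * (d1 - e1)
        var += w**2 * v
        surv *= 1.0 - d / n                     # update pooled Kaplan-Meier

    return num / np.sqrt(var)

# Hypothetical two-arm trial with administrative censoring at t = 20.
rng = np.random.default_rng(2)
raw = np.concatenate([rng.exponential(10, 100), rng.exponential(14, 100)])
event = (raw < 20).astype(int)
time = np.minimum(raw, 20)
group = np.repeat([0, 1], 100)

print("late-difference weighted Z :", fh_weighted_logrank(time, event, group, rho=0, gamma=1))
print("early-difference weighted Z:", fh_weighted_logrank(time, event, group, rho=1, gamma=0))
```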

18.
Disease screening is a fundamental part of health care. To evaluate the accuracy of a new screening modality, ideally the results of the screening test are compared with those of a definitive diagnostic test in a set of study subjects. However, definitive diagnostic tests are often invasive and cannot be applied to subjects whose screening tests are negative for disease. For example, in cancer screening, the assessment of true disease status requires a biopsy sample, which for ethical reasons can only be obtained if a subject's screening test indicates presence of cancer. Although the absolute accuracy of screening tests cannot be evaluated in such circumstances, it is possible to compare the accuracies of screening tests. Specifically, using relative true positive rate (the ratio of the true positive rate of one test to another) and relative false positive rate (the ratio of the false positive rates of two tests) as measures of relative accuracy, we show that inference about relative accuracy can be made from such studies. Analogies with case-control studies can be drawn, where inference about absolute risk cannot be made but inference about relative risk can. In this paper, we develop a marginal regression analysis framework for making inference about relative accuracy when only screen positives are followed for true disease. In this context, factors influencing the relative accuracies of tests can be evaluated. It is important to determine such factors in order to understand circumstances in which one test is preferable to another. The methods are applied to two cancer screening studies, one concerning the effect of race on screening for prostate cancer and the other concerning the effect of tumour grade on the detection of cervical cancer with cytology versus cervicography screening.
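To make the ratio idea concrete: when both screening tests are applied to every subject and only screen positives are verified, the unknown totals of diseased and non-diseased subjects cancel from the ratios, so the relative true and false positive rates can be estimated from verified screen positives alone. The toy counts below are hypothetical, and this is only the simplest version of the identity; the paper's marginal regression framework is far more general.

```python
# Hypothetical paired screening study: tests A and B applied to every subject,
# disease status verified (e.g., by biopsy) only if at least one test is positive.
tp_a, tp_b = 84, 70     # verified diseased subjects positive on A / on B
fp_a, fp_b = 160, 120   # verified non-diseased subjects positive on A / on B

# TPR_A / TPR_B = (tp_a / n_diseased) / (tp_b / n_diseased): the unknown
# n_diseased cancels, and likewise n_non_diseased cancels for the FPR ratio.
rTPR = tp_a / tp_b
rFPR = fp_a / fp_b
print(f"relative TPR (A vs B) = {rTPR:.2f}, relative FPR (A vs B) = {rFPR:.2f}")
```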

19.
S. Geisser, W. Johnson, Biometrics, 1992, 48(3): 839-852
We consider the problem of deciding optimally whether a characteristic exists based on one or two screening tests. We discuss the relative merits of giving either one or two tests, including the order in which they might be given, as well as their costs. Operating in the Bayesian mode, we derive posterior distributions for the accuracies of the tests and the prevalence of the characteristic. Applications to detecting rare conditions, such as the AIDS virus, are discussed.

20.
G. Achaz, Genetics, 2008, 179(3): 1409-1424
Many data sets one could use for population genetics contain artifactual sites, i.e., sequencing errors. Here, we first explore the impact of such errors on several common summary statistics, assuming that sequencing errors are mostly singletons. We thus show that in the presence of those errors, estimators of θ can be strongly biased. We further show that even with a moderate number of sequencing errors, neutrality tests based on the frequency spectrum reject neutrality. This implies that analyses of data sets with such errors will systematically lead to wrong inferences of evolutionary scenarios. To avoid these errors, we propose two new estimators of θ that ignore singletons, as well as two new tests, Y and Y*, that can be used to test neutrality despite sequencing errors. All in all, we show that even though singletons are ignored, these new tests retain some power to detect deviations from a standard neutral model. We therefore advise the use of these new tests to strengthen conclusions drawn from suspicious data sets.
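In the spirit of the estimators described (a simplified sketch, not the paper's exact definitions), a Watterson-type θ can be made robust to singleton errors by dropping the singleton class from both the observed count of segregating sites and the expected harmonic-sum denominator; the site-frequency spectrum below is hypothetical.

```python
import numpy as np

def theta_watterson(sfs):
    """Watterson's theta from an unfolded site-frequency spectrum, where sfs[i]
    is the number of sites whose derived allele is carried by i+1 of n sequences."""
    n = len(sfs) + 1
    a_n = np.sum(1.0 / np.arange(1, n))          # sum_{i=1}^{n-1} 1/i
    return np.sum(sfs) / a_n

def theta_watterson_no_singletons(sfs):
    """Watterson-type estimator ignoring singletons, the class most inflated by
    sequencing errors: drop the i = 1 class and its expected contribution of 1."""
    n = len(sfs) + 1
    return np.sum(sfs[1:]) / np.sum(1.0 / np.arange(2, n))   # denominator a_n - 1

# Hypothetical spectrum for n = 10 sequences with an excess of singletons,
# as produced by sequencing errors.
sfs = np.array([30, 8, 5, 4, 3, 2, 2, 1, 1])
print("theta_W, all sites     :", theta_watterson(sfs))
print("theta_W, no singletons :", theta_watterson_no_singletons(sfs))
```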
