Similar Articles
20 similar articles found.
1.
Ryman N, Jorde PE. Molecular Ecology 2001, 10(10): 2361-2373
A variety of statistical procedures are commonly employed when testing for genetic differentiation. In a typical situation two or more samples of individuals have been genotyped at several gene loci by molecular or biochemical means, and in a first step a statistical test for allele frequency homogeneity is performed at each locus separately, using, e.g., the contingency chi-square test, Fisher's exact test, or some modification thereof. In a second step the results from the separate tests are combined for evaluation of the joint null hypothesis that there is no allele frequency difference at any locus, corresponding to the important case where the samples would be regarded as drawn from the same statistical and, hence, biological population. Presently, there are two conceptually different strategies in use for testing the joint null hypothesis of no difference at any locus. One approach is based on the summation of chi-square statistics over loci. Another method is employed by investigators applying the Bonferroni technique (adjusting the P-value required for rejection to account for the elevated alpha errors when performing multiple tests simultaneously) to test whether the heterogeneity observed at any particular locus can be regarded as significant when considered separately. Under this approach the joint null hypothesis is rejected if one or more of the component single-locus tests is considered significant under the Bonferroni criterion. We used computer simulations to evaluate the statistical power and realized alpha errors of these strategies when evaluating the joint hypothesis after scoring multiple loci. We find that the 'extended' Bonferroni approach is generally associated with low statistical power and should not be applied in the current setting. Further, and contrary to what might be expected, we find that 'exact' tests typically behave poorly when combined in existing procedures for joint hypothesis testing.
Thus, while exact tests are generally to be preferred over approximate ones when testing each particular locus, approximate tests such as the traditional chi-square seem preferable when addressing the joint hypothesis.
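To make the two strategies this abstract contrasts concrete, here is a minimal Python sketch (standard library only) for three hypothetical biallelic loci with 1 df each; the per-locus chi-square values are invented for illustration, not taken from the paper:

```python
import math

def chisq_sf_df1(x):
    """Upper-tail p-value of a 1-df chi-square statistic."""
    return math.erfc(math.sqrt(x / 2.0))

def chisq_sf_df3(x):
    """Upper-tail p-value for 3 df (closed form for odd degrees of freedom)."""
    return math.erfc(math.sqrt(x / 2.0)) + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0)

stats = [4.2, 3.1, 5.0]                       # per-locus contingency chi-squares
pvals = [chisq_sf_df1(s) for s in stats]

# Strategy 1: sum the chi-squares over loci, refer to chi-square with summed df.
p_joint = chisq_sf_df3(sum(stats))

# Strategy 2: 'extended' Bonferroni -- reject the joint null if any single
# locus is significant at alpha divided by the number of loci.
alpha = 0.05
reject_bonferroni = min(pvals) < alpha / len(stats)

print(round(p_joint, 4))      # ~0.0064: the summed test rejects at alpha = 0.05
print(reject_bonferroni)      # False: the Bonferroni route misses the joint signal
```

With three loci each showing modest heterogeneity, the summation strategy pools the evidence and rejects, while the extended Bonferroni criterion does not, illustrating the power loss the abstract reports.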

2.
Toxicity testing for regulatory purposes raises the question of test selection for a particular endpoint. Given the public's concern for animal welfare, test selection is a multi-objective decision problem that requires balancing information outcome, animal welfare loss, and monetary testing costs. This paper demonstrates the applicability of cost-effectiveness analysis as a decision-support tool for test selection in a regulatory context such as the new European chemicals legislation (REACH). We distinguish different decision-making perspectives, in particular the regulator's and the chemical industry's, and we discuss how cost-effectiveness analysis can be applied to test selection from both perspectives. Furthermore, we show how animal welfare goals can be included in cost-effectiveness analyses, and we provide a three-dimensional extension to the standard cost-effectiveness analysis for cases where animal welfare loss cannot be valued in monetary terms. To illustrate how cost-effectiveness analysis works in different settings, we apply our model to a simple case of selecting short-term tests for mutagenicity. We demonstrate that including sufficiently high values for animal welfare induces cost-effective replacements of animal testing. Furthermore, we show that the regulator and chemical companies face different tradeoffs when selecting tests. This may lead to different choices of tests or testing systems.

3.
The part played by time in ethics is often taken for granted, yet time is essential to moral decision making. This paper looks at time in ethical decisions about having a genetic test. We use a patient-centred approach, combining empirical research methods with normative ethical analysis to investigate the patients' experience of time in (i) prenatal testing of a foetus for a genetic condition, (ii) predictive or diagnostic testing for breast and colon cancer, or (iii) testing for Huntington's disease (HD). We found that participants often manipulated their experience of time, either using a stepwise process of microdecisions to extend it or, under the time pressure of pregnancy, changing their temporal 'depth of field'. We discuss the implications of these strategies for normative concepts of moral agency, and for clinical ethics.

4.
Identifying subgroups of patients with an enhanced response to a new treatment has become an area of increased interest in the last few years. When there is knowledge about possible subpopulations with an enhanced treatment effect before the start of a trial, it might be beneficial to set up a testing strategy that tests for a significant treatment effect not only in the full population, but also in these prespecified subpopulations. In this paper, we present a parametric multiple testing approach for tests in multiple populations for dose-finding trials. Our approach is based on the MCP-Mod methodology, which uses multiple comparison procedures (MCPs) to test for a dose-response signal, while considering multiple possible candidate dose-response shapes. Our proposed methods allow for heteroscedastic error variances between populations and control the family-wise error rate over tests in multiple populations and for multiple candidate models. We show in simulations that the proposed multipopulation testing approaches can increase the power to detect a significant dose-response signal over the standard single-population MCP-Mod when the specified subpopulation has an enhanced treatment effect.

5.
In two-stage group sequential trials with a primary and a secondary endpoint, the overall type I error rate for the primary endpoint is often controlled by an α-level boundary, such as an O'Brien-Fleming or Pocock boundary. Following a hierarchical testing sequence, the secondary endpoint is tested only if the primary endpoint achieves statistical significance either at an interim analysis or at the final analysis. To control its type I error rate, the secondary endpoint is tested using a Bonferroni procedure or any α-level group sequential method. In comparison with marginal testing, there is an overall power loss for the test of the secondary endpoint since a claim of a positive result depends on the significance of the primary endpoint in the hierarchical testing sequence. We propose two group sequential testing procedures with improved secondary power: the improved Bonferroni procedure and the improved Pocock procedure. The proposed procedures use the correlation between the interim and final statistics for the secondary endpoint while applying graphical approaches to transfer the significance level from the primary endpoint to the secondary endpoint. The procedures control the familywise error rate (FWER) strongly by construction, and this is confirmed via simulation. We also compare the proposed procedures with other commonly used group sequential procedures in terms of control of the FWER and the power of rejecting the secondary hypothesis. An example is provided to illustrate the procedures.

6.
In data analysis using dimension reduction methods, the main goal is to summarize how the response is related to the covariates through a few linear combinations. One key issue is to determine the number of independent, relevant covariate combinations, which is the dimension of the sufficient dimension reduction (SDR) subspace. In this work, we propose an easily applied approach to conduct inference for the dimension of the SDR subspace, based on augmentation of the covariate set with simulated pseudo-covariates. Applying the partitioning principle to the possible dimensions, we use rigorous sequential testing to select the dimensionality, by comparing the strength of the signal arising from the actual covariates to that appearing to arise from the pseudo-covariates. We show that under a "uniform direction" condition, our approach can be used in conjunction with several popular SDR methods, including sliced inverse regression. In these settings, the test statistic asymptotically follows a beta distribution and therefore is easily calibrated. Moreover, the family-wise type I error rate of our sequential testing is rigorously controlled. Simulation studies and an analysis of newborn anthropometric data demonstrate the robustness of the proposed approach, and indicate that the power is comparable to or greater than the alternatives.

7.
Much forensic inference based upon DNA evidence is made assuming Hardy-Weinberg Equilibrium (HWE) for the genetic loci being used. Several statistical tests to detect and measure deviation from HWE have been devised, and their limitations become more obvious when testing for deviation within multiallelic DNA loci. The most popular methods, the chi-square and likelihood-ratio tests, are based on asymptotic results and cannot guarantee a good performance in the presence of low frequency genotypes. Since the parameter space dimension increases quadratically with the number of alleles, some authors suggest applying sequential methods, where the multiallelic case is reformulated as a sequence of "biallelic" tests. However, in this approach it is not obvious how to assess the general evidence of the original hypothesis; nor is it clear how to establish the significance level for its acceptance/rejection. In this work, we introduce a straightforward method for the multiallelic HWE test, which overcomes the aforementioned issues of sequential methods. The core theory for the proposed method is given by the Full Bayesian Significance Test (FBST), an intuitive Bayesian approach which does not assign positive probabilities to zero measure sets when testing sharp hypotheses. We compare FBST performance to chi-square, likelihood-ratio and Markov chain tests in three numerical experiments. The results suggest that FBST is a robust and high performance method for the HWE test, even in the presence of several alleles and small sample sizes.
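For context, a quick sketch of the classical biallelic chi-square HWE test that serves as the baseline the FBST is compared against; the genotype counts below are invented for illustration:

```python
import math

def hwe_chisq(n_aa, n_ab, n_bb):
    """1-df chi-square test of Hardy-Weinberg proportions at a biallelic locus."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                      # frequency of allele A
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    stat = sum((o - e) ** 2 / e
               for o, e in zip([n_aa, n_ab, n_bb], expected))
    pval = math.erfc(math.sqrt(stat / 2.0))              # upper tail, 1 df
    return stat, pval

stat, pval = hwe_chisq(40, 40, 20)   # mild heterozygote deficit
print(round(stat, 2))                # 2.78
print(pval > 0.05)                   # True: no significant deviation detected
```

With many alleles the table of genotype counts grows quadratically and expected counts become small, which is exactly where this asymptotic test loses reliability.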

8.
Controlling for the multiplicity effect is an essential part of determining statistical significance in large-scale single-locus association genome scans on Single Nucleotide Polymorphisms (SNPs). Bonferroni adjustment is a commonly used approach due to its simplicity, but is conservative and has low power for large-scale tests. The permutation test, which is a powerful and popular tool, is computationally expensive and may mislead in the presence of family structure. We propose a computationally efficient and powerful multiple testing correction approach for Linkage Disequilibrium (LD) based Quantitative Trait Loci (QTL) mapping on the basis of graphical weighted-Bonferroni methods. The proposed multiplicity adjustment method synthesizes weighted Bonferroni-based closed testing procedures into a powerful and versatile graphical approach. By tailoring different priorities for the two hypothesis tests involved in LD based QTL mapping, we are able to increase power and maintain computational efficiency and conceptual simplicity. The proposed approach enables strong control of the familywise error rate (FWER). The performance of the proposed approach as compared to the standard Bonferroni correction is illustrated by simulation and real data. We observe a consistent and moderate increase in power under all simulated circumstances, among different sample sizes, heritabilities, and numbers of SNPs. We also applied the proposed method to a real outbred mouse HDL cholesterol QTL mapping project, where we detected the significant QTLs that were highlighted in the literature while still ensuring strong control of the FWER.

9.
The conventional nonparametric tests in survival analysis, such as the log-rank test, assess the null hypothesis that the hazards are equal at all times. However, hazards are hard to interpret causally, and other null hypotheses are more relevant in many scenarios with survival outcomes. To allow for a wider range of null hypotheses, we present a generic approach to define test statistics. This approach utilizes the fact that a wide range of common parameters in survival analysis can be expressed as solutions of differential equations. Thereby, we can test hypotheses based on survival parameters that solve differential equations driven by cumulative hazards, and it is easy to implement the tests on a computer. We present simulations, suggesting that our tests perform well for several hypotheses in a range of scenarios. As an illustration, we apply our tests to evaluate the effect of adjuvant chemotherapies in patients with colon cancer, using data from a randomized controlled trial.

10.
Hung MC, Swallow WH. Biometrics 2000, 56(1): 204-212
In group testing, the test unit consists of a group of individuals. If the group test is positive, then one or more individuals in the group are assumed to be positive. A group observation in binomial group testing can be, say, the test result (positive or negative) for a pool of blood samples that come from several different individuals. It has been shown that, when the proportion (p) of infected individuals is low, group testing is often preferable to individual testing for identifying infected individuals and for estimating proportions of those infected. We extend the potential applications of group testing to hypothesis-testing problems wherein one wants to test for a relationship between p and a classification or quantitative covariable. Asymptotic relative efficiencies (AREs) of tests based on group testing versus the usual individual testing are obtained. The Pitman ARE strongly favors group testing in many cases. Small-sample results from simulation studies are given and are consistent with the large-sample (asymptotic) findings. We illustrate the potential advantages of group testing in hypothesis testing using HIV-1 seroprevalence data.
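A minimal sketch of the estimation side of binomial group testing mentioned here (all numbers invented): with n pools of size s and T positive pools, the standard estimator of the individual-level proportion is p-hat = 1 - (1 - T/n)^(1/s).

```python
def estimate_p(n_positive_pools, n_pools, pool_size):
    """MLE of the individual-level proportion from pooled test results."""
    frac_negative = 1.0 - n_positive_pools / n_pools   # fraction of negative pools
    return 1.0 - frac_negative ** (1.0 / pool_size)

# 40 pooled blood samples of 10 individuals each; 8 pools test positive.
p_hat = estimate_p(8, 40, 10)
print(round(p_hat, 4))   # 0.0221 -- estimated from 40 tests instead of 400
```

When p is small, most pools test negative, so the pooled design spends far fewer assays per unit of information than individual testing, which is what drives the favorable AREs reported in the abstract.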

11.
In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy number regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package.

12.
Monte-Carlo simulation methods are commonly used for assessing the performance of statistical tests under finite sample scenarios. They help us ascertain the nominal level for tests with approximate level, e.g. asymptotic tests. Additionally, a simulation can assess the quality of a test under the alternative. The latter can be used to compare new tests with established tests under certain assumptions in order to determine the preferable test given characteristics of the data. The key problem for such investigations is the choice of a goodness criterion. We extend the expected p-value, as considered by Sackrowitz and Samuel-Cahn (1999), to the context of univariate equivalence tests. This presents an effective tool for evaluating new proposals for equivalence testing because it is independent of the distribution of the test statistic under the null hypothesis. It helps to avoid the often tedious search for the null distribution of test statistics that offer no considerable advantage over already available methods. To demonstrate the usefulness in biometry, a comparison of established equivalence tests with a nonparametric approach is conducted in a simulation study for three distributional assumptions.

13.
Summary Chrysanthemum morifolium Ramat. plants were grown in a soil mix fertilized daily with a balanced solution containing N, P, K, Ca, and Mg at four relative rates (0.5x, 1x, 2x, and 4x). At the end of 4 weeks of vegetative growth, the above-ground portions of the plants were analyzed for elemental content, and the soil mix was analyzed by three soil testing procedures. The N, P, and K contents of chrysanthemum were positively correlated with the reported values of these nutrients in the soil as determined by the Spurway, Penn State, and Intensity-Balance soil tests. Magnesium, as determined by the Penn State and Intensity-Balance soil tests, was negatively correlated with plant Mg content; however, Ca was not significantly correlated with plant Ca in the Penn State test and was negatively correlated in the Intensity-Balance test. The magnitudes of the correlation coefficients between the nutrient content of the plants and the soil test values were similar for all three soil tests, indicating that any of the three can be used. Paper No. 5686 in the Journal Series of the Pennsylvania Agricultural Experiment Station.

14.
Binomial group testing involves pooling individuals into groups and observing a binary response on each group. Results from the group tests can then be used to draw inference about population proportions. Its use as an experimental design has received much attention in recent years, especially in public-health screening experiments and vector-transfer designs in plant pathology. We investigate the benefits of group testing in situations wherein one desires to test whether or not probabilities are increasingly ordered across the levels of an observed qualitative covariate, i.e., across strata of a population or among treatment levels. We use a known likelihood ratio test for individual testing, but extend its use to group-testing situations to show the increases in power conferred by using group testing when operating in this constrained parameter space. We apply our methods to data from an HIV study involving male subjects classified as intravenous drug users.

15.
Extracellular vesicles (EVs) derived from mesenchymal stromal cells (MSCs) may deliver therapeutic effects that are comparable to their parental cells. MSC-EVs are promising agents for the treatment of a variety of diseases. To reach the intermediate goal of clinically testing safety and efficacy of EVs, strategies should strive for efficient translation of current EV research. On the basis of our in vitro and in vivo findings regarding the biological actions of EVs and our experience in manufacturing biological stem cell therapeutics for routine use and clinical testing, we discuss strategies of manufacturing and quality control of umbilical cord-derived MSC-EVs. We introduce guidelines of good manufacturing practice and their practicability along the path from the laboratory to the patient. We present aspects of manufacturing and final product quality testing and highlight the principle of "The process is the product." The approach presented in this perspective article may facilitate translational research during the development of complex biological EV-based therapeutics in a very early stage of manufacturing as well as during early clinical safety and proof-of-concept testing.

16.
Improving the welfare of farm animals depends on our knowledge of how they perceive and interpret their environment; the latter depends on their cognitive abilities. Hence, limited knowledge of the range of cognitive abilities of farm animals is a major concern. An effective approach to explore the cognitive range of a species is to apply automated testing devices, which are still underdeveloped in farm animals, and touchscreen-based devices in particular have rarely been used with domestic hens. We developed an original, fully automated touchscreen device using digital computer-drawn colour pictures and independent touch-sensitive cells adapted for cognitive testing in domestic hens, enabling a wide range of test types from low to high complexity. This study aimed to test the efficiency of our device using two cognitive tests. We focused on tasks related to adaptive capacities to environmental variability, such as flexibility and generalisation capacities, as these are a good starting point for approaching more complex cognitive capacities. We implemented a serial reversal learning task, categorised as a simple cognitive test, and a delayed matching-to-sample (dMTS) task on an identity concept, followed by a generalisation test, categorised as more complex. In the serial reversal learning task, the hens performed equally for the two changing reward contingencies in only three reversal stages. In the dMTS task, the hens increased their performance rapidly throughout the training sessions. Moreover, to the best of our knowledge, we present the first positive result of identity concept generalisation in a dMTS task in domestic hens. Our results provide additional information on the behavioural flexibility and concept understanding of domestic hens. They also support the idea that fully automated devices would improve knowledge of farm animals' cognition.

17.
The complex molecular networks in the cell can give rise to surprising interactions: gene deletions that are synthetically lethal, gene overexpressions that promote stemness or differentiation, synergistic drug interactions that heighten potency. Yet, the number of actual interactions is dwarfed by the number of potential interactions, and discovering them remains a major problem. Pooled screening, in which multiple factors are simultaneously tested for possible interactions, has the potential to increase the efficiency of searching for interactions among a large set of factors. However, pooling also carries with it the risk of masking genuine interactions due to antagonistic influence from other factors in the pool. Here, we explore several theoretical models of pooled screening, allowing for synergy and antagonism between factors, noisy measurements, and other forms of uncertainty. We investigate randomized sequential designs, deriving formulae for the expected number of tests that need to be performed to discover a synergistic interaction, and the optimal size of pools to test. We find that even in the presence of significant antagonistic interactions and testing noise, randomized pooled designs can significantly outperform exhaustive testing of all possible combinations. We also find that testing noise does not affect optimal pool size, and that mitigating noise by a selective approach to retesting outperforms naive replication of all tests. Finally, we show that a Bayesian approach can be used to handle uncertainty in problem parameters, such as the extent of synergistic and antagonistic interactions, resulting in schedules for adapting pool size during the course of testing.
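A toy model in the spirit of this abstract, not the paper's actual formulation: among n factors there is exactly one synergistic pair, and any of n_antag antagonist factors masks a pool it joins. A random pool succeeds only if it contains both pair members and no antagonist, which already exhibits the pool-size tradeoff the abstract analyzes:

```python
import math

def p_success(n, k, n_antag):
    """P(a uniformly random size-k pool holds the pair and no antagonist)."""
    if k < 2 or k > n - n_antag:
        return 0.0
    # Choose the 2 pair members plus k-2 of the remaining non-antagonists.
    return math.comb(n - 2 - n_antag, k - 2) / math.comb(n, k)

def best_pool_size(n, n_antag):
    return max(range(2, n - n_antag + 1), key=lambda k: p_success(n, k, n_antag))

n, n_antag = 100, 5
k = best_pool_size(n, n_antag)
pools_needed = 1.0 / p_success(n, k, n_antag)    # mean of the geometric waiting time
print(k)                                         # 28: larger pools hit the pair more
                                                 # often, but risk masking
print(round(pools_needed, 1), math.comb(n, 2))   # ~63.6 pools vs 4950 exhaustive pairs
```

Even in this crude model, randomized pools locate the interaction in roughly two orders of magnitude fewer tests than exhaustive pairwise testing, consistent with the abstract's qualitative conclusion.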

18.
Health-related direct-to-consumer (DTC) genetic testing has been a controversial practice. Especially problematic is predictive testing for Alzheimer disease (AD), since the disease is incurable, prevention is inconclusive, and testing does not definitively predict an individual's future disease status. In this paper, I examine two contrasting cases of subjects who learn through genetic testing that they have an elevated risk of developing AD later in life. In these cases, the subject's emotional response to the result is related to how well prepared she was for the real-life personal implications of possible test results. Analysis leads to the conclusion that when groups of health-related genetic tests are offered as packages by DTC companies, informed consumer choice is rendered impossible. Moreover, I argue, this marketing approach contravenes US Federal Trade Commission policies for non-deceptive commercial communications. I conclude by suggesting ways to improve the prospects for informed consumer choice in DTC testing.

19.
In many areas of the world, Potato virus Y (PVY) is one of the most economically important disease problems in seed potatoes. In Taiwan, generation 2 (G2) class certified seed potatoes are required by law to be free of detectable levels of PVY. To meet this standard, it is necessary to perform accurate tests at a reasonable cost. We used a two-stage testing design involving group testing, carried out at Taiwan's Seed Improvement and Propagation Station, to identify plants infected with PVY. At the first stage of this two-stage testing design, plants are tested in groups. The second stage involves no retesting for negative test groups and exhaustive testing of all constituent individual samples from positive test groups. In order to minimise costs while meeting government standards, it is imperative to estimate optimal group size. However, because of limited test accuracy, classification errors for diagnostic tests are inevitable; to get a more accurate estimate, it is necessary to adjust for these errors. Therefore, this paper describes an analysis of diagnostic test data in which specimens are grouped for batched testing to offset costs. The optimal batch size is determined by various cost parameters as well as test sensitivity, specificity and disease prevalence. Here, the Bayesian method is employed to deal with uncertainty in these parameters. Moreover, we developed a computer program to determine optimal group size for PVY tests such that the expected cost is minimised even when using imperfect diagnostic tests of pooled samples. Results from this research show that, compared with error-free testing, when the presence of diagnostic testing errors is taken into account, the optimal group size becomes smaller. Higher diagnostic testing costs, lower costs of false negatives or smaller prevalence can all lead to a larger optimal group size.
Regarding the effects of sensitivity and specificity, optimal group size increases as sensitivity increases; however, specificity has little effect on determining optimal group size. From our simulation study, it is apparent that the Bayesian method can update the prior information to more closely approximate the intrinsic characteristics of the parameters of interest. We believe that the results of this study will be useful in the implementation of seed potato certification programmes, particularly those which require zero tolerance for quarantine diseases in certified tubers.

20.
Hellmich M. Biometrics 2001, 57(3): 892-898
To make the most of the substantial overhead expenses of a large group sequential clinical trial, the simultaneous investigation of several competing treatments is becoming more popular. If at some interim analysis any treatment arm reveals itself to be inferior to any other treatment under investigation, this inferior arm may be, or may even need to be, dropped for ethical and/or economic reasons. Recently proposed methods for monitoring and analysis of group sequential clinical trials with multiple treatment arms are compared and discussed. The main focus of the article is on the application and extension of (adaptive) closed testing procedures in the group sequential setting that strongly control the familywise error rate. A numerical example is given for illustration.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号