Similar Articles
20 similar articles retrieved (search time: 78 ms)
1.
The use of multiple hypothesis testing procedures has recently been receiving a great deal of attention from statisticians working in DNA microarray analysis. The traditional FWER-controlling procedures are not very useful in this situation, since the experiments are exploratory by nature and researchers are more interested in controlling the rate of false positives than the probability of making a single erroneous decision. This has led to increased use of FDR (False Discovery Rate) controlling procedures. Genovese and Wasserman proposed a single-step FDR procedure that is an asymptotic approximation to the original Benjamini and Hochberg stepwise procedure. In this paper, we modify the Genovese-Wasserman procedure to force the FDR control closer to the level alpha in the independence setting. Assuming that the data come from a mixture of two normals, we also propose to make this procedure adaptive by first estimating the parameters using the EM algorithm and then plugging these estimates into the above modification of the Genovese-Wasserman procedure. We compare this procedure with the original Benjamini-Hochberg and the SAM thresholding procedures. The FDR control and other properties of this adaptive procedure are verified numerically.
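For reference, a minimal NumPy sketch of the classical Benjamini-Hochberg step-up procedure that the adaptive method above modifies; the p-values and the level alpha below are illustrative, not taken from the paper.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Classical Benjamini-Hochberg step-up procedure.

    Returns a boolean array marking which hypotheses are rejected
    while controlling the FDR at level alpha (under independence).
    """
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    ranked = pvals[order]
    # Largest k such that p_(k) <= (k/m) * alpha; reject hypotheses 1..k.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(ranked <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        k = below.max()
        reject[order[: k + 1]] = True
    return reject

# Illustrative p-values, e.g. from gene-wise tests in a microarray study.
p = np.array([0.0001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74, 0.9])
print(benjamini_hochberg(p, alpha=0.05))
```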

2.
In both animal and human behavioral repertoires, classical expected utility theory is considered a fundamental element of decision making under conditions of uncertainty. This theory has been widely applied to problems of animal behavior and evolutionary game theory, as well as to human economic behavior. The Allais paradox hinges on the avoidance of bankruptcy by humans, or of death by starvation in animals. The paradox reveals that human behavioral patterns are often inconsistent with the predictions of classical expected utility theory as formulated by von Neumann and Morgenstern. None of the many attempts to reformulate utility theory has been entirely successful in resolving this paradox with rigorous logic. We present a simple but novel approach to the theory of decision making, in which utility depends on current wealth and losses are weighted more heavily than gains. Our approach resolves the Allais paradox in a manner that is consistent with how humans formulate decisions under uncertainty. Our results indicate that animals, including humans, are in principle risk-averse. Our restructuring of dynamic utility theory presents a basic optimization scheme for sequential or dynamic decisions in both animals and humans.
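To make the inconsistency concrete, the sketch below scores the two textbook Allais lottery pairs under plain expected utility with one fixed, assumed utility function; the payoffs are the standard textbook values, not taken from the paper. No single fixed utility function reproduces the typical human pattern of choosing the sure gain in the first pair and the long shot in the second, which is exactly the paradox a wealth-dependent, loss-averse utility is meant to resolve.

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as [(probability, payoff), ...]."""
    return sum(p * u(x) for p, x in lottery)

# Textbook Allais lotteries (payoffs in millions), not from the paper.
A1 = [(1.00, 1)]                          # 1M for sure
B1 = [(0.10, 5), (0.89, 1), (0.01, 0)]    # mostly 1M, small chance of 5M or nothing
A2 = [(0.11, 1), (0.89, 0)]
B2 = [(0.10, 5), (0.90, 0)]

# One illustrative, strongly concave utility of the payoff (an assumption).
u = lambda x: 1 - math.exp(-5 * x)

print(expected_utility(A1, u) > expected_utility(B1, u))   # True: prefers the sure 1M
print(expected_utility(B2, u) > expected_utility(A2, u))   # False: also prefers the safer A2
# Most people choose A1 *and* B2, but no single fixed utility can rank both
# pairs that way -- this is the Allais paradox discussed above.
```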

3.
Humans and animals face decision tasks in an uncertain multi-agent environment where an agent's strategy may change in time due to the co-adaptation of the others' strategies. The neuronal substrate and the computational algorithms underlying such adaptive decision making, however, are largely unknown. We propose a population coding model of spiking neurons with a policy gradient procedure that successfully acquires optimal strategies for classical game-theoretical tasks. The suggested population reinforcement learning reproduces data from human behavioral experiments for the blackjack and the inspector game. It performs optimally according to a pure (deterministic) and mixed (stochastic) Nash equilibrium, respectively. In contrast, temporal-difference (TD) learning, covariance learning, and basic reinforcement learning fail to perform optimally for the stochastic strategy. Spike-based population reinforcement learning, shown to follow the stochastic reward gradient, is therefore a viable candidate to explain automated decision learning of a Nash equilibrium in two-player games.
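The spiking population model itself is beyond a short sketch, but the policy-gradient principle it builds on can be illustrated with a plain REINFORCE update for a softmax policy in a 2x2 matrix game against a fixed mixed opponent; the payoff matrix, opponent strategy, and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 payoff matrix for the learner (rows) vs. opponent (columns).
payoff = np.array([[1.0, -1.0],
                   [-1.0, 1.0]])          # matching-pennies-style rewards
opponent = np.array([0.7, 0.3])           # fixed mixed strategy of the opponent

theta = np.zeros(2)                       # policy parameters (softmax logits)
lr = 0.05

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(5000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)               # learner samples an action
    b = rng.choice(2, p=opponent)         # opponent plays its fixed strategy
    r = payoff[a, b]                      # reward for this round
    # REINFORCE: ascend the reward-weighted gradient of log pi(a).
    grad_logp = -pi
    grad_logp[a] += 1.0
    theta += lr * r * grad_logp

print(softmax(theta))  # concentrates on action 0, the best response to (0.7, 0.3)
```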

4.
There is tremendous scientific and medical interest in the use of biomarkers to better facilitate medical decision making. In this article, we present a simple framework for assessing the predictive ability of a biomarker. The methodology requires the use of techniques from a subfield of survival analysis termed semi-competing risks; results are presented to make the article self-contained. As we show in the article, one natural interpretation of the semi-competing risks model is as a modification of the classical risk-set approach to survival analysis that is more germane to medical decision making. A crucial parameter for evaluating biomarkers is the predictive hazard ratio, which is different from the usual hazard ratio from Cox regression models for right-censored data. This quantity will be defined; its estimation, inference, and adjustment for covariates will be discussed. Aspects of causal inference related to these procedures will also be described. The methodology is illustrated with an evaluation of serum albumin in terms of predicting death in patients with primary biliary cirrhosis.

5.
In candidate gene association studies, several elementary hypotheses are usually tested simultaneously using one particular set of data. The data normally consist of partly correlated SNP information. Every SNP can be tested for association with the disease, e.g., using the Cochran-Armitage test for trend. To account for the multiplicity of the test situation, different types of multiple testing procedures have been proposed. The question arises whether procedures that take into account the discreteness of the situation show a benefit, especially in the case of correlated data. We empirically evaluate several different multiple testing procedures via simulation studies using simulated correlated SNP data. We analyze FDR and FWER controlling procedures, special procedures for discrete situations, and the minP resampling-based procedure. Within the simulation study, we examine a broad range of different gene data scenarios. We show that the main difference in the varying performance of the procedures is due to sample size. In small sample size scenarios, the minP resampling procedure, though controlling the stricter FWER, even had more power than the classical FDR controlling procedures. In contrast, FDR controlling procedures led to more rejections in higher sample size scenarios.
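A compressed illustration of the two families compared above, under simplifying assumptions: correlated Gaussian covariates stand in for SNP genotypes and per-covariate z-tests replace the Cochran-Armitage trend test; all sample sizes and effect values are made up. The single-step minP adjustment is obtained by permuting case/control labels.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Stand-in for correlated SNP data: 20 correlated covariates, 50 cases vs
# 50 controls, with a true shift on the first two covariates (illustrative).
n, m = 50, 20
cov = 0.6 * np.ones((m, m)) + 0.4 * np.eye(m)
cases = rng.multivariate_normal(np.r_[0.8, 0.8, np.zeros(m - 2)], cov, size=n)
controls = rng.multivariate_normal(np.zeros(m), cov, size=n)

def pvals(x, y):
    """Per-covariate two-sample z-test p-values (normal approximation)."""
    d = x.mean(0) - y.mean(0)
    se = np.sqrt(x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
    z = np.abs(d / se)
    return np.array([1.0 - erf(v / sqrt(2)) for v in z])

p_obs = pvals(cases, controls)

# minP: permute case/control labels, record the minimum p-value each time,
# and adjust each observed p-value against that null distribution (controls FWER).
pooled = np.vstack([cases, controls])
min_null = np.empty(2000)
for b in range(2000):
    idx = rng.permutation(2 * n)
    min_null[b] = pvals(pooled[idx[:n]], pooled[idx[n:]]).min()
minp_adj = np.array([(min_null <= p).mean() for p in p_obs])

# Benjamini-Hochberg adjusted p-values for comparison (controls FDR).
order = np.argsort(p_obs)
bh_sorted = np.minimum.accumulate((p_obs[order] * m / np.arange(1, m + 1))[::-1])[::-1]
bh = np.empty(m)
bh[order] = np.minimum(bh_sorted, 1.0)

print("rejected by minP (FWER 0.05):", np.flatnonzero(minp_adj <= 0.05))
print("rejected by BH   (FDR  0.05):", np.flatnonzero(bh <= 0.05))
```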

6.
Determination of the relative gene order on chromosomes is of critical importance in the construction of human gene maps. In this paper we develop a sequential algorithm for gene ordering. We start by comparing three sequential procedures for ordering three genes, based on Bayesian posterior probabilities, the maximum-likelihood ratio, and the minimal recombinant class. In the second part of the paper we extend the sequential procedure based on posterior probabilities to the general case of g genes. We present a theorem stating that the predicted average probability of committing a decision error, associated with a Bayesian sequential procedure that accepts the hypothesis of a gene-order configuration with posterior probability equal to or greater than π*, is smaller than 1 − π*. This theorem holds irrespective of the number of genes, the genetic model, and the source of genetic information. The theorem is an extension of a classical result of Wald concerning the sum of the actual and the nominal error probabilities in the sequential probability ratio test of two hypotheses. A stepwise strategy for ordering a large number of genes, with control over the decision-error probabilities, is discussed. An asymptotic approximation is provided that facilitates calculation, with existing gene-mapping software, of the posterior probabilities of an order and of the error probabilities. We illustrate with some simulations that the stepwise ordering is an efficient procedure.
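In symbols (notation mine), the decision rule and its error bound described above read:

```latex
\[
  \text{accept the order } H_k \ \text{as soon as}\ \Pr\!\left(H_k \mid \text{data}\right) \ge \pi^{*}
  \quad\Longrightarrow\quad
  \text{(predicted average)}\ \Pr\!\left(\text{decision error}\right) < 1 - \pi^{*},
\]
```

with the bound holding irrespective of the number of genes, the genetic model, and the source of genetic information.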

7.
Risk-based decision making requires that decision makers and stakeholders be informed of all risks that are potentially significant and relevant to the decision. The International Programme on Chemical Safety of the World Health Organization has developed a framework for integrating the assessment of human health and ecological risks. However, other types of integration are needed to support particular environmental decisions: integration of exposure and effects; of multiple chemicals and other hazardous agents; of multiple routes of exposure; of multiple endpoints, receptors, and spatial and temporal scales; of a product's life cycle; of management alternatives; and of socioeconomics with risk assessment. Inclusion of all these factors in an integrated assessment could lead to paralysis by analysis. It is therefore important that assessors be cognizant of the decision process and that decision makers and those who will influence the decision (stakeholders) be involved in planning the assessment, to ensure that the degree of integration is necessary and sufficient.

8.
If the aim of an LCA is to support decisions or to generate and evaluate ideas for future decisions, the allocation procedure should generally be effect-oriented rather than cause-oriented. It is important that the procedure be acceptable to decision makers expected to use the LCA results. It is also an advantage if the procedure is easy to apply. Applicability appears to be in conflict with accurate reflection of effect-oriented causalities. To make LCA a more efficient tool for decision support, a range of feasible allocation procedures that reflect the consequences of inflows and outflows of cascade materials is required.

9.
Korol A, Frenkel Z, Cohen L, Lipkin E, Soller M. Genetics. 2007;176(4):2611-2623.
Selective DNA pooling (SDP) is a cost-effective means for an initial scan for linkage between marker and quantitative trait loci (QTL) in suitable populations. The method is based on scoring marker allele frequencies in DNA pools from the tails of the population trait distribution. Various analytical approaches have been proposed for QTL detection using data on multiple families with SDP analysis. This article presents a new experimental procedure, fractioned-pool design (FPD), aimed at increasing the reliability of SDP mapping results by "fractioning" the tails of the population distribution into independent subpools. FPD is a conceptual and structural modification of SDP that allows, for the first time, the use of permutation tests for QTL detection rather than relying on presumed asymptotic distributions of the test statistics. For family and cross mapping designs we propose a spectrum of new tools for QTL mapping in FPD that were previously possible only with individual genotyping. These include: joint analysis of multiple families and multiple markers across a chromosome, even when the marker loci are only partly shared among families; detection of families segregating (heterozygous) for the QTL; estimation of confidence intervals for the QTL position; and analysis of multiple linked QTL. These new advantages are of special importance for pooling analysis with SNP chips. Combining SNP microarray analysis with DNA pooling can dramatically reduce the cost of screening large numbers of SNPs on large samples, making chip technology readily applicable for genomewide association mapping in humans and farm animals. This extension, however, will require additional, nontrivial development of FPD analytical tools.
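The permutation logic that the independent subpools make possible can be sketched directly; the subpool counts, the allele-frequency readings, and the choice of test statistic (difference of tail means) below are illustrative assumptions, not the authors' exact FPD statistics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative marker allele frequencies scored in independent subpools drawn
# from the two tails of the trait distribution (5 subpools per tail).
high_tail = np.array([0.62, 0.58, 0.65, 0.60, 0.63])
low_tail  = np.array([0.48, 0.52, 0.50, 0.47, 0.51])

observed = high_tail.mean() - low_tail.mean()

# Permutation test: under H0 (no linked QTL) the subpool labels are exchangeable,
# so reshuffling them yields the null distribution of the tail difference.
pooled = np.concatenate([high_tail, low_tail])
n_high = len(high_tail)
perm = np.empty(10000)
for b in range(10000):
    idx = rng.permutation(len(pooled))
    perm[b] = pooled[idx[:n_high]].mean() - pooled[idx[n_high:]].mean()

p_value = (np.abs(perm) >= abs(observed)).mean()
print(f"observed tail difference = {observed:.3f}, permutation p = {p_value:.4f}")
```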

10.
The risk assessment process is a critical function for deployment toxicology research. It is essential to decision making about establishing risk reduction procedures and formulating appropriate exposure levels to protect naval personnel from potentially hazardous chemicals encountered in military settings, exposures that could otherwise reduce operational readiness. These decisions must be based on quality data from well-planned laboratory animal studies that guide the judgments resulting in effective risk characterization and risk management. Risk assessment in deployment toxicology essentially uses the same principles as civilian risk assessment, but adds activities essential to the military mission, including intended and unintended exposure to chemicals and chemical mixtures. Risk assessment and Navy deployment toxicology data are integrated into a systematic and well-planned approach to the organization of scientific information. The purpose of this paper is to outline the analytical framework used to develop strategies to protect the health of deployed Navy forces.

11.
The classical multiple testing model remains an important practical area of statistics, with new approaches still being developed. In this paper we develop a new multiple testing procedure inspired by a method sometimes used in a problem with a different focus, namely the problem of inference after model selection. We note that solutions to that problem are often obtained through a penalized likelihood function; a classic example is the Bayesian information criterion (BIC). In this paper we construct a generalized BIC method and evaluate its properties as a multiple testing procedure. The procedure is applicable to a wide variety of statistical models, including regression, contrasts, treatment versus control, change point, and others. Numerical work indicates that, in particular for sparse models, the new generalized BIC would be preferred over existing multiple testing procedures.
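As a rough illustration of using a BIC-type penalized criterion as a multiple testing device, the toy many-means example below rejects a null hypothesis exactly when including the corresponding mean lowers BIC; the data, the penalty choice, and the per-coordinate decoupling are simplifying assumptions and are not the generalized BIC developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy many-means problem: m groups, n replicates each, unit variance;
# test H0_i: mu_i = 0 for every group (all numbers are illustrative).
m, n = 200, 10
mu = np.zeros(m)
mu[:10] = 1.2                  # a sparse set of true signals
y = rng.normal(loc=mu[:, None], scale=1.0, size=(m, n))
ybar = y.mean(axis=1)

# With unit variance, -2*log-likelihood drops by n * ybar_i**2 when mu_i is
# freed, at the cost of one extra parameter, i.e. a BIC penalty of
# log(total sample size).  Because the coordinates decouple, the
# BIC-minimizing model rejects H0_i exactly when n * ybar_i**2 > log(m * n).
penalty = np.log(m * n)
reject = n * ybar**2 > penalty

print("rejections:", np.flatnonzero(reject))
print("true signals are indices 0..9")
```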

12.
Stochastic dynamic programming (SDP) or Markov decision processes (MDP) are increasingly being used in ecology to find the best decisions over time and under uncertainty, so that the chance of achieving an objective is maximised. To date, few programs are available to solve SDP/MDP. We present MDPtoolbox, a multi-platform set of functions to solve Markov decision problems (MATLAB, GNU Octave, Scilab and R). MDPtoolbox provides state-of-the-art, ready-to-use algorithms to solve a wide range of MDPs. MDPtoolbox is easy to use, freely available and has been continuously improved since 2004. We illustrate how to use MDPtoolbox on a dynamic reserve design problem.
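MDPtoolbox ships ready-made solvers, but the value-iteration computation at their core is small enough to sketch in plain NumPy; the two-state "reserve" toy problem below (states degraded/intact, actions do nothing/protect) and all of its transition and reward numbers are invented for illustration and are not the paper's dynamic reserve design example.

```python
import numpy as np

# Toy MDP: 2 states (0 = degraded, 1 = intact), 2 actions (0 = do nothing,
# 1 = protect).  P[a, s, s'] are transition probabilities, R[s, a] rewards.
P = np.array([
    [[0.9, 0.1],     # do nothing: degraded mostly stays degraded
     [0.4, 0.6]],    #             intact degrades with probability 0.4
    [[0.6, 0.4],     # protect:    degraded recovers with probability 0.4
     [0.1, 0.9]],    #             intact mostly stays intact
])
R = np.array([
    [0.0, -1.0],     # degraded: no reward; protecting costs 1
    [2.0,  1.0],     # intact:   reward 2, or 1 net of the protection cost
])
gamma = 0.95

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup: Q[s, a] = R[s, a] + gamma * E[V(s') | s, a]
    Q = R + gamma * (P @ V).T
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)       # 0 = do nothing, 1 = protect, per state
print("optimal value per state: ", V)
print("optimal action per state:", policy)  # with these numbers: protect in both states
```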

13.
Graubard BI, Fears TR, Gail MH. Biometrics. 1989;45(4):1053-1071.
We consider population-based case-control designs in which controls are selected by one of three cluster sampling plans from the entire population at risk. The effects of cluster sampling on classical epidemiologic procedures are investigated, and appropriately modified procedures are developed. In particular, modified procedures for testing the homogeneity of odds ratios across strata, and for estimating and testing a common odds ratio are presented. Simulations that use the data from the 1970 Health Interview Survey as a population suggest that classical procedures may be fairly robust in the presence of cluster sampling. A more extreme example based on a mixed multinomial model clearly demonstrates that the classical Mantel-Haenszel (1959, Journal of the National Cancer Institute 22, 719-748) and Woolf-Haldane tests of no exposure effect may have sizes exceeding nominal levels and confidence intervals with less than nominal coverage under an alternative hypothesis. Classical estimates of odds ratios may also be biased with non-self-weighting cluster samples. The modified procedures we propose remedy these defects.
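For reference, the classical Mantel-Haenszel common odds ratio that the abstract says can be biased under non-self-weighting cluster samples is computed as follows; the stratum counts are made-up illustrative numbers.

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel common odds ratio across strata.

    Each stratum is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls.
    """
    num = den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Illustrative stratified case-control counts (not from the paper).
strata = [(20, 80, 10, 90),
          (35, 65, 20, 80),
          (15, 35, 10, 40)]
print(f"MH common odds ratio: {mantel_haenszel_or(strata):.2f}")
```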

14.
15.
Modeling plays a major role in policy making, especially for infectious disease interventions, but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques, namely iterative surrogate modeling and model-guided experimentation, to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate, reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. The insight provided is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be explored readily at no computational expense, and can also be used as emulators to improve rapid policy making in various settings.
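The surrogate-modeling loop described above can be sketched with stand-ins: a toy analytic "simulator" and a quadratic response surface fitted by least squares in place of symbolic regression; the function, the inputs (coverage, efficacy), and the sample sizes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

def expensive_simulator(coverage, efficacy):
    """Toy stand-in for an individual-based epidemic model: returns a
    cumulative attack rate that falls with vaccination coverage and efficacy."""
    protected = coverage * efficacy
    noise = 0.02 * rng.normal(size=np.shape(protected))
    return np.clip(0.6 * (1 - protected) ** 2 + noise, 0, 1)

# 1. Sample the input space and run the (expensive) model on those points.
X = rng.uniform(size=(60, 2))                      # columns: coverage, efficacy
y = expensive_simulator(X[:, 0], X[:, 1])

# 2. Fit a cheap surrogate -- here a quadratic response surface via least squares
#    (the paper uses symbolic regression with automatic feature selection instead).
def features(X):
    c, e = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), c, e, c * e, c**2, e**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Explore the surrogate at no simulation cost, e.g. scan coverage at fixed efficacy.
grid = np.column_stack([np.linspace(0, 1, 11), np.full(11, 0.8)])
print(np.round(features(grid) @ coef, 3))          # predicted attack rate vs coverage
```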

16.
Protocols for genomic DNA extraction from plants are generally lengthy, since they require that tissues be ground in liquid nitrogen, followed by a precipitation step, washing and drying of the DNA pellet, etc. This represents a major challenge, especially when several hundred samples must be screened or analyzed within a working day. There is therefore a need for a rapid and simple procedure that produces DNA of a quality suitable for various analyses. Here, we describe a time- and cost-efficient protocol for genomic DNA isolation from plants suitable for all routine genetic screening and analyses. The protocol is free from hazardous reagents and therefore safe to be executed by non-specialists. With this protocol, more than 100 genomic DNA samples can be extracted manually within a working day, making it a promising alternative for genetic studies and large-scale genomic screening projects.

17.
Triacylglycerol (TAG) is a major storage reserve in many plant seeds. We previously identified a TAG lipase mutant called sugar-dependent1 (sdp1) that is impaired in TAG hydrolysis following Arabidopsis (Arabidopsis thaliana) seed germination (Eastmond, 2006). The aim of this study was to identify additional lipases that account for the residual TAG hydrolysis observed in sdp1. Mutants were isolated in three candidate genes (SDP1-LIKE [SDP1L], ADIPOSE TRIGLYCERIDE LIPASE-LIKE, and COMPARATIVE GENE IDENTIFIER-58-LIKE). Analysis of double, triple, and quadruple mutants showed that SDP1L is responsible for virtually all of the residual TAG hydrolysis present in sdp1 seedlings. Oil body membranes purified from sdp1 sdp1L seedlings were deficient in TAG lipase activity but could still hydrolyze di- and monoacylglycerol. SDP1L is expressed less strongly than SDP1 in seedlings. However, SDP1L could partially rescue TAG breakdown in sdp1 seedlings when expressed under the control of the SDP1 or 35S promoters, and in vitro assays showed that both SDP1 and SDP1L can hydrolyze TAG in preference to diacylglycerol or monoacylglycerol. Seed germination was slowed in sdp1 sdp1L and postgerminative seedling growth was severely retarded. The frequency of seedling establishment was also reduced, but sdp1 sdp1L was not seedling lethal under normal laboratory growth conditions. Our data show that together SDP1 and SDP1L account for at least 95% of the rate of TAG hydrolysis in Arabidopsis seeds, and that this hydrolysis is important but not essential for seed germination or seedling establishment.

18.
Meta-analysis techniques, if applied appropriately, can provide a summary of the totality of evidence regarding an overall difference between a new treatment and a control group using data from multiple comparative clinical studies. The standard meta-analysis procedures, however, may not give a meaningful between-group difference summary measure or identify a meaningful patient population of interest, especially when the fixed-effect model assumption is not met. Moreover, a single between-group comparison measure without a reference value obtained from patients in the control arm would likely not be informative enough for clinical decision making. In this paper, we propose a simple, robust procedure based on a mixture population concept and provide a clinically meaningful group contrast summary for a well-defined target population. We use the data from a recent meta-analysis evaluating statin therapies with respect to the incidence of fatal stroke events to illustrate the issues associated with the standard meta-analysis procedures as well as the advantages of our simple proposal.
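For contrast with the mixture-population proposal, a standard inverse-variance meta-analysis summary (fixed-effect and DerSimonian-Laird random-effects) looks like the sketch below; the per-study log hazard ratios and standard errors are invented, not the statin data discussed in the paper.

```python
import numpy as np

# Illustrative per-study log hazard ratios and their standard errors.
log_hr = np.array([-0.10, 0.05, -0.25, -0.02, -0.15])
se     = np.array([ 0.12, 0.20,  0.15,  0.10,  0.18])

# Fixed-effect model: inverse-variance weighted average.
w = 1 / se**2
fixed = np.sum(w * log_hr) / np.sum(w)
fixed_se = np.sqrt(1 / np.sum(w))

# DerSimonian-Laird random-effects model: add a between-study variance tau^2.
q = np.sum(w * (log_hr - fixed) ** 2)
tau2 = max(0.0, (q - (len(log_hr) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (se**2 + tau2)
random_eff = np.sum(w_re * log_hr) / np.sum(w_re)

print(f"fixed-effect HR:   {np.exp(fixed):.3f} (SE of log HR {fixed_se:.3f})")
print(f"random-effects HR: {np.exp(random_eff):.3f}, tau^2 = {tau2:.4f}")
```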

19.
A procedure for the assay of antibodies in sera based on the application of the antigen as a spot to nitrocellulose filters is described. The method has the merit of being simpler in operation and more sensitive than comparable existing procedures. Applications for screening supernatants of hybridomas making monoclonal antibodies, and the use of such antibodies in the determination of the tissue distribution of the corresponding antigens, are described. An application for the screening of human pathological sera for multiple antibodies in one operation is also described.

20.
The development of oncology drugs progresses through multiple phases, and after each phase a decision is made about whether to move a molecule forward. Early phase efficacy decisions are often made on the basis of single-arm studies, using a set of rules to define whether the tumor improves ("responds"), remains stable, or progresses (Response Evaluation Criteria in Solid Tumors [RECIST]). These decision rules implicitly assume some form of surrogacy between tumor response and long-term endpoints such as progression-free survival (PFS) or overall survival (OS). With the emergence of new therapies, for which the link between RECIST tumor response and long-term endpoints is either not yet accessible or weaker than with classical chemotherapies, tumor response-based rules may not be optimal. In this paper, we explore the use of a multistate model for decision making based on single-arm early phase trials. The multistate model accounts for more information than the simple RECIST response status, namely the time to response, the duration of response, the PFS time, and the time to death. We propose to base the efficacy decision on the OS hazard ratio (HR) comparing historical controls to data from the experimental treatment, with the latter predicted from a multistate model based on early phase data with limited survival follow-up. Using two case studies, we illustrate the feasibility of estimating such an OS HR. We argue that, in the presence of limited follow-up and small sample size, and under realistic assumptions within the multistate model, the OS prediction is acceptable and may lead to better early decisions during the development of a drug.
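A minimal sketch of the multistate idea: a stable → response → progression → death model with constant (exponential) transition hazards, used to simulate OS for the experimental arm and compare it against an assumed historical-control hazard. All rates, the exponential assumption, the sample size, and the crude events-per-follow-up HR summary are illustrative simplifications of the paper's approach.

```python
import numpy as np

rng = np.random.default_rng(11)

# Illustrative transition hazards (per month) for states
# stable -> {response, progression, death}, response -> {progression, death},
# progression -> death.  Purely made-up numbers.
rates = {
    "stable":      {"response": 0.10, "progression": 0.06, "death": 0.01},
    "response":    {"progression": 0.04, "death": 0.005},
    "progression": {"death": 0.08},
}

def simulate_os(n):
    """Simulate overall survival times by walking through the multistate model."""
    os_times = np.empty(n)
    for i in range(n):
        state, t = "stable", 0.0
        while state != "death":
            # Competing exponential clocks: the earliest one fires.
            draws = {s: rng.exponential(1 / r) for s, r in rates[state].items()}
            state = min(draws, key=draws.get)
            t += draws[state]
        os_times[i] = t
    return os_times

os_exp = simulate_os(500)

# Crude hazard-ratio summary under an exponential working model:
# experimental hazard = events / total follow-up, compared with a historical
# control hazard (assumed known here purely for illustration).
hazard_exp = len(os_exp) / os_exp.sum()
hazard_hist = 0.05                         # assumed historical-control hazard per month
print(f"predicted OS hazard ratio (experimental vs historical): {hazard_exp / hazard_hist:.2f}")
```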
