1.
Assessing risks involves developing predictive mathematical models, using interpretations of data that are based on scientific assumptions or theories and on knowledge of how the data were created. The predictions are used for developing strategies that affect many people in society. Often, it is sufficient that the models used are justifiable by a well-accepted set of assumptions or theories, reflecting the state-of-the-art science at the time. However, this does not ensure that the "best" decision would be made, nor does it ensure that the decision processes would be fair by ensuring that concerned and affected individuals would be able to participate, effectively presenting arguments on their own behalf. Because of these concerns, procedures of risk analysis, including the management of the process, have been written about, for example, in a National Research Council publication (NRC 1996, Understanding Risk: Informing Decisions in a Democratic Society), with the intention of getting stakeholders (interested participants) more involved in the risk analysis process. This publication suggests that Risk Characterization be expanded to include the active participation of stakeholders. Such an expansion would affect the risk assessor's approach toward science compared to the present approach, as implied in the seminal NRC (1983) publication, Risk Assessment in the Federal Government: Managing the Process. Both of these NRC publications have had great influence on the development of risk analysis management and policy in the United States and elsewhere. Subsequent risk assessment guidance documents have generally relied heavily on these publications, but have focused mainly on managerial attitudes (or policy) toward the uncertainty that is inherent in risk assessment and toward communicating to the public the risk assessment conclusions and the decisions made from them. Unlike NRC (1996), subsequent documents have not focused on the risk assessors' attitude toward scientific inference, which would better help ensure that risk assessments contain the type of information that could be used to empower stakeholders. Thus, in this Perspective article I focus on the two NRC "foundation documents," identifying and contrasting two types of approaches toward science, one narrow and the other expansive. The latter approach is designed to increase stakeholders' involvement more than the former. The features of the expansive approach include a contemplative method toward science, in which the risk assessor does not express opinions or take a stand regarding the scientific material, but rather considers many possibilities, presents discussions that include direct challenges to assumptions, and uses falsification principles for excluding theories.
2.
How may we choose between conflicting hypotheses when carrying out a scientific test? Bayesian decision theory offers a tool for analysing what choosing actually is, preventing us either from giving too much weight to rational components or from setting aside scientists' personal interests and prejudices. Giere's cognitive approach to the problem defines the Minimally Open-Minded Scientist's approach as the right one for dealing with the choice dilemma. This approach is used here to give an opinion on the status of the "Orce Man".
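As an aside on the mechanics, the following minimal Python sketch shows how Bayesian decision theory frames such a choice: posterior degrees of belief are obtained from Bayes' theorem, and the hypothesis whose acceptance maximizes expected utility is selected. The priors, likelihoods, and utility values are purely illustrative assumptions and are not taken from the paper.

# A minimal sketch of Bayesian decision theory for choosing between two
# conflicting hypotheses. All numbers below are illustrative assumptions.
priors = {"H1": 0.5, "H2": 0.5}            # prior degrees of belief
likelihoods = {"H1": 0.30, "H2": 0.05}     # P(observed evidence | hypothesis)

# Posterior degrees of belief via Bayes' theorem
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posteriors = {h: priors[h] * likelihoods[h] / evidence for h in priors}

# Utility of accepting a hypothesis given which hypothesis is actually true;
# the asymmetries could encode a scientist's personal stakes or prejudices.
utility = {
    ("accept H1", "H1"): 1.0, ("accept H1", "H2"): -2.0,
    ("accept H2", "H1"): -2.0, ("accept H2", "H2"): 1.0,
}

def expected_utility(action):
    return sum(utility[(action, h)] * posteriors[h] for h in posteriors)

best_action = max(["accept H1", "accept H2"], key=expected_utility)
print(posteriors, best_action)

With these particular numbers the posterior strongly favours H1, so "accept H1" maximizes expected utility; changing the utilities alone can flip the decision, which is the point about personal interests entering the choice.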
3.
There is over 60 years of discussion in the statistical literature concerning the misuse and limitations of null hypothesis significance tests (NHST). Based on the prevalence of NHST in biological anthropology research, it appears that the discipline is generally unaware of these concerns. The p values used in NHST are usually interpreted incorrectly. A p value is the probability of obtaining data at least as extreme as those observed, given that the null hypothesis is true. It should not be interpreted as the probability that the null hypothesis is true or as evidence for or against any specific alternative to the null hypothesis. P values are a function of both the sample size and the effect size, and therefore do not indicate whether the effect observed in the study is important, large, or small. P values have poor replicability in repeated experiments. The distribution of p values is continuous and varies from 0 to 1.0. The use of a cut-off, generally p ≤ 0.05, to separate significant from nonsignificant results is an arbitrary dichotomization of continuous variation. In 2016, the American Statistical Association issued a statement of principles regarding the misinterpretation of NHST, the first time in its 177-year history that it has done so for a specific statistical procedure. Effect sizes and confidence intervals, which can be calculated for any data used to calculate p values, provide more and better information about the tested hypotheses than p values and NHST.
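To make the contrast concrete, here is a small Python sketch (with simulated data; the group sizes, means, and random seed are assumptions for illustration, not values from any study) that reports the p value from a Welch t-test alongside an effect size (Cohen's d) and a 95% confidence interval for the mean difference, the quantities the abstract argues are more informative.

# A minimal sketch contrasting a p value with effect-size and confidence-interval
# summaries. The simulated measurements below are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=30)   # measurements from sample A
group_b = rng.normal(loc=11.0, scale=2.0, size=30)   # measurements from sample B

# NHST: Welch's t-test p value = P(data at least this extreme | null hypothesis true)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

# Effect size: Cohen's d with a pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

# 95% confidence interval for the mean difference (Welch degrees of freedom)
diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / group_a.size + group_b.var(ddof=1) / group_b.size)
df = se**4 / ((group_a.var(ddof=1) / group_a.size) ** 2 / (group_a.size - 1)
              + (group_b.var(ddof=1) / group_b.size) ** 2 / (group_b.size - 1))
ci_low, ci_high = stats.t.interval(0.95, df, loc=diff, scale=se)

print(f"p = {p_value:.3f}, d = {cohens_d:.2f}, "
      f"95% CI for difference = [{ci_low:.2f}, {ci_high:.2f}]")

The p value alone says nothing about the magnitude of the difference; the d and the interval do, which is the abstract's central recommendation.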
4.
We propose a structure for presenting risk assessments with the purpose of enhancing the transparency of the process by which scientific theories, and the models derived from them, are selected. The structure has two stages comprising seven steps, and the stages involve two types of theories, core and auxiliary, which need to be identified in order to explain and evaluate observations and predictions. Core theories are those that are "fundamental" to the phenomena being observed, whereas auxiliary theories are those that describe or explain the actual process of observing the phenomena. The formulation of a scientific theory involves three constitutive components or types of judgments: explanative, evaluative, and regulative or aesthetic, driven by reason. Two perspectives guided us in developing the proposed structure. (1) In a risk assessment, explanations based on notions of causality can be used as a tool for developing models and predictions of possible events outside the range of direct experience. The use of causality for the development of models is based on judgments, reflecting regulative or aesthetic conceptualizations of different phenomena and how they (should) fit together in the world. (2) Weight-of-evidence evaluation should be based on falsification principles for excluding models, rather than on validation or justification principles that select the best or nearly best-fitting models. Falsification entails discussion that identifies challenges to proposed models and reconciles apparent inconsistencies between models and data. Based on the discussion of these perspectives, the seven steps of the structure are: in the first stage, for core theories, (A) scientific concepts, (B) causality network, and (C) mathematical model; and in the second stage, for auxiliary theories, (D) data interpretation, (E) statistical model, (F) evaluation (weight of evidence), and (G) reconciliation, which includes the actual decision formulation.
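Purely as an illustration (this encoding is ours, not the authors'), the two-stage, seven-step structure can be written down as a simple Python data structure that a reporting template could iterate over; only the stage and step names below come from the abstract.

# A hypothetical encoding of the proposed two-stage, seven-step structure.
# Stage and step names follow the abstract; everything else is an assumption.
RISK_ASSESSMENT_STRUCTURE = {
    "Stage 1: core theories": {
        "A": "scientific concepts",
        "B": "causality network",
        "C": "mathematical model",
    },
    "Stage 2: auxiliary theories": {
        "D": "data interpretation",
        "E": "statistical model",
        "F": "evaluation (weight of evidence)",
        "G": "reconciliation (including the actual decision formulation)",
    },
}

for stage, steps in RISK_ASSESSMENT_STRUCTURE.items():
    for label, description in steps.items():
        print(f"{stage} - step ({label}): {description}")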
5.
In this paper we discuss the epistemological status of theories of evolution. A sharp distinction is made between the theory that species evolved from common ancestors along specified lines of descent (here called the theory of common descent) and the theories intended as causal explanations of evolution (e.g., Lamarck's and Darwin's theories). The theory of common descent permits a large number of predictions of new results that would be improbable without evolution. For instance, (a) phylogenetic trees have now been validated; (b) the order observed in fossils of new species discovered since Darwin's time could be predicted from the theory of common descent; and (c) owing to the theory of common descent, the degrees of similarity and difference in newly discovered properties of more or less related species could be predicted. Such observations can be regarded as attempts to falsify the theory of common descent. We conclude that the theory of common descent is an easily falsifiable, often tested, and still not falsified theory, which is the strongest predicate a theory in an empirical science can obtain. Theories intended as causal explanations of evolution can in principle be falsified, and Lamarck's theory has in fact been falsified. Several elements of Darwin's theory have been modified or falsified; new versions of a theory of evolution by natural selection are now the leading scientific theories of evolution. We have argued that the theory of common descent and Darwinism are ordinary, falsifiable scientific theories.
6.
Published justifications for weighting characters in parsimony analyses vary tremendously. Some authors argue for a posteriori weighting, some for a priori weighting, and those authors that rely on a falsificationist approach to systematics in particular argue against weighting. To reach a decision while following the falsificationist approach, one first has to investigate the conditions necessary for phylogenetic research to be possible as an empirical science sensu Popper. A concept of phylogenetic homology, together with a criterion of identity, is proposed that refers to the genealogical relations between individual organisms. From this concept a differentiation of the terms character and character state is proposed, defining each character as a single epistemological argument for the reconstruction of a unique transformation event. Synapomorphy is distinguished from homology by referring to the relationship between species instead of individual organisms; thus the set of all synapomorphies constitutes a subset of the set of all homologies. By examining the structure of characteristics during character analysis and hypothesizing the specific types of transformations responsible for having caused them, a specific degree of severity is assigned to each identity test, which in turn provides a specific degree of corroboration for every hypothesis that successfully passes the test. Since the congruence criterion tests hypotheses of synapomorphy against each other on the grounds of the degree of corroboration gained from the identity test, these different degrees of corroboration determine the specific weights given to characters and character-state transformations before the cladistic analysis. This provides a reasonable justification for an a priori weighting scheme within a falsificationist approach to phylogeny and demonstrates that its application is indispensable.
7.
A comparison is made among all the models proposed to explain the origin of the tRNA molecule. The conclusion reached is that, for the model predicting that the tRNA molecule originated through the assembly of two hairpin-like structures, molecular fossils have been found in the half-genes of the tRNAs of Nanoarchaeum equitans. These may be witnesses of the transition stage, predicted by the model, through which the evolution of the tRNA molecule passed, and they thus provide considerable corroboration for this model.
8.
9.
10.
The central challenge from the Precautionary Principle to statistical methodology is to help delineate (preferably quantitatively) the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful here, because lack of significance can be due either to uninformative data or to a genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes serve as a model for the Precautionary Principle in its ability to 'prove the null hypothesis.' Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the No Observed Adverse Effect Level (NOAEL), the Benchmark approach, and the "Hockey Stick" model. A particular problem concerns model uncertainty: these procedures usually assume that the class of models describing the dose-response relationship is known with certainty. This assumption is, however, often violated, perhaps particularly when epidemiological data form the basis of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of Bayesian model averaging might be a systematic version of this, but is this an arena for the Precautionary Principle to come into play?
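As a rough illustration of the model-averaging idea raised at the end of the abstract, the Python sketch below fits two competing dose-response models, a straight line and a "hockey stick" (threshold) model, to a hypothetical data set and combines them with BIC-based approximate Bayesian model averaging. The dose grid, the responses, and the use of BIC weights are assumptions made for the example; they do not reproduce any regulatory procedure.

# A minimal sketch of averaging two competing dose-response models with
# BIC-based approximate Bayesian model averaging. Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 1.0, 2.0, 4.0, 8.0, 16.0])
resp = np.array([0.1, 0.12, 0.11, 0.3, 0.8, 1.7])   # hypothetical responses

def linear(d, a, b):
    # response rises linearly from the lowest dose onward
    return a + b * d

def hockey_stick(d, a, b, tau):
    # flat background below the threshold tau, linear increase above it
    return a + b * np.clip(d - tau, 0.0, None)

def bic(model, p0):
    # least-squares fit, then BIC under a Gaussian error model
    params, _ = curve_fit(model, dose, resp, p0=p0, maxfev=10000)
    rss = np.sum((resp - model(dose, *params)) ** 2)
    n, k = len(dose), len(p0)
    return n * np.log(rss / n) + k * np.log(n)

bics = np.array([bic(linear, [0.0, 0.1]), bic(hockey_stick, [0.1, 0.1, 2.0])])
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()     # approximate posterior model probabilities
print(dict(zip(["linear", "hockey stick"], weights.round(3))))

The resulting weights could then be used to average predicted responses, or derived quantities such as a benchmark dose, across the two models instead of committing to a single dose-response class.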