Similar Literature
20 similar documents found.
1.
2.
Weight-of-evidence is the process by which multiple measurement endpoints are related to an assessment endpoint to evaluate whether significant risk of harm is posed to the environment. In this paper, a methodology is offered for reconciling or balancing multiple lines of evidence pertaining to an assessment endpoint. Weight-of-evidence is reflected in three characteristics of measurement endpoints: (a) the weight assigned to each measurement endpoint; (b) the magnitude of response observed in the measurement endpoint; and (c) the concurrence among outcomes of multiple measurement endpoints. First, weights are assigned to measurement endpoints based on attributes related to: (a) strength of association between assessment and measurement endpoints; (b) data quality; and (c) study design and execution. Second, each measurement endpoint is evaluated with respect to whether it indicates the presence or absence of harm and the magnitude of the observed response. Third, concurrence among measurement endpoints is evaluated by plotting the findings of the two preceding steps on a matrix for each measurement endpoint evaluated. The matrix allows easy visual examination of agreements or divergences among measurement endpoints, facilitating interpretation of the collection of measurement endpoints with respect to the assessment endpoint. A qualitative adaptation of the weight-of-evidence approach is also presented.
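A minimal sketch in Python of the concurrence-matrix idea from this abstract: measurement endpoints are tabulated by assigned weight and by the direction and magnitude of their response, so agreements and divergences are visible at a glance. All endpoint names, weights, and responses below are hypothetical illustrations, not data from the paper.

```python
# Hypothetical weight-of-evidence table; values are illustrative only.
import pandas as pd

endpoints = pd.DataFrame({
    "measurement_endpoint": ["benthic community survey",
                             "sediment toxicity test",
                             "tissue residue analysis"],
    "weight": ["high", "medium", "low"],      # from association/quality/design attributes
    "harm_indicated": [True, True, False],    # presence or absence of harm
    "magnitude": ["high", "low", "low"],      # magnitude of the observed response
})

# Cross-tabulate weight against response to expose concurrence or divergence
matrix = pd.crosstab(index=endpoints["weight"],
                     columns=[endpoints["harm_indicated"], endpoints["magnitude"]])
print(matrix)
```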

3.
Most existing phase II clinical trial designs focus on conventional chemotherapy with binary tumor response as the endpoint. The advent of novel therapies, such as molecularly targeted agents and immunotherapy, has made the endpoint of phase II trials more complicated, often involving ordinal, nested, and coprimary endpoints. We propose a simple and flexible Bayesian optimal phase II predictive probability (OPP) design that handles binary and complex endpoints in a unified way. The Dirichlet-multinomial model is employed to accommodate different types of endpoints. At each interim, given the observed interim data, we calculate the Bayesian predictive probability of success, should the trial continue to the maximum planned sample size, and use it to make the go/no-go decision. The OPP design controls the type I error rate, maximizes power or minimizes the expected sample size, and is easy to implement, because the go/no-go decision boundaries can be enumerated and included in the protocol before the onset of the trial. Simulation studies show that the OPP design has satisfactory operating characteristics.
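For the simplest case of a single binary endpoint, the Dirichlet-multinomial model reduces to a beta-binomial, and the interim predictive probability can be computed directly. The sketch below assumes illustrative values for the prior, the null response rate p0, the posterior success threshold, and the sample sizes; the paper's calibrated go/no-go boundaries would be obtained by optimizing over such quantities.

```python
# Bayesian predictive probability for a binary endpoint (beta-binomial case).
# Prior, thresholds, and sample sizes are illustrative assumptions.
from scipy.stats import beta, betabinom

def predictive_probability(x, n, n_max, p0=0.2, theta=0.9, a=1.0, b=1.0):
    """P(final analysis declares success | x responses in n patients)."""
    m = n_max - n                        # patients yet to be enrolled
    post_a, post_b = a + x, b + n - x    # posterior after interim data
    pp = 0.0
    for y in range(m + 1):               # possible future responses
        # success at final: posterior P(response rate > p0) exceeds theta
        if beta.sf(p0, post_a + y, post_b + m - y) > theta:
            pp += betabinom.pmf(y, m, post_a, post_b)
    return pp

# Interim with 8 responses in 20 patients, maximum N = 40:
print(predictive_probability(x=8, n=20, n_max=40))  # go if above a chosen cutoff
```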

4.
Surrogate marker evaluation from an information theory perspective
Alonso A, Molenberghs G. Biometrics 2007, 63(1):180-186
The last 20 years have seen much work in the area of surrogate marker validation, partly devoted to framing the evaluation in a multitrial framework, leading to definitions in terms of the quality of trial- and individual-level association between a potential surrogate and a true endpoint (Buyse et al., 2000, Biostatistics 1, 49-67). A drawback is that different settings have led to different measures at the individual level. Here, we use information theory to create a unified framework, leading to a definition of surrogacy with an intuitive interpretation, offering interpretational advantages, and applicable in a wide range of situations. Our method provides better insight into the chances of finding a good surrogate endpoint in a given situation. We further show that some of the previous proposals follow as special cases of our method. We illustrate our methodology using data from a clinical study in psychiatry.
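One concrete instance of such an information-theoretic measure is a likelihood-reduction factor of the form R_h^2 = 1 - exp(-G^2/n), where G^2 is the likelihood-ratio statistic for adding the surrogate to a model for the true endpoint. The sketch below computes this on simulated data for a binary true endpoint; the multitrial structure and the paper's exact estimator are omitted, so treat it purely as an illustration of the idea.

```python
# Information-theoretic association measure R_h^2 = 1 - exp(-G2/n) on
# simulated data; trial-level structure from the paper is omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
S = rng.normal(size=n)                                    # candidate surrogate
T = (S + rng.normal(size=n) > 0).astype(int)              # binary true endpoint

null = sm.Logit(T, np.ones((n, 1))).fit(disp=0)           # intercept only
full = sm.Logit(T, sm.add_constant(S)).fit(disp=0)        # intercept + surrogate

G2 = 2 * (full.llf - null.llf)        # likelihood-ratio statistic
R2_h = 1 - np.exp(-G2 / n)            # information gain mapped to [0, 1)
print(f"R2_h = {R2_h:.3f}")
```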

5.
Often a treatment is assessed by co-primary endpoints so that a comprehensive picture of the treatment effect can be obtained. Co-primary endpoints can be different medical assessments angled at different aspects of a disease and are therefore used collectively to strengthen the evidence for a treatment effect. If a treatment is ineffective, the chance of showing it to be effective on all co-primary endpoints is small. Therefore, it may not be necessary to require every co-primary endpoint to be statistically significant at the one-sided 0.025 level to control the error rate of wrongly approving an ineffective treatment. Rather, it is reasonable to allow some variation in the p-values within a range close to 0.025. In this paper, statistical methods are developed to derive decision rules that evaluate co-primary endpoints collectively. The decision rules control the error rate of wrongly accepting an ineffective treatment at the 0.025 level for a study, and at a slightly higher level for a treatment that works for all the co-primary endpoints except perhaps one. The decision rules also control the error rates for individual endpoints. Potential applications in clinical trials are presented.
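The conservatism that motivates these decision rules is easy to see by simulation: with two independent co-primary endpoints and an ineffective treatment, requiring both one-sided p-values to fall below 0.025 gives a type I error of about 0.025^2 = 0.000625, far below the nominal level. The sketch below uses illustrative settings, not the paper's decision boundaries.

```python
# Type I error of the "all endpoints significant at 0.025" rule for an
# ineffective treatment with two independent endpoints; settings illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200_000
# Under the null, one-sided p-values are Uniform(0, 1) for each endpoint
p1 = rng.uniform(size=n_trials)
p2 = rng.uniform(size=n_trials)

strict = np.mean((p1 < 0.025) & (p2 < 0.025))   # both must be significant
print(f"Type I error, both-significant rule: {strict:.5f}")  # about 0.000625
```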

6.
Gray RJ. Biometrika 1994, 81(3):527-539
This paper considers incorporating information on disease progression in the analysis of survival. A three-state model is assumed, with the distribution of each transition estimated separately. The distribution of survival following progression can depend on the time of progression. Kernel methods are used to give consistent estimators under general forms of dependence. The estimators for the individual transitions are then combined into an overall estimator of the survival distribution. A test statistic for equality of survival between treatment groups is proposed based on the tests of Pepe & Fleming (1989, 1991). In simulations the kernel method successfully incorporated dependence on the time of progression in some reasonable settings, but under extreme forms of dependence the tests had substantial bias. If survival beyond progression can be predicted fairly accurately, then gains in power over standard methods that ignore progression can be substantial, but the gains are smaller when survival beyond progression is more variable. The methodology is illustrated with an application to a breast cancer clinical trial.

7.
To adequately protect aquatic ecosystems from impact by anthropogenic perturbations, it is necessary to distinguish what is safe from what is not. This review examines approaches to this problem in relation to primary and secondary effects of pesticides. Understanding nutrient-plankton and plankton-plankton interrelationships on both spatial and temporal scales is important if secondary or indirect effects are to be assessed. Before defining or measuring a toxicity endpoint, consideration must be given to whether to use single-species or multispecies tests. Each has its strengths and weaknesses, and both are reviewed. In single-species testing, toxicity endpoints can be more clearly defined, but extrapolation of effects to an ecosystem is more difficult than with multispecies testing and can often lead to incorrect conclusions. Interpretation of multispecies testing results is challenging and requires numerical analysis techniques, including methods whose objectives are inference, classification, and ordination. Conceptual and fuzzy logic modelling techniques promise a solution to the interpretation of multispecies tests.

8.
The U.S. Environmental Protection Agency determined that one of the major impediments to the advancement and application of ecological risk assessment is doubt concerning appropriate assessment endpoints. The Agency's Risk Assessment Forum determined that the best solution to this problem was to define a set of generic ecological assessment endpoints (GEAEs). These are assessment endpoints applicable to a wide range of ecological risk assessments: they reflect the programmatic goals of the Agency, they apply to a wide array of environmental issues, and they may be estimated using existing assessment tools. They are not specifically defined for individual cases; some ad hoc elaboration by users is expected. The GEAEs are not exhaustive or mandatory. Although most of the Agency's ecological decisions have been based on organism-level effects, GEAEs are also defined for populations, ecosystems, and special places.

9.
The functional importance of bacteria and fungi in terrestrial systems is recognized widely. However, microbial population, community, and functional measurement endpoints change rapidly and across very short spatial scales. Measurement endpoints of microbes tend to be highly responsive to typical fluxes of temperature, moisture, oxygen, and many other noncontaminant factors. Functional redundancy across broad taxonomic groups enables wild swings in community composition without remarkable change in rates of decomposition or community respiration. Consequently, it is exceedingly difficult to relate specific microbial activities with indications of adverse and unacceptable environmental conditions. Moreover, changes in microbial processes do not necessarily result in consequences to plant and animal populations or communities, which in the end are the resources most commonly identified as those to be protected. Therefore, unless more definitive linkages are made between specific microbial effects and an adverse condition for typical assessment endpoint species, microbial endpoints will continue to have limited use in risk assessments; they will not drive the process as primary assessment endpoints.

10.
Toxicology studies the interactions of a chemical substance with individual organisms, whereas ecotoxicology is a multidisciplinary approach incorporating ecology and other disciplines, e.g. chemistry and microbiology, to determine the responses of individuals, populations, and whole ecosystems to stressors such as chemicals. We present the current status of toxicity testing in South Africa and offer a prognosis for its future, proposing a path forward for the development of ecotoxicology in South Africa and globally. Toxicity testing issues dealt with include the use of surrogate species as opposed to indigenous species, their comparative tolerances, and the selection of relevant endpoints as measures of toxicity. Ecotoxicological considerations need to address the following key ecological realities: tolerance (both physiological acclimation and genetic adaptation), trophic redundancies, resilience, compensation (e.g. density dependence), evolution, and recovery. We believe that predictive ecotoxicology will play a major role in the future management of ecosystems that are constantly changing, and that such management must be proactive to the point of intervention to create desired change, specifically the maintenance of ecosystem services.

11.
Recent developments proposed by Lin (1991), which allow for the sequential analysis of multivariate failure time data, are discussed in the context of clinical trials with marker process data. In particular, a marker-based response observed as a univariate failure time variable is considered both on its own and in combination with the primary failure time variable through global measures of treatment effect. A weighting scheme is introduced that is designed to make greater use of the marker process responses when they are consistent with the responses for the primary endpoint. Univariate analyses and analyses based on global test statistics are evaluated with respect to duration of study and type I and type II error rates for a variety of relative risk parameter configurations. The simulation study is designed to extend the work of Machado, Gail and Ellenberg (1990) to sequential trials and to assess the performance of the global test statistics in this context. The global measures are found to be preferable to the univariate primary endpoint analyses when the treatment affects all transition intensities in a consistent manner. Further, there are advantages over the univariate marker-based analysis when the treatment effects on some transition intensities are in opposite directions. Such a scenario remains problematic, however, with the relative error rates determined to a large extent by the baseline weighting scheme.
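As a rough illustration of the kind of global statistic involved, the sketch below combines two endpoint-specific standardized statistics with fixed weights under an assumed correlation. The weights, the correlation, and the combination rule are all illustrative assumptions; the paper's weighting scheme is adaptive and tied to the marker process, which is not reproduced here.

```python
# Weighted global statistic combining a primary-endpoint statistic z1 and a
# marker-based statistic z2; w1, w2, and rho are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def global_z(z1, z2, w1=0.7, w2=0.3, rho=0.4):
    """Weighted combination of two correlated standardized statistics."""
    num = w1 * z1 + w2 * z2
    var = w1**2 + w2**2 + 2 * w1 * w2 * rho   # variance of the combination
    return num / np.sqrt(var)

z = global_z(z1=2.1, z2=1.4)
print(f"global Z = {z:.2f}, one-sided p = {norm.sf(z):.4f}")
```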

12.
Primary sclerosing cholangitis is an enigmatic disease affecting the bile ducts, eventually leading to liver failure and necessitating liver transplantation in many cases. There is currently no therapy proven to halt disease progression. One reason for this is the lack of proper endpoints for measuring the effect of medical intervention on the course of the disease. Relevant clinical endpoints such as death or liver transplantation occur too infrequently in this orphan disease to be used as endpoints in phase 2 or 3 trials. It is therefore of utmost importance to identify appropriate surrogate endpoints that are reasonably likely to measure true clinical benefit. This article discusses a number of surrogate endpoints that are likely candidates to serve this role. This article is part of a Special Issue entitled: Cholangiocytes in Health and Disease, edited by Jesus Banales, Marco Marzioni, Nicholas LaRusso and Peter Jansen.

13.
14.
While epidemiological data typically contain a multivariate response and often also multiple exposure parameters, current methods for safe dose calculations, including the widely used benchmark approach, rely on standard regression techniques. In practice, dose-response modeling and calculation of the exposure limit are often based on the seemingly most sensitive outcome. However, this procedure ignores other available data, is inefficient, and fails to account for multiple testing. Instead, risk assessment could be based on structural equation models, which can accommodate both a multivariate exposure and a multivariate response function. Furthermore, such models allow for measurement error in the observed variables, which is a requirement for unbiased estimation of the benchmark dose. This methodology is illustrated with data on neurobehavioral effects in children prenatally exposed to methylmercury, where results based on standard regression models cause an underestimation of the true risk.
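To see why measurement error matters here, consider the naive benchmark-dose (BMD) calculation from a standard regression: exposure measured with error attenuates the estimated slope, which inflates BMD = BMR/|slope| and thus understates the risk. The sketch below simulates this effect; the data, the error model, and the benchmark response (BMR) are illustrative assumptions, and the structural equation modeling the paper advocates is not shown.

```python
# Naive BMD from a linear dose-response fit on error-prone exposure;
# all data and the BMR choice are simulated illustrations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300
true_dose = rng.lognormal(size=n)                              # latent exposure
observed_dose = true_dose * rng.lognormal(sigma=0.3, size=n)   # measurement error
response = -0.5 * true_dose + rng.normal(size=n)               # score deficit

fit = sm.OLS(response, sm.add_constant(observed_dose)).fit()
slope = fit.params[1]                 # attenuated toward zero by the error

bmr = 0.5                             # benchmark response: chosen score decrement
bmd = bmr / abs(slope)                # dose producing that decrement
print(f"naive BMD = {bmd:.2f} (biased upward, so risk is understated)")
```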

15.
We review microcosm toxicity tests with 12 chemical stresses and find that the relative sensitivity of certain endpoints is consistent across toxicant types. Changes in species composition occur at very low levels of chronic stress. Endpoints responding at increasing levels of stress are declines in species numbers relative to expected numbers, followed by decreased oxygen production and decreased total production. Other endpoints are quite sensitive in response to some toxicants but insensitive to others (e.g., autotrophic biomass). In addition, some endpoints respond unpredictably to stress, showing stimulation under some conditions and impairment under others. We compare our observations to the progressions of impact suggested by published whole-ecosystem experiments and speculate about a general ecosystem distress syndrome and the implications for choosing endpoints in both toxicity testing and monitoring.

16.
17.
We propose a method to construct simultaneous confidence intervals for a parameter vector by inverting a series of randomization tests. The randomization tests are facilitated by an efficient multivariate Robbins–Monro procedure that takes the correlation information of all components into account. The estimation method does not require any distributional assumption about the population other than the existence of second moments. The resulting simultaneous confidence intervals are not necessarily symmetric about the point estimate of the parameter vector but possess the property of equal tails in all dimensions. In particular, we present the construction for the mean vector of one population and for the difference between the mean vectors of two populations. Extensive simulations provide numerical comparisons with four other methods. We illustrate the application of the proposed method to testing bioequivalence with multiple endpoints on real data.
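The building block being inverted is a randomization test of a hypothesized parameter vector. The sketch below implements a sign-flipping randomization test of H0: mu = mu0 with a max-|t| statistic so that all components enter jointly; the Robbins–Monro search that inverts such tests into simultaneous confidence intervals is omitted. All data and settings are illustrative.

```python
# Sign-flipping randomization test of H0: mu = mu0 for a mean vector,
# using a max-|t| statistic; data and settings are illustrative.
import numpy as np

def max_t(data):
    n = data.shape[0]
    return np.max(np.abs(data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))))

def randomization_pvalue(x, mu0, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    centered = x - mu0                  # exchangeable under H0 with symmetry
    t_obs = max_t(centered)
    t_perm = np.array([
        max_t(centered * rng.choice([-1, 1], size=(x.shape[0], 1)))
        for _ in range(n_perm)          # flip each observation's sign jointly
    ])
    return np.mean(t_perm >= t_obs)

rng = np.random.default_rng(3)
x = rng.normal(loc=[0.3, 0.1], size=(40, 2))   # two endpoints, n = 40
print(randomization_pvalue(x, mu0=np.array([0.0, 0.0])))
```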

18.
19.
Risk assessment and uncertainty analysis are important tools for improving environmental decision making. However, their value is limited when the environmental endpoints assessed by scientists do not coincide with the publicly meaningful attributes that are of concern to decision makers. Approaches for addressing this disconnect are presented using examples from water quality assessment and management. Recommendations to scientists for maximizing the usefulness of uncertainty analysis are given.

20.
Two statistical methods for determining the precision of best-fit model parameters generated from chemical rate-of-release data are discussed. One method uses likelihood theory to estimate marginal confidence intervals and joint confidence regions for the release model parameters. The other uses Monte Carlo simulation to estimate statistical inferences for the release model parameters. Both methods were applied to a set of rate-of-release data generated using a field soil. The results of this evaluation indicate that the precision of F (the fraction of a chemical in a soil that is released quickly) is greater than the precision of k1 (the rate constant describing fast release), which in turn is greater than the precision of k2 (the rate constant describing slow release). This occurs because more data are taken during the time period governed by F and k1 than during the time period governed by F and k2. In general, estimates of F will be relatively precise when the ratio of k1 to k2 is large, estimates of k1 for soil/chemical matrices with a high F will be relatively precise, and estimates of k2 for soil/chemical matrices with a low F will be relatively precise, provided that sufficient time is allowed for full release.
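The Monte Carlo approach lends itself to a short sketch. Assuming the two-compartment release model implied by the abstract, released(t) = F(1 - e^(-k1 t)) + (1 - F)(1 - e^(-k2 t)), repeated fits to noisy synthetic data give the spread of each parameter estimate. The true values, noise level, and sampling times below are illustrative, not the paper's field-soil data.

```python
# Monte Carlo precision of F, k1, k2 under a two-compartment release model;
# true values, noise, and sampling times are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def release(t, F, k1, k2):
    # fast fraction F released at rate k1, slow fraction (1 - F) at rate k2
    return F * (1 - np.exp(-k1 * t)) + (1 - F) * (1 - np.exp(-k2 * t))

rng = np.random.default_rng(4)
t = np.array([0.25, 0.5, 1, 2, 4, 8, 16, 32, 64.0])   # sampling times (days)
true = (0.6, 2.0, 0.02)                               # F, k1, k2 with k1 >> k2

fits = []
for _ in range(500):                                  # Monte Carlo replicates
    y = release(t, *true) + rng.normal(scale=0.02, size=t.size)
    try:
        p, _ = curve_fit(release, t, y, p0=(0.5, 1.0, 0.05),
                         bounds=([0, 0, 0], [1, 10, 1]))
        fits.append(p)
    except RuntimeError:
        pass                                          # skip non-converged fits

fits = np.array(fits)
cv = fits.std(axis=0) / np.abs(fits.mean(axis=0))     # relative precision
print(f"CV: F={cv[0]:.3f}, k1={cv[1]:.3f}, k2={cv[2]:.3f}")
```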
