Similar Literature
1.
The regulation of human exposure to potentially carcinogenic chemicals constitutes society's most important use of animal carcinogenicity data. Environmental contaminants of greatest concern within the USA are listed in the Environmental Protection Agency's (EPA's) Integrated Risk Information System (IRIS) chemicals database. However, of the 160 IRIS chemicals lacking even limited human exposure data but possessing animal data that had received a human carcinogenicity assessment by 1 January 2004, we found that in most cases (58.1%; 93/160), the EPA considered the animal carcinogenicity data inadequate to support a classification of probable human carcinogen or non-carcinogen. For the 128 chemicals with human or animal data also assessed by the World Health Organisation's International Agency for Research on Cancer (IARC), IARC human carcinogenicity classifications were compatible with EPA classifications only for the 17 chemicals having at least limited human data (p = 0.5896). For the 111 chemicals primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001). The IARC is a leading international authority on carcinogenicity assessments, and its significantly different human carcinogenicity classifications of identical chemicals indicate that: 1) in the absence of significant human data, the EPA is over-reliant on animal carcinogenicity data; 2) as a result, the EPA tends to over-predict carcinogenic risk; and 3) the true predictivity of animal data for human carcinogenicity is even poorer than is indicated by EPA figures alone. The EPA policy of erroneously assuming that tumours in animals are indicative of human carcinogenicity is implicated as a primary cause of these errors.
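
The two agreement statistics quoted above invite a simple check of how such a comparison can be made. The sketch below runs a sign test on discordant chemicals (those placed in a higher-risk class by one agency than by the other); the counts are invented placeholders, since the abstract does not report the underlying cross-classification, and the test actually used in the study may differ.

```python
# Sketch: is one agency systematically "stricter" than the other?
# Sign test on discordant chemicals; the counts are illustrative placeholders,
# NOT the cross-classification from the study.
from scipy.stats import binomtest

epa_higher = 40    # hypothetical: chemicals the EPA placed in a higher-risk class than the IARC
iarc_higher = 6    # hypothetical: chemicals the IARC placed in a higher-risk class than the EPA

result = binomtest(epa_higher, epa_higher + iarc_higher, p=0.5)
print(f"discordant chemicals = {epa_higher + iarc_higher}, p = {result.pvalue:.2e}")
# A very small p-value would mirror the paper's p < 0.0001 finding that the EPA tends
# to assign higher-risk classifications than the IARC for the same chemicals.
```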

2.
Due to limited human exposure data, risk classification and the consequent regulation of exposure to potential carcinogens have conventionally relied mainly upon animal tests. However, several investigations have revealed animal carcinogenicity data to be lacking in human predictivity. To investigate the reasons for this, we surveyed 160 chemicals within the US Environmental Protection Agency chemicals database that possessed animal but not human exposure data, and that had received human carcinogenicity assessments by 1 January 2004. We found that a wide variety of species had been used, with rodents predominating, that a wide variety of routes of administration had been employed, and that effects were reported in a particularly wide variety of organ systems. The likely causes of the poor human predictivity of rodent carcinogenicity bioassays include: 1) the profound discordance of bioassay results between rodent species, strains and genders, and, further, between rodents and human beings; 2) the variable, yet substantial, stresses caused by handling and restraint, and by the stressful routes of administration common to carcinogenicity bioassays, and their effects on hormonal regulation, immune status and predisposition to carcinogenesis; 3) differences in rates of absorption and transport mechanisms between test routes of administration and other important human routes of exposure; 4) the considerable variability of organ systems in response to carcinogenic insults, both between and within species; and 5) the predisposition of chronic high-dose bioassays toward false positive results, due to the overwhelming of physiological defences and the unnatural elevation of cell division rates during ad libitum feeding studies. Such factors render profoundly difficult any attempt to accurately extrapolate human carcinogenic hazards from animal data.

3.
In a series of papers, Ames and colleagues allege that the scientific and public health communities have perpetuated a series of 'misconceptions' that have resulted in the inaccurate identification of chemicals posing potential human cancer risks, and in misguided cancer prevention strategies and regulatory policies. They conclude that exposures to industrial and synthetic chemicals represent negligible cancer risks and that animal studies have little or no scientific value for assessing human risks. Their conclusions are based on flawed and untested assumptions. For instance, they claim that synthetic residues on food can be ignored because 99.99% of the pesticides humans eat are natural, because the chemicals that plants produce are themselves pesticides, and because their potential to cause cancer equals that of synthetic pesticides. Similarly, Ames does not offer any convincing scientific evidence to justify discrediting bioassays for identifying human carcinogens. Ironically, their arguments center on a ranking procedure that relies on the same experimental data and extrapolation methods they criticize as being unreliable for evaluating cancer risks. We address their inconsistencies and flaws, and present scientific facts and our perspectives on Ames' nine alleged misconceptions. Our conclusions agree with those of the International Agency for Research on Cancer, the National Toxicology Program, and other respected scientific organizations: in the absence of human data, animal studies are the most definitive for assessing human cancer risks. Animal data should not be ignored, and precautions should be taken to lessen human exposures. Dismissing animal carcinogenicity findings would leave human cancer cases as the only means of demonstrating the carcinogenicity of environmental agents. This is unacceptable public health policy.

4.
Conventional animal carcinogenicity tests take around three years to design, conduct and interpret. Consequently, only a tiny fraction of the thousands of industrial chemicals currently in use have been tested for carcinogenicity. Despite the costs of hundreds of millions of dollars and millions of skilled personnel hours, as well as millions of animal lives, several investigations have revealed that animal carcinogenicity data lack human specificity (i.e. the ability to identify human non-carcinogens), which severely limits the human predictivity of the bioassay. This is due to the scientific inadequacies of many carcinogenicity bioassays, and numerous serious biological obstacles, which render profoundly difficult any attempts to accurately extrapolate animal data in order to predict carcinogenic hazards to humans. Proposed modifications to the conventional bioassays have included the elimination of mice as a second species, and the use of genetically-altered or neonatal mice, decreased study durations, initiation-promotion models, the greater incorporation of toxicokinetic and toxicodynamic assessments, structure-activity relationship (computerised) systems, in vitro assays, cDNA microarrays for detecting changes in gene expression, limited human clinical trials, and epidemiological research. The potential advantages of non-animal assays when compared to bioassays include the superior human specificity of the results, substantially reduced time-frames, and greatly reduced demands on financial, personnel and animal resources. Inexplicably, however, the regulatory agencies have been frustratingly slow to adopt alternative protocols. In order to decrease the enormous cost of cancer to society, a substantial redirection of resources away from excessively slow and resource-intensive rodent bioassays, into the further development and implementation of non-animal assays, is both strongly justified and urgently required.

5.
The public understands and supports the ethical use of human subjects in medical research, recognizing the unique role for this type of study in the development of new drugs and therapeutic strategies for treatment of disease. The use of data from human subjects can also be of value in understanding the circumstances under which individuals exposed to chemicals in the food supply, in the workplace, or in the environment might experience toxicity, i.e., in support of risk assessment. However, questions have been raised as to whether this latter type of research is ethical, or can be performed in an ethical manner. Under what circumstances is it acceptable to intentionally expose human subjects to potentially toxic agents? This is an extremely important issue for the risk assessment community to address, because it affects in a fundamental way the types of information that will be available to conduct human health risk assessments. Four papers in this issue offer viewpoints on the value of human data, the circumstances under which human subjects might be exposed to toxic chemicals for research purposes, the ethical problems associated with this research, and the role of human vs. animal data in the development of toxicity values for human health risk assessment.

6.
A new international project to evaluate the relevance for human systemic and local toxicity of in vitro tests of general toxicity of chemicals has been organized by the Scandinavian Society of Cell Toxicology under the title Multicenter Evaluation of In Vitro Cytotoxicity (MEIC). The basic assumptions underlying the project, as well as the practical goals and the design of the program, are outlined. The list of the first 50 reference chemicals is presented. The chemicals are an otherwise unbiased selection of compounds with known human acutely lethal dosage and blood concentrations, including LD50-values in the rat or mouse. Most agents also have other data on human toxicity and toxicokinetics, including more extensive animal toxicity data. International laboratories already using or developing in vitro tests of various partial aspects of general toxicity are invited to test the substances, the results of which will be evaluated by us. The predictivity of the in vitro results for both partial and gross human toxicity data will be determined with combined use of univariate regression analysis and soft multivariate modeling. The predictivity of the in vitro results will be compared with the predictivity of conventional animal tests for the same chemicals. Finally, batteries of tests with optimal prediction power for various types of human toxicity will be selected. The need for and possible uses of such batteries are discussed.
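
The univariate regression step described above can be illustrated with a small sketch. The concentration values below are invented for illustration and are not MEIC data; the point is only that the predictivity of an in vitro endpoint for a human endpoint is quantified by the quality of a (typically log-log) regression fit.

```python
# Sketch of the univariate step: regress the human endpoint on an in vitro endpoint,
# both log-transformed. All concentration values are invented for illustration.
import numpy as np
from scipy.stats import linregress

ic50_mM = np.array([0.05, 0.2, 0.9, 3.0, 12.0, 40.0])          # hypothetical in vitro IC50s
lethal_blood_mM = np.array([0.02, 0.15, 0.5, 2.5, 8.0, 55.0])  # hypothetical human lethal blood conc.

fit = linregress(np.log10(ic50_mM), np.log10(lethal_blood_mM))
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, R^2 = {fit.rvalue ** 2:.2f}")
# R^2 is a simple measure of how well the in vitro endpoint predicts the human endpoint;
# the same fit can be repeated with rodent LD50s to compare predictivities.
```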

7.
In its White Paper, "Strategy for a Future Chemicals Policy", published in 2001, the European Commission (EC) proposed the REACH (Registration, Evaluation and Authorisation of CHemicals) system to deal with both existing and new chemical substances. This system is based on a top-down approach to toxicity testing, in which the degree of toxicity information required is dictated primarily by production volume (tonnage). If testing is to be based on traditional methods, very large numbers of laboratory animals could be needed in response to the REACH system, causing ethical, scientific and logistical problems that would be incompatible with the time-schedule envisaged for testing. The EC has emphasised the need to minimise animal use, but has failed to produce a comprehensive strategy for doing so. The present document provides an overall scheme for predictive toxicity testing, whereby the non-animal methods identified and discussed in a recent and comprehensive ECVAM document could be used in a tiered approach to provide a rapid and scientifically justified basis for the risk assessment of chemicals for their toxic effects in humans. The scheme starts with a preliminary risk assessment process (involving available information on hazard and exposure), followed by testing based on physicochemical properties and (Q)SAR approaches. (Q)SAR analyses are used in conjunction with expert system and biokinetic modelling, and with information on metabolism and identification of the principal metabolites in humans. The resulting information is then combined with production levels and patterns of use to assess potential human exposure. The nature and extent of any further testing should be based strictly on the need to fill essential information gaps in order to generate adequate risk assessments, and should rely on non-animal methods, as far as possible. The scheme also includes a feedback loop, so that new information is used to improve the predictivity of computational expert systems. Several recommendations are made, the most important of which is that the European Union (EU) should actively promote the improvement and validation of (Q)SAR models and expert systems, and computer-based methods for biokinetic modelling, since these offer the most realistic and most economical solution to the need to test large numbers of chemicals.

8.
Integrated testing strategies have been proposed to facilitate the process of chemicals risk assessment and to fulfil the requirements of the proposed EU REACH system. Here, we present individual, decision-tree style strategies for the eleven major toxicity endpoints of the REACH system, including human health effects and ecotoxicity. These strategies make maximum use of non-animal approaches to hazard identification, before resorting to traditional animal test methods. Each scheme: a) comprises a mixture of validated and non-validated assays (distinguished in the schemes); and b) incorporates decision points at key stages, so that further testing can cease whenever the available information is sufficient for classification and labelling and/or risk assessment. The rationale and scientific justification for each of the schemes, with respect to the validation status of the tests involved and their individual advantages and limitations, will be discussed in detail in a series of future publications.
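
A minimal sketch of the decision-point idea described in this entry: tiers are tried in order, and testing stops at the first tier that yields a classifiable result. The tier names, the return convention and the outcomes below are illustrative assumptions, not the published REACH schemes.

```python
# Minimal sketch of a tiered, decision-tree style strategy: tiers are tried in order
# and testing stops at the first tier that yields a classifiable result.
# Tier names, the None/str return convention and the outcomes are illustrative only.
from typing import Callable, Optional

Tier = tuple[str, Callable[[str], Optional[str]]]   # (tier name, assay returning a class or None)

def classify(chemical: str, tiers: list[Tier]) -> str:
    for name, assay in tiers:
        outcome = assay(chemical)        # e.g. "irritant", "not classified", or None (inconclusive)
        if outcome is not None:          # decision point: enough information, stop testing here
            return f"{outcome} (decided at tier: {name})"
    return "further testing required"

tiers: list[Tier] = [
    ("existing data / read-across", lambda c: None),
    ("(Q)SAR prediction",           lambda c: None),
    ("in vitro assay",              lambda c: "irritant"),   # placeholder outcome
]
print(classify("chemical X", tiers))
```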

9.
C Ramel. Mutation Research, 1986, 168(3): 327-342
The deployment of short-term assays for the detection of carcinogens inevitably has to be based on the genetic alterations actually involved in carcinogenesis. This paper gives an overview of oncogene activation and other mutagenic events connected with cancer induction. It is emphasized that there are indications of DNA alterations in carcinogenesis which are not in accordance with "conventional" mutations and mutation frequencies, as measured by short-term assays of point mutations, chromosome aberrations and numerical chromosome changes. The DNA alterations implicated in carcinogenesis but not covered by the endpoints of short-term assays in current use include transpositions, insertion mutations, polygene mutations, gene amplifications and DNA methylation. Furthermore, tumourigenicity may imply the induction of a genetic instability, followed by a cascade of genetic alterations. The evaluation of short-term assays for carcinogenesis mostly involves two correlations: between mutation and animal cancer data on the one hand, and between animal cancer data and human carcinogenicity on the other. It should be stressed that animal cancer bioassays in general test specifically for the ability of chemicals to act as complete carcinogens, which may be a rather poor reflection of the actual situation in human populations. The primary aim of short-term mutagenicity assays is to provide evidence as to whether a compound can be expected to cause mutations in humans, and such evidence has to be considered seriously even against a background of negative cancer data. For the evaluation of data from short-term assays, the massive amount of empirical data from different assays should be used, and new computer systems along these lines can be expected to provide improved predictions of carcinogenicity.

10.
111 chemicals of known rodent carcinogenicity (49 carcinogens, 62 noncarcinogens), including many promoters of carcinogenesis, nongenotoxic carcinogens, hepatocarcinogens, and halogenated hydrocarbons, were selected for study. The chemicals were administered by gavage at two dose levels to female Sprague-Dawley rats. The effects of these 111 chemicals on 4 biochemical assays (hepatic DNA damage by alkaline elution (DD), hepatic ornithine decarboxylase activity (ODC), serum alanine aminotransferase activity (ALT), and hepatic cytochrome P-450 content (P450)) were determined. Composite parameters are defined as follows: CP = [ODC and P450], CT = [ALT and ODC], and TS = [DD or CP or CT]. The operational characteristics of TS for predicting rodent cancer were sensitivity 55%, specificity 87%, positive predictivity 77%, negative predictivity 71%, and concordance 73%. For these chemicals, the 73% concordance of this study was superior to the concordance obtained from published data from other laboratories on the Ames test (53%), structural alerts (SA) (46%), chromosome aberrations in Chinese hamster ovary cells (ABS) (48%), cell mutation in mouse lymphoma L5178Y cells (MOLY) (52%), and sister-chromatid exchange in Chinese hamster ovary cells (SCE) (60%). The 4 in vivo biochemical assays were complementary to each other. The composite parameter TS also shows complementarity to all 5 other predictors of rodent cancer examined in this paper. For example, the Ames test alone has a concordance of only 53%. In combination with TS, the concordance is increased to 62% (Ames or TS) or to 63% (Ames and TS). For the 67 chemicals with data available for SA, the concordance for predicting rodent carcinogenicity was 47% (for SA alone), 54% (for SA or TS), and 66% (for SA and TS). These biochemical assays will be useful: (1) to predict rodent carcinogenicity per se; (2) to 'confirm' the results of short-term mutagenicity tests by the high-specificity mode of the biochemical assays (in which the specificity and positive predictivity are both 100%); and (3) to be a component of future complementary batteries of tests for predicting rodent carcinogenicity.
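
The composite parameters and operational characteristics quoted in this abstract are simple Boolean and ratio calculations, and can be reproduced mechanically. A sketch with invented per-chemical assay calls (not the study's data) showing how TS and the reported metrics are derived:

```python
# Sketch: the composite parameter TS = [DD or CP or CT], with CP = [ODC and P450] and
# CT = [ALT and ODC], evaluated against rodent bioassay outcomes. Records are invented.
def ts_call(dd: bool, odc: bool, alt: bool, p450: bool) -> bool:
    cp = odc and p450
    ct = alt and odc
    return dd or cp or ct

# (DD, ODC, ALT, P450, rodent_carcinogen) for a handful of hypothetical chemicals
records = [
    (True,  True,  False, True,  True),
    (False, True,  True,  False, True),
    (False, False, False, True,  False),
    (False, False, False, False, False),
    (True,  False, False, False, False),
]

tp = sum(ts_call(*r[:4]) and r[4] for r in records)
fp = sum(ts_call(*r[:4]) and not r[4] for r in records)
fn = sum(not ts_call(*r[:4]) and r[4] for r in records)
tn = sum(not ts_call(*r[:4]) and not r[4] for r in records)

print("sensitivity          ", tp / (tp + fn))
print("specificity          ", tn / (tn + fp))
print("positive predictivity", tp / (tp + fp))
print("negative predictivity", tn / (tn + fn))
print("concordance          ", (tp + tn) / len(records))
```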

11.
Exposure of the respiratory tract to airborne particles (including metal dusts and nanoparticles) is considered a serious health hazard. For a wide range of substances, basic knowledge about their toxic properties and the underlying pathomechanisms is limited or completely missing. REACH legislation demands the toxicological characterization of all chemicals placed on the market by 2018. Because in vivo toxicological data on acute lung toxicity are scarce or subject to distinct limitations (e.g. inter-species differences), and because legislation also calls for the reduction of animal experiments in general (the “3R” principle), sound in vitro models have to be established and characterized to meet these requirements. In this paper, we characterize a recently introduced advanced in vitro exposure system (Cultex® RFS), which closely resembles the physiological in vivo exposure situation, for the assessment of the acute pulmonary toxicity of airborne materials.

12.
Two-year rodent bioassays play a key role in the assessment of the carcinogenic potential of chemicals to humans. The seventh amendment to the European Cosmetics Directive will, from 2013, ban the marketing of cosmetic and personal care products that contain ingredients tested in animal models. Thus, two-year rodent bioassays will not be available for cosmetic/personal care products. Furthermore, for large testing programs like REACH, in vivo carcinogenicity testing is impractical. Alternative approaches to carcinogenicity assessment are therefore urgently required. In terms of standardization and validation, the most advanced in vitro tests for carcinogenicity are the cell transformation assays (CTAs). Although CTAs do not mimic the whole in vivo carcinogenesis process, they represent valuable support in identifying the transforming potential of chemicals. CTAs have been shown to detect genotoxic as well as non-genotoxic carcinogens, and are helpful in the determination of thresholds for genotoxic and non-genotoxic carcinogens. The extensive review of CTAs by the OECD (OECD (2007) Environmental Health and Safety Publications, Series on Testing and Assessment, No. 31) and the proven within- and between-laboratory reproducibility of the SHE CTAs justify broader use of these methods to assess the carcinogenic potential of chemicals.

13.
Tonnage-based information requirements are specified in the proposed Regulation on the Registration, Evaluation and Authorisation of Chemicals (REACH) in the European Union. The hazard assessment for toxic endpoints should be performed using a tiered approach, i.e. as an information strategy (IS), starting with an evaluation of all of the data already available, including animal in vivo and in vitro data, human evidence and case reports, and data from (Quantitative) Structure-Activity Relationships ([Q]SARs) or read-across, before any further testing is suggested. To contribute to the implementation of the REACH system, the Nordic countries launched two projects: 1) a review of currently used testing strategies, including a comparison with the REACH requirements; and 2) the development of detailed ISs for skin and eye irritation/corrosion. The review showed that the ISs and classification criteria for the selected endpoints are inconsistent in many cases. In the classification criteria, human data and in vivo test results are usually prerequisites. Other types of information, such as data from in vitro studies, can sometimes be used, but usually only as supportive evidence. This differs from the REACH ISs, in which QSARs, read-across and in vitro testing are important elements. In the other part of the project, an IS for skin and eye irritation/corrosion was proposed. The strategy was "tested" by using four high production volume (HPV) chemicals: hydrogen peroxide, methyl tertiary-butyl ether (MTBE), trivalent chromium, and diantimony trioxide, but only MTBE and trivalent chromium are dealt with in this paper. The "test" revealed that in vivo data, human case reports and physicochemical data were available and could be used in the evaluation. Classification could be based on the proposed IS and the existing data in all cases, except for the eye irritation/corrosion of trivalent chromium. Weight-of-evidence analysis appeared to be a useful step in the proposed ISs, and its inclusion in the REACH strategies should be considered. For these chemicals, few in vitro and (Q)SAR data were available; more of these data would be generated if the relevant guidance and legislation on classification were updated.

14.
Phytotechnologies have the potential to reduce the amount or toxicity of deleterious chemicals and agents, and can thereby reduce human exposures to hazardous substances. As such, phytotechnologies are tools for primary prevention in public health. Recent research demonstrates that phytotechnologies can be uniquely tailored for effective exposure prevention in a variety of applications. In addition to exposure prevention, plants can be used as sensors to identify environmental contamination and potential exposures. In this paper, we present applications and research developments in a framework that illustrates how phytotechnologies can meet basic public health needs for access to clean water, air, and food. Because communities can often integrate plant-based technologies at minimal cost and with low infrastructure needs, these technologies can be applied broadly to minimize potential contaminant exposure and improve environmental quality. These natural treatment systems also provide valuable ecosystem services to communities and society. In the future, integrating and coordinating phytotechnology activities with public health research will allow technology development focused on the prevention of environmental exposures to toxic compounds. Hence, phytotechnologies may provide sustainable solutions to environmental exposure challenges, improving public health and potentially reducing the burden of disease.

15.
Chemical carcinogenicity has been the target of a large array of attempts to create alternative predictive models, ranging from short-term biological assays (e.g. mutagenicity tests) to theoretical models. Among the theoretical models, the application of the science of structure-activity relationships (SAR) has earned special prominence. A crucial element is the independent evaluation of predictive ability. In the past decade, there have been two fundamental comparative exercises on the prediction of chemical carcinogenicity, held under the aegis of the US National Toxicology Program (NTP). In both exercises, the predictions were published before the animal data were known, thus applying a most stringent criterion of predictivity. We analyzed the results of the first comparative exercise in a previous paper [Mutat. Res. 387 (1997) 35]; here, we present the complete results of the second exercise, and we analyze and compare the prediction sets. The range of accuracy values was quite large: the systems that performed best in this prediction exercise achieved accuracies in the range of 60-65%. They included various human expert approaches (e.g. Oncologic) and biologically based approaches (e.g. the experimental transformation assay in Syrian hamster embryo (SHE) cells). The main difficulty for the structure-activity relationship-based approaches was the discrimination between real carcinogens and non-carcinogens containing structural alerts (SA) for genotoxic carcinogenicity. It is shown that the use of quantitative structure-activity relationship models, where possible, can help to overcome this problem. Overall, given the uncertainty linked to the predictions, the predictions for individual chemicals cannot be taken at face value; however, the general level of knowledge available today (especially for genotoxic carcinogens) allows qualified human experts to perform very efficient priority setting across large sets of chemicals.
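
The structural-alert screening this entry discusses amounts to substructure matching. The sketch below uses RDKit with a few commonly cited genotoxicity alerts encoded as SMARTS; the patterns are simplified illustrations rather than a validated alert set, and, as the abstract stresses, an alert by itself does not establish carcinogenicity.

```python
# Sketch of structural-alert (SA) screening with RDKit. The SMARTS patterns are
# simplified illustrations of commonly cited genotoxicity alerts, not a validated set;
# an alert flags a hypothesis for expert review, not proof of carcinogenicity.
from rdkit import Chem

ALERTS = {
    "aromatic nitro": Chem.MolFromSmarts("c[N+](=O)[O-]"),
    "aromatic amine": Chem.MolFromSmarts("c[NX3;H2]"),
    "epoxide":        Chem.MolFromSmarts("C1OC1"),
    "N-nitroso":      Chem.MolFromSmarts("[NX3][NX2]=O"),
}

def structural_alerts(smiles: str) -> list[str]:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return [name for name, patt in ALERTS.items() if mol.HasSubstructMatch(patt)]

print(structural_alerts("Nc1ccccc1"))   # aniline  -> ['aromatic amine']
print(structural_alerts("CCO"))         # ethanol  -> []
```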

16.
Rosenkranz HS. Mutation Research, 2003, 529(1-2): 117-127
The health risk manager and policy analyst must frequently make recommendations based upon incomplete toxicity data. This situation is commonly encountered in the evaluation of human carcinogenic risks, as animal cancer bioassay results are often not available. In this study, in order to assess the relevance of other possible indicators of carcinogenic risk, we used the "chemical diversity approach" to estimate the magnitude of the human carcinogenic risk based upon Salmonella mutagenicity and systemic toxicity data for the "universe of chemicals" to which humans have the potential to be exposed. Analyses of the properties of 10,000 agents representative of the "universe of chemicals" suggest that chemicals that are genotoxic and exhibit greater systemic toxicity are more likely to be carcinogens than non-genotoxicants or agents exhibiting lesser toxicity. Since "genotoxic" carcinogenicity is a hallmark of recognized human carcinogens, these findings are relevant to human cancer risk assessment.

17.
In order to establish safe exposure levels for toxic chemicals, risk assessment guidelines have been developed. The author compiles the elements of the risk assessment of hazardous neurotoxic pesticides, using data obtained from human epidemiological studies, from animal experiments, from the international literature, and from the author's own experiments. Well-controlled laboratory studies of neurotoxicity have the potential to provide adequate exposure and effect data for accurate hazard identification. Animal models of neurotoxicity, employing highly sensitive behavioral and neurophysiological methods as a function of dose, provide data for human low-dose extrapolation using mathematical models. This procedure can form the basis for reducing risk ("risk management"); therefore, examples are given of how to handle properly neurotoxic pesticides posing different (high or low) risks.
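
The "safe exposure level" arithmetic underlying such assessments is conventionally a point of departure divided by uncertainty factors. A minimal sketch with an invented NOAEL and the common default factors of 10 for interspecies extrapolation and 10 for human variability (additional factors may apply in practice):

```python
# Sketch of the conventional 'safe level' arithmetic: an animal point of departure
# divided by default uncertainty factors. The NOAEL value is invented for illustration.
noael_mg_per_kg_day = 5.0   # hypothetical NOAEL from an animal neurotoxicity study
uf_interspecies = 10        # animal-to-human extrapolation
uf_intraspecies = 10        # variability within the human population
# further factors (e.g. for database deficiencies) may be applied case by case

reference_dose = noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies)
print(f"reference dose ~ {reference_dose:.3f} mg/kg bw/day")   # 0.050 with these inputs
```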

18.
Comprehensive and reliable risk management of chemicals requires appropriate information, data integration, and sharing, as suggested in Europe by the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation. REACH requires greater sharing of responsibilities among authorities, chemical manufacturers, importers, and users to manage the risks that chemicals can pose to human health and the environment throughout their life cycle. This has considerably enlarged the audience interested in gathering such information. A major bottleneck is that information sources on chemicals are frequently sparsely distributed, and are collected and managed by different institutions with different aims, resulting in several practical problems. This article describes the conceptual design and implementation of a free-access online database (DESC) as an integrated information system on chemical substances in compliance with the REACH regulation. An interdisciplinary approach was applied, involving experts from several disciplines (ecotoxicologists, chemists, information technology specialists, regulators). DESC contains relevant environmental and toxicological data (physico-chemical and ecotoxicological data, inclusion in priority lists, current classification and labeling, etc.) on more than 651 chemicals, and can easily be consulted by people with different degrees of expertise who are interested in the risks from exposure to chemicals and in their safe use.
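
The abstract outlines the conceptual design of an integrated record per substance but does not give the schema. The sketch below is a purely hypothetical illustration of what one such aggregated record might hold; the field names and example values are assumptions, not the actual DESC data model.

```python
# Purely hypothetical sketch of one aggregated substance record of the kind DESC holds.
# Field names and example values are illustrative assumptions, not the actual DESC schema.
from dataclasses import dataclass, field

@dataclass
class SubstanceRecord:
    name: str
    cas_number: str
    physchem: dict[str, float] = field(default_factory=dict)        # e.g. log Kow, water solubility
    ecotox: dict[str, float] = field(default_factory=dict)          # e.g. algal/daphnid/fish EC50s
    classification_labelling: list[str] = field(default_factory=list)
    priority_lists: list[str] = field(default_factory=list)

record = SubstanceRecord(
    name="benzene",
    cas_number="71-43-2",
    physchem={"log_kow": 2.13},
    classification_labelling=["Carc. 1A"],
)
print(record.name, record.classification_labelling)
```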

19.
The assumption that animal models are reasonably predictive of human outcomes provides the basis for their widespread use in toxicity testing and in biomedical research aimed at developing cures for human diseases. To investigate the validity of this assumption, the comprehensive Scopus biomedical bibliographic databases were searched for published systematic reviews of the human clinical or toxicological utility of animal experiments. Of 20 reviews in which clinical utility was examined, only two concluded that animal models were either significantly useful in contributing to the development of clinical interventions or substantially consistent with clinical outcomes, and one of those conclusions was contentious. These reviews included assessments of the clinical utility of experiments expected by ethics committees to lead to medical advances, of highly cited experiments published in major journals, and of chimpanzee experiments (the species considered most likely to be predictive of human outcomes). Seven additional reviews failed to clearly demonstrate utility in predicting human toxicological outcomes, such as carcinogenicity and teratogenicity. Consequently, animal data may not generally be assumed to be substantially useful for these purposes. Possible causes include interspecies differences, the distortion of outcomes arising from experimental environments and protocols, and the poor methodological quality of many animal experiments, which was evident in at least 11 reviews. No reviews existed in which the majority of animal experiments were of good methodological quality. Whilst the effects of some of these problems might be minimised with concerted effort (given their widespread prevalence), the limitations resulting from interspecies differences are likely to be technically and theoretically impossible to overcome. Non-animal models are generally required to pass formal scientific validation prior to their regulatory acceptance. In contrast, animal models are simply assumed to be predictive of human outcomes. These results demonstrate the invalidity of such assumptions. The consistent application of formal validation studies to all test models is clearly warranted, regardless of their animal, non-animal, historical, contemporary or possible future status. Likely benefits would include the greater selection of models truly predictive of human outcomes, increased safety of people exposed to chemicals that have passed toxicity tests, increased efficiency during the development of human pharmaceuticals and other therapeutic interventions, and decreased wastage of animal, personnel and financial resources. The poor human clinical and toxicological utility of most animal models for which data exist, in conjunction with their generally substantial animal welfare and economic costs, justifies a ban on animal models lacking scientific data clearly establishing their human predictivity or utility.
