Similar articles
 20 similar articles found
1.
The 2001 European Commission proposal for the Registration, Evaluation and Authorisation of Chemicals (REACH) aims to improve public and environmental health by assessing the toxicity of, and restricting exposure to, potentially toxic chemicals. The greatest benefits are expected to accrue from decreased cancer incidences. Hence, the accurate identification of chemical carcinogens must be a top priority for the REACH system. Due to a paucity of human clinical data, the identification of potential human carcinogens has conventionally relied on animal tests. However, our survey of the US Environmental Protection Agency's (EPA's) toxic chemicals database revealed that, for a majority of the chemicals of greatest public health concern (93/160, i.e. 58.1%), the EPA found animal carcinogenicity data to be inadequate to support classifications of probable human carcinogen or non-carcinogen. A wide variety of species were used, with rodents predominating; a wide variety of routes of administration were used; and a particularly wide variety of organ systems were affected. These factors raise serious biological obstacles that render accurate extrapolation to humans profoundly difficult. Furthermore, significantly different International Agency for Research on Cancer assessments of identical chemicals indicate that the true human predictivity of animal carcinogenicity data is even poorer than is indicated by the EPA figures alone. Consequently, we propose the replacement of animal carcinogenicity bioassays with a tiered combination of non-animal assays, which can be expected to yield a weight-of-evidence characterisation of carcinogenic risk with superior human predictivity. Additional advantages include substantial savings of financial, human and animal resources, and potentially greater insights into mechanisms of carcinogenicity.

2.
C. Ramel, Mutation Research, 1986, 168(3): 327-342
The deployment of short-term assays for the detection of carcinogens inevitably has to be based on the genetic alterations actually involved in carcinogenesis. This paper gives an overview of oncogene activation and other mutagenic events connected with cancer induction. It is emphasized that there are indications of DNA alterations in carcinogenicity which are not in accordance with "conventional" mutations and mutation frequencies, as measured by short-term assays of point mutations, chromosome aberrations and numerical chromosome changes. This discrepancy between the DNA alterations involved in carcinogenicity and the endpoints of short-term assays in current use includes transpositions, insertion mutations, polygene mutations, gene amplifications and DNA methylations. Furthermore, tumourigenicity may imply the induction of genetic instability, followed by a cascade of genetic alterations. The evaluation of short-term assays for carcinogenesis mostly involves two correlations: between mutation and animal cancer data on the one hand, and between animal cancer data and human carcinogenicity on the other. It should be stressed that animal bioassays for cancer in general test specifically for the property of chemicals to function as complete carcinogens, which may be a rather poor reflection of the actual situation in human populations. The primary aim of short-term mutagenicity assays is to provide evidence as to whether a compound can be expected to cause mutations in humans, and such evidence has to be considered seriously even against a background of negative cancer data. For the evaluation of data from short-term assays, the massive amount of empirical data from different assays should be used, and new computer systems in that direction can be expected to provide improved predictions of carcinogenicity.

3.
Due to limited human exposure data, risk classification and the consequent regulation of exposure to potential carcinogens have conventionally relied mainly upon animal tests. However, several investigations have revealed animal carcinogenicity data to be lacking in human predictivity. To investigate the reasons for this, we surveyed 160 chemicals possessing animal but not human exposure data within the US Environmental Protection Agency chemicals database, but which had received human carcinogenicity assessments by 1 January 2004. We discovered the use of a wide variety of species, with rodents predominating, and of a wide variety of routes of administration, and that there were effects on a particularly wide variety of organ systems. The likely causes of the poor human predictivity of rodent carcinogenicity bioassays include: 1) the profound discordance of bioassay results between rodent species, strains and genders, and further, between rodents and human beings; 2) the variable, yet substantial, stresses caused by handling and restraint, and the stressful routes of administration common to carcinogenicity bioassays, and their effects on hormonal regulation, immune status and predisposition to carcinogenesis; 3) differences in rates of absorption and transport mechanisms between test routes of administration and other important human routes of exposure; 4) the considerable variability of organ systems in response to carcinogenic insults, both between and within species; and 5) the predisposition of chronic high dose bioassays toward false positive results, due to the overwhelming of physiological defences, and the unnatural elevation of cell division rates during ad libitum feeding studies. Such factors render profoundly difficult any attempts to accurately extrapolate human carcinogenic hazards from animal data.

4.
The ability of plant genotoxicity assays to predict carcinogenicity
A number of assays have been developed which use higher plants for measuring mutagenic or cytogenetic effects of chemicals, as an indication of carcinogenicity. Plant assays require less extensive equipment, materials and personnel than most other genotoxicity tests, which is a potential advantage, particularly in less developed parts of the world. We have analyzed data on 9 plant genotoxicity assays evaluated by the Gene-Tox program of the U.S. Environmental Protection Agency, using methodologies we have recently developed to assess the capability of assays to predict carcinogenicity and carcinogenic potency. All 9 of the plant assays appear to have high sensitivity (few false negatives). Specificity (rate of true negatives) was more difficult to evaluate because of limited testing on non-carcinogens; however, available data indicate that only the Arabidopsis mutagenicity (ArM) test appears to have high specificity. Based upon their high sensitivity, plant genotoxicity tests are most appropriate for a risk-averse testing program, because although many false positives will be generated, the relatively few negative results will be quite reliable.
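As a minimal illustration of the sensitivity and specificity measures discussed in this abstract (the counts below are invented and are not the Gene-Tox figures):

```python
# Generic illustration of assay sensitivity and specificity from 2x2 counts.
# The counts are hypothetical, not data from the cited Gene-Tox evaluation.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true carcinogens that the assay calls positive."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-carcinogens that the assay calls negative."""
    return tn / (tn + fp)

# Hypothetical counts for a plant assay screened against rodent bioassay calls.
tp, fn = 45, 5    # carcinogens: positive / negative in the plant assay
tn, fp = 8, 12    # non-carcinogens: negative / positive in the plant assay

print(f"sensitivity = {sensitivity(tp, fn):.2f}")   # high: few false negatives
print(f"specificity = {specificity(tn, fp):.2f}")   # low: many false positives
```

A high-sensitivity, low-specificity profile like this one matches the abstract's point: positives are frequent and often false, but the comparatively rare negatives are reliable.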

5.
For a number of years, investigators have recognized that humans potentially are exposed to large numbers of genotoxicants. Many efforts have attempted to validate various short-term bioassays for use as rapid, inexpensive screens for genotoxicants--especially carcinogens. In this analysis, we examine Salmonella mutagenicity as an indicator of potential carcinogenicity by comparing published (and when possible, evaluated) Salmonella results with the evaluated Gene-Tox animal carcinogen data base. The Salmonella bioassay does especially well in those cases where the level of evidence for carcinogenicity is the strongest. Analysis shows that except for specific classes of compounds, the plate-incorporation protocol and the preincubation protocol are equally efficient at detecting mutagens. This paper also demonstrates how validation values (sensitivity, specificity, etc.) vary with chemical class. Overall, this analysis demonstrates that when used and interpreted in a meaningful chemical class context, the Salmonella bioassay remains extremely useful in identifying potential animal carcinogens.

6.
The assumption that animal models are reasonably predictive of human outcomes provides the basis for their widespread use in toxicity testing and in biomedical research aimed at developing cures for human diseases. To investigate the validity of this assumption, the comprehensive Scopus biomedical bibliographic databases were searched for published systematic reviews of the human clinical or toxicological utility of animal experiments. Of the 20 reviews in which clinical utility was examined, the authors concluded that animal models were either significantly useful in contributing to the development of clinical interventions, or were substantially consistent with clinical outcomes, in only two cases, one of which was contentious. These reviews included assessments of the clinical utility of experiments expected by ethics committees to lead to medical advances, of highly-cited experiments published in major journals, and of chimpanzee experiments--those involving the species considered most likely to be predictive of human outcomes. Seven additional reviews failed to clearly demonstrate utility in predicting human toxicological outcomes, such as carcinogenicity and teratogenicity. Consequently, animal data may not generally be assumed to be substantially useful for these purposes. Possible causes include interspecies differences, the distortion of outcomes arising from experimental environments and protocols, and the poor methodological quality of many animal experiments, which was evident in at least 11 reviews. No reviews existed in which the majority of animal experiments were of good methodological quality. Whilst the effects of some of these problems might be minimised with concerted effort, given their widespread prevalence, the limitations resulting from interspecies differences are likely to be technically and theoretically impossible to overcome. Non-animal models are generally required to pass formal scientific validation prior to their regulatory acceptance. In contrast, animal models are simply assumed to be predictive of human outcomes. These results demonstrate the invalidity of such assumptions. The consistent application of formal validation studies to all test models is clearly warranted, regardless of their animal, non-animal, historical, contemporary or possible future status. Likely benefits would include the greater selection of models truly predictive of human outcomes, increased safety of people exposed to chemicals that have passed toxicity tests, increased efficiency during the development of human pharmaceuticals and other therapeutic interventions, and decreased wastage of animal, personnel and financial resources. The poor human clinical and toxicological utility of most animal models for which data exist, in conjunction with their generally substantial animal welfare and economic costs, justifies a ban on animal models lacking scientific data clearly establishing their human predictivity or utility.

7.
Two-year rodent bioassays play a key role in the assessment of the carcinogenic potential of chemicals to humans. The seventh amendment to the European Cosmetics Directive will ban, in 2013, the marketing of cosmetic and personal care products that contain ingredients that have been tested in animal models. Thus, 2-year rodent bioassays will not be available for cosmetics/personal care products. Furthermore, for large testing programs like REACH, in vivo carcinogenicity testing is impractical. Alternative approaches to carcinogenicity assessment are urgently required. In terms of standardization and validation, the most advanced in vitro tests for carcinogenicity are the cell transformation assays (CTAs). Although CTAs do not mimic the whole carcinogenesis process in vivo, they represent valuable support in identifying the transforming potential of chemicals. CTAs have been shown to detect genotoxic as well as non-genotoxic carcinogens and are helpful in the determination of thresholds for genotoxic and non-genotoxic carcinogens. The extensive review of CTAs by the OECD (OECD (2007) Environmental Health and Safety Publications, Series on Testing and Assessment, No. 31) and the proven within- and between-laboratory reproducibility of the SHE CTAs justify broader use of these methods to assess the carcinogenic potential of chemicals.

8.
Cluster analysis can be a useful tool for exploratory data analysis to uncover natural groupings in data, and to initiate new ideas and hypotheses about such groupings. When applied to short-term assay results, it provides and improves estimates of the sensitivity and specificity of assays, indicates associations between assays and, in turn, which assays can be substituted for one another in a battery, and allows a data base containing test results on chemicals of unknown carcinogenicity to be linked to a data base for which animal carcinogenicity data are available. Cluster analysis was applied to the Gene-Tox data base (which contains short-term test results on chemicals of both known and unknown carcinogenicity). The results on chemicals of known carcinogenicity were different from those obtained when the entire data base was analyzed. This suggests that the associations (and possibly the sensitivities and specificities) which are based on chemicals of known carcinogenicity may not be representative of the true measures. Cluster analysis applied to the total data base should be useful in improving these estimates. Many of the associations between the assays which were found through the use of cluster analysis could be 'validated' on the basis of previous knowledge of the mechanistic basis of the various tests, but some of the associations were unsuspected. These associations may be a reflection of a non-ideal data base. As additional data become available and new clustering techniques for handling non-ideal data bases are developed, results from such analyses could play an increasing role in strengthening prediction schemes which utilize short-term test results to screen chemicals for carcinogenicity, such as the carcinogenicity prediction and battery selection (CPBS) method (Chankong et al., 1985).
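A minimal sketch of the kind of clustering described above, using hypothetical binary assay-by-chemical results (the Gene-Tox data and the paper's exact clustering method are not reproduced here; hierarchical clustering on Jaccard distances is used purely as one plausible illustration). Assays with similar response patterns across chemicals fall into the same cluster, suggesting possible substitutability within a battery:

```python
# Hypothetical illustration of clustering short-term assays by the similarity
# of their results across chemicals (1 = positive, 0 = negative).
# Not the Gene-Tox data; all values are invented for the sketch.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

assays = ["Ames", "MLA", "CA", "SCE", "ArM"]
# rows = assays, columns = chemicals
results = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 1, 1, 0],
    [1, 0, 0, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 1, 0, 0],
])

# Jaccard distance between assay response patterns, then average-linkage clustering.
dist = pdist(results, metric="jaccard")
tree = linkage(dist, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")

for assay, label in zip(assays, labels):
    print(f"{assay}: cluster {label}")
```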

9.
The prudent assumption that carcinogen bioassays in rodents predict for human carcinogenicity is examined. It is suggested that in certain cases, as for example the induction of tumors against a high incidence in controls, or in situations in which high dose toxicity may be a critical factor in the induction of cancer, the probability that animal bioassays predict for humans may be low. The term 'biological risk assessment' is introduced to describe that part of risk assessment concerned with the relevance of specific animal results to the induction of human cancer. Biological risk assessment, which is almost entirely dependent on an understanding of carcinogenesis mechanisms, is an important addition to the present mathematical modeling used to extrapolate from the effects of animal carcinogens demonstrated after high dose exposure to the effects of the much smaller doses to which humans are perceived to be exposed. Evidence for the conclusions reached by biological risk assessment may sometimes be supported by a careful review of human epidemiological data.

10.
The regulation of human exposure to potentially carcinogenic chemicals constitutes society's most important use of animal carcinogenicity data. Environmental contaminants of greatest concern within the USA are listed in the Environmental Protection Agency's (EPA's) Integrated Risk Information System (IRIS) chemicals database. However, of the 160 IRIS chemicals lacking even limited human exposure data but possessing animal data that had received a human carcinogenicity assessment by 1 January 2004, we found that in most cases (58.1%; 93/160), the EPA considered animal carcinogenicity data inadequate to support a classification of probable human carcinogen or non-carcinogen. For the 128 chemicals with human or animal data also assessed by the World Health Organisation's International Agency for Research on Cancer (IARC), human carcinogenicity classifications were compatible with EPA classifications only for those 17 having at least limited human data (p = 0.5896). For those 111 primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001). The IARC is a leading international authority on carcinogenicity assessments, and its significantly different human carcinogenicity classifications of identical chemicals indicate that: 1) in the absence of significant human data, the EPA is over-reliant on animal carcinogenicity data; 2) as a result, the EPA tends to over-predict carcinogenic risk; and 3) the true predictivity for human carcinogenicity of animal data is even poorer than is indicated by EPA figures alone. The EPA policy of erroneously assuming that tumours in animals are indicative of human carcinogenicity is implicated as a primary cause of these errors.
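The abstract does not state which statistical test produced the quoted p-values. As a hedged sketch of the kind of paired-classification comparison it describes, the snippet below runs an exact sign test (McNemar-style) on invented counts of chemicals classified as higher-risk by one agency than the other; it is an illustration only, not the authors' actual analysis:

```python
# Hypothetical comparison of paired EPA vs. IARC classifications of the same
# chemicals. Counts are invented; an exact sign test on the discordant pairs
# is used purely as an example of testing whether one agency systematically
# assigns higher-risk classifications.
from scipy.stats import binomtest

epa_higher_risk = 40   # chemicals rated higher-risk by the EPA than by the IARC
iarc_higher_risk = 5   # chemicals rated higher-risk by the IARC than by the EPA

n_discordant = epa_higher_risk + iarc_higher_risk
result = binomtest(epa_higher_risk, n_discordant, p=0.5, alternative="two-sided")
print(f"discordant pairs: {n_discordant}, p-value = {result.pvalue:.2e}")
```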

11.
Replacing animal procedures with methods such as cells and tissues in vitro, volunteer studies, physicochemical techniques and computer modelling is driven by legislative, scientific and moral imperatives. Non-animal approaches are now considered advanced methods that can overcome many of the limitations of animal experiments. In testing medicines and chemicals, in vitro assays have spared hundreds of thousands of animals. In contrast, academic animal use continues to rise, and the concept of replacement seems less well accepted in university research. Even so, some animal procedures have been replaced in neurological, reproductive and dentistry research, and progress is being made in fields such as respiratory illnesses, pain and sepsis. Systematic reviews of the transferability of animal data to the clinical setting may encourage a fresh look for novel non-animal methods and, as mainstream funding becomes available, more advances in replacement are expected.

12.
In a series of papers, Ames and colleagues allege that the scientific and public health communities have perpetuated a series of 'misconceptions' that have resulted in the inaccurate identification of chemicals that pose potential human cancer risks, and in misguided cancer prevention strategies and regulatory policies. They conclude that exposures to industrial and synthetic chemicals represent negligible cancer risks and that animal studies have little or no scientific value for assessing human risks. Their conclusions are based on flawed and untested assumptions. For instance, they claim that synthetic residues on food can be ignored because 99.99% of the pesticides humans eat are natural, that chemicals in plants are pesticides, and that their potential to cause cancer equals that of synthetic pesticides. Similarly, Ames does not offer any convincing scientific evidence to justify discrediting bioassays for identifying human carcinogens. Ironically, their arguments center on a ranking procedure that relies on the same experimental data and extrapolation methods they criticize as being unreliable for evaluating cancer risks. We address their inconsistencies and flaws, and present scientific facts and our perspectives surrounding Ames' nine alleged misconceptions. Our conclusions agree with those of the International Agency for Research on Cancer, the National Toxicology Program, and other respected scientific organizations: in the absence of human data, animal studies are the most definitive for assessing human cancer risks. Animal data should not be ignored, and precautions should be taken to lessen human exposures. Dismissing animal carcinogenicity findings would leave human cancer cases as the only means of demonstrating the carcinogenicity of environmental agents. This is unacceptable public health policy.

13.
Bacterial and cell culture genotoxicity assays have proven valuable in the identification of DNA-reactive carcinogens, because mutational events that alter the activity or expression of growth control genes are a key step in carcinogenesis. The addition of metabolizing enzymes to these assays has expanded the ability to identify agents that require metabolic activation. However, chemical carcinogenesis is a complex process dependent on toxicokinetics and involving at least the steps of initiation, promotion and progression. Identification of those carcinogens that are activated in a manner unique to the whole animal, such as 2,6-dinitrotoluene, requires in vivo genotoxicity assays. There are many different classes of non-DNA-reactive carcinogens, ranging from the potent promoter 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), which acts through a specific receptor, to compounds that alter growth control, such as phenobarbital. Many compounds, such as saccharin, appear to exhibit initiating, promotional and/or carcinogenic activity as events secondary to induced cytotoxicity and cell proliferation seen only at the chronic lifetime maximum tolerated doses mandated in rodent bioassays. Simple plus/minus vs. carcinogen/non-carcinogen comparisons used to validate the predictivity of bacterial and cell culture genotoxicity assays have revealed that a more comprehensive analysis will be required to account for the carcinogenicity of so many diverse chemical agents. Predictive assays and risk assessments for the numerous types of non-genotoxic carcinogens will require an understanding of their mechanism of action, the reasons for target organ and species specificity, and the quantitative dose-response relationships between endpoints such as induced cell proliferation and carcinogenic potential.

14.
J. Ashby, Mutation Research, 1983, 115(2): 177-213
Some of the probable reasons underlying the observation that not all chemicals shown to be genotoxic in vitro are capable of eliciting tumours in rodents or humans are discussed using appropriate examples. It is suggested that a substantial proportion of the resources currently available for conducting rodent carcinogenicity bioassays should be employed in the short-term evaluation in vivo of some of the many hundreds of chemicals recently defined as genotoxic in vitro, rather than in the protracted evaluation of a few chemicals, often of unknown activity in vitro, for carcinogenicity. A decision tree approach to the evaluation of chemicals for human mutagenic/carcinogenic potential is presented which is at variance with the construction and philosophy of many of the current legislative guidelines. The immediate need for the adoption of one of the available short-term in vivo liver assays, and/or the development of a short-term in vivo rodent assay capable of concomitantly monitoring different genetic end-points in a range of organs or tissues is emphasized.

15.
One of the consequences of the low specificity of the in vitro mammalian cell genotoxicity assays reported in our previous paper [D. Kirkland, M. Aardema, L. Henderson, L. Muller, Evaluation of the ability of a battery of three in vitro genotoxicity tests to discriminate rodent carcinogens and non-carcinogens. I. Sensitivity, specificity and relative predictivity, Mutat. Res. 584 (2005) 1-256] is that industry and regulatory agencies must deal with a large number of false-positive results during the safety assessment of new chemicals and drugs. Addressing positive results from in vitro genotoxicity assays to determine which are "false" requires extensive resources, including the conduct of additional animal studies. In order to reduce animal usage, and to conserve industry and regulatory agency resources, we thought it was important to raise the question as to whether the protocol requirements for a valid in vitro assay, or the criteria for a positive result, could be changed in order to increase specificity without a significant loss in sensitivity of these tests. We therefore analysed in more detail some results of the mouse lymphoma assay (MLA) and the chromosomal aberration (CA) test obtained for rodent carcinogens and non-carcinogens. For a number of chemicals that are positive only in either of these mammalian cell tests (i.e. negative in the Ames test), there was no correlation between rodent carcinogenicity and level of toxicity (we could not analyse this for the CA test, as insufficient data were available in publications), magnitude of response, or lowest effective positive concentration. On the basis of very limited in vitro and in vivo data, we could also find no correlation between the above parameters and the formation of DNA adducts. Therefore, a change to the current criteria for the required level of toxicity in the MLA, to limit positive calls to certain magnitudes of response, or to certain concentration ranges, would not improve the specificity of the tests without significantly reducing their sensitivity. We also investigated a possible correlation between tumour profile (trans-species, trans-sex and multi-site versus single-species, single-sex and single-site) and pattern of genotoxicity results. Carcinogens showing the combination of a trans-species, trans-sex and multi-site tumour profile were much more prevalent (70% more) in the group of chemicals giving positive results in all three in vitro assays than amongst those giving all negative results. However, single-species, single-sex, single-site carcinogens were not very prevalent even amongst those chemicals giving three negative results in vitro. Surprisingly, when mixed positive and negative results were compared, multi-site carcinogens were highly prevalent amongst chemicals giving only a single positive result in the battery of three in vitro tests. Finally, we extended our relative predictivity (RP) calculations to combinations of positive and negative results in the genotoxicity battery. For two out of three tests positive, the RP for carcinogenicity was no higher than 1.0, and for two out of three tests negative, the RP for non-carcinogenicity was either zero (for Ames+MLA+MN) or 1.7 (for Ames+MLA+CA). Thus, all values were less than a meaningful RP of two, indicating that it is not possible to predict the outcome of a rodent carcinogenicity study when only two of the three genotoxicity results are in agreement.
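The abstract quotes relative predictivity (RP) values without reproducing the formula from the companion paper. As a rough, hedged sketch only (the exact RP definition used by Kirkland et al. may differ), RP is treated here as a ratio of correct to incorrect calls for a given battery outcome; the toy numbers are invented simply to show why an RP of at least 2 is regarded as meaningful:

```python
# Hypothetical sketch of a "relative predictivity"-style ratio: correct vs.
# incorrect calls for a given battery outcome. This is an illustrative reading,
# not necessarily the exact RP formula of Kirkland et al.; counts are invented.

def predictivity_ratio(correct_calls: int, incorrect_calls: int) -> float:
    """Ratio of correct to incorrect calls; values >= 2 are treated as meaningful."""
    return correct_calls / incorrect_calls if incorrect_calls else float("inf")

# Chemicals with two of three in vitro tests positive (invented counts):
carcinogens_called_positive = 30       # correct calls for carcinogenicity
non_carcinogens_called_positive = 30   # incorrect calls

rp = predictivity_ratio(carcinogens_called_positive, non_carcinogens_called_positive)
print(f"RP for carcinogenicity (2/3 positive) = {rp:.1f}")  # 1.0, below the threshold of 2
```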

16.
Carcinogenicity is one of the toxicological endpoints causing the highest concern. Moreover, the standard bioassays in rodents used to assess the carcinogenic potential of chemicals and drugs are extremely long and costly, and require the sacrifice of large numbers of animals. For these reasons, we have attempted to develop a global quantitative structure-activity relationship (QSAR) model for carcinogenic potential, using a data set of 1464 compounds (the Galvez data set, available from http://www.uv.es/-galvez/tablevi.pdf) that includes many marketed drugs. Though experimental toxicity testing using animal models is unavoidable for new drug candidates at an advanced stage of drug development, the developed global QSAR model can predict the carcinogenicity of new drug compounds in silico, providing a tool for the initial screening of candidate molecules that reduces animal testing, cost and time. Considering the large number of data points with diverse structural features used for model development (n(training) = 732) and model validation (n(test) = 732), the model developed in this study has an encouraging statistical quality (leave-one-out Q2 = 0.731, R2pred = 0.716). Our model suggests that higher lipophilicity values, conjugated ring systems, and thioketo and nitro groups contribute positively to drug carcinogenicity. In contrast, tertiary and secondary nitrogens, phenolic, enolic and carboxylic OH fragments, and the presence of three-membered rings reduce carcinogenicity. Branching, size and shape are found to be crucial factors for drug-induced carcinogenicity. One may consider all of these points to reduce the carcinogenic potential of candidate molecules.
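The abstract reports leave-one-out Q2 and R2pred without spelling out the formulas; the sketch below uses the conventional QSAR definitions (Q2 = 1 - PRESS/TSS and an external R2pred referenced to the training-set mean), which is an assumption about the variants the authors used, with invented numbers purely to show the calculation:

```python
# Standard leave-one-out Q^2 and external R^2_pred as commonly defined in QSAR
# validation. Whether the cited study used exactly these variants is assumed,
# not stated in the abstract; the example data are invented.
import numpy as np

def q2_loo(y_train: np.ndarray, y_loo_pred: np.ndarray) -> float:
    """Q^2 = 1 - PRESS / TSS, with predictions from leave-one-out cross-validation."""
    press = np.sum((y_train - y_loo_pred) ** 2)
    tss = np.sum((y_train - y_train.mean()) ** 2)
    return 1.0 - press / tss

def r2_pred(y_test: np.ndarray, y_test_pred: np.ndarray, y_train_mean: float) -> float:
    """External predictivity: test-set residuals relative to the training-set mean."""
    ss_res = np.sum((y_test - y_test_pred) ** 2)
    ss_tot = np.sum((y_test - y_train_mean) ** 2)
    return 1.0 - ss_res / ss_tot

# Tiny invented example, purely to show the calculation.
y_train = np.array([0.2, 0.8, 1.5, 2.1, 2.9])
y_loo   = np.array([0.4, 0.7, 1.6, 2.0, 2.6])   # leave-one-out predictions
y_test  = np.array([0.5, 1.9, 2.5])
y_hat   = np.array([0.6, 1.7, 2.7])             # external test-set predictions

print(f"Q2(LOO)  = {q2_loo(y_train, y_loo):.3f}")
print(f"R2(pred) = {r2_pred(y_test, y_hat, y_train.mean()):.3f}")
```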

17.
18.
19.
The market for biotherapeutic monoclonal antibodies (mAbs) is large and is growing rapidly. However, attrition poses a significant challenge for the development of mAbs, and for biopharmaceuticals in general, with large associated costs in resources and animal use. Termination of candidate mAbs may occur due to poor translation from preclinical models to human safety. It is critical that the industry addresses this problem to maintain productivity. Though attrition poses a significant challenge for pharmaceuticals in general, there are specific challenges related to the development of antibody-based products. Due to species specificity, non-human primates (NHP) are frequently the only pharmacologically relevant species for nonclinical safety and toxicology testing for the majority of antibody-based products, and therefore, as more mAbs are developed, increased NHP use is anticipated. The integration of new and emerging in vitro and in silico technologies, e.g. cell- and tissue-based approaches, systems pharmacology and modeling, has the potential to improve human safety prediction and the therapeutic mAb development process, while simultaneously reducing and refining animal use. In 2014, to engage in open discussion about the challenges and opportunities for the future of mAb development, a workshop was held with over 60 regulators and experts in drug development, mechanistic toxicology and emerging technologies. The workshop used industry case studies to discuss the value of the in vivo studies and to identify opportunities for in vitro technologies in human safety assessment. From these and continuing discussions, it is clear that there are opportunities to improve safety assessment in mAb development using non-animal technologies, potentially reducing future attrition, and there is a shared desire to reduce animal use through minimised study design and reduced numbers of studies.

20.
Dunson DB, Haseman JK, Biometrics, 1999, 55(3): 965-970
We describe a method for modeling carcinogenicity from animal studies where the data consist of counts of the number of tumors present over time. The research is motivated by applications to transgenic rodent studies, which have emerged as an alternative to chronic bioassays for screening possible carcinogens. In transgenic mouse studies, the endpoint of interest is frequently skin papilloma, with weekly examinations determining how many papillomas each animal has at a particular point in time. It is assumed that each animal has two unobservable latent variables at each time point. The first indicates whether or not the tumors are in a multiplying state and the second is the potential number of additional tumors if the tumors are in a multiplying state. The product of these variables follows a zero-inflated Poisson distribution, and the EM algorithm can be used to maximize the observed-data pseudo-likelihood, based on the latent variables. A generalized estimating equations robust variance estimator adjusts for dependency among outcomes within individual animals. The method is applied to testing for a dose-related trend in both tumor incidence and multiplicity in carcinogenicity studies.
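For reference, the generic zero-inflated Poisson (ZIP) probability mass function underlying the construction described above is shown below; the specific link functions and dose-trend parameterisation used by Dunson and Haseman are not given in the abstract, so only the generic form is stated.

```latex
% Generic zero-inflated Poisson (ZIP) probability mass function. Roughly,
% \pi plays the role of the probability that an animal's tumors are not in a
% multiplying state, and \lambda the expected number of additional tumors when
% they are; the dose-trend terms of Dunson and Haseman are not reproduced here.
\[
P(Y = 0) = \pi + (1-\pi)\,e^{-\lambda},
\qquad
P(Y = y) = (1-\pi)\,\frac{\lambda^{y} e^{-\lambda}}{y!}, \quad y = 1, 2, \ldots
\]
```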
