Similar Articles (20 results)
1.
The removal of carcinogenic factors would be the most efficient measure for preventing cancer. As far as known chemicals are concerned, every effort is made to avoid them, or at least to reduce exposure to them, but it is also necessary to detect unknown carcinogens, especially among chemicals, such as drugs and foodstuffs, to which large populations are exposed. Administering suspected chemicals to laboratory animals is the standard carcinogenicity test, but studies of the carcinogenicity of unknown chemicals in animals are time-consuming, expensive and cumbersome. This is why other means of establishing carcinogenicity are sought. Several rapid tests are available today for screening suspected carcinogens. These methods aim primarily at detecting, at the cell or tissue level, certain chemically induced changes that appear essential to triggering the carcinogenic process, such as somatic mutations. Use is made of studies of the mutagenicity of chemicals in Salmonella-type bacteria, in yeast and in cultured mammalian cells, together with the induction of recessive lethal mutations in Drosophila, unscheduled DNA repair synthesis, and the transformation of mammalian cells in vitro. Although there is a clear correlation between the activity of chemicals in such tests and their carcinogenicity, discrepancies are found. Thus, in vivo tests on laboratory animals remain the most reliable method for determining carcinogenicity. Although direct extrapolation of experimental data to human pathology is impossible, experimental evidence of the carcinogenicity of any chemical should allow us to draw constructive conclusions. Drugs that produce the expected results and cannot be replaced by other drugs will never be rejected, but we can and must reject drugs whose beneficial effects are not exceptional and which can be replaced by other chemicals. As for chemicals used as food additives and in cosmetics that are recognized as carcinogenic in animals, they should be abandoned altogether. Any decision made should be based on animal studies.

2.
Two-year rodent bioassays play a key role in the assessment of the carcinogenic potential of chemicals to humans. The seventh amendment to the European Cosmetics Directive will ban, in 2013, the marketing of cosmetic and personal care products that contain ingredients that have been tested in animal models, so 2-year rodent bioassays will not be available for cosmetics and personal care products. Furthermore, for large testing programs like REACH, in vivo carcinogenicity testing is impractical. Alternative approaches to carcinogenicity assessment are therefore urgently required. In terms of standardization and validation, the most advanced in vitro tests for carcinogenicity are the cell transformation assays (CTAs). Although CTAs do not mimic the whole in vivo carcinogenesis process, they are a valuable aid in identifying the transforming potential of chemicals. CTAs have been shown to detect genotoxic as well as non-genotoxic carcinogens and are helpful in determining thresholds for both. The extensive review of CTAs by the OECD (OECD (2007) Environmental Health and Safety Publications, Series on Testing and Assessment, No. 31) and the proven within- and between-laboratory reproducibility of the SHE CTAs justify broader use of these methods to assess the carcinogenic potential of chemicals.

3.
Carcinogenicity is one of the toxicological endpoints causing the greatest concern, and the standard rodent bioassays used to assess the carcinogenic potential of chemicals and drugs are extremely long and costly and require the sacrifice of large numbers of animals. For these reasons, we have developed a global quantitative structure-activity relationship (QSAR) model for carcinogenic potential using a data set of 1464 compounds (the Galvez data set, available from http://www.uv.es/-galvez/tablevi.pdf), which includes many marketed drugs. Although experimental toxicity testing in animal models remains unavoidable for new drug candidates at an advanced stage of development, the global QSAR model can predict the carcinogenicity of new compounds in silico, providing a tool for the initial screening of drug candidate molecules that reduces animal testing, cost and time. Considering the large number of structurally diverse data points used for model development (n(training) = 732) and model validation (n(test) = 732), the model has encouraging statistical quality (leave-one-out Q2 = 0.731, R2pred = 0.716). The model suggests that higher lipophilicity, conjugated ring systems, and thioketo and nitro groups contribute positively to drug carcinogenicity. Conversely, tertiary and secondary nitrogens, phenolic, enolic and carboxylic OH fragments, and three-membered rings reduce carcinogenicity. Branching, size and shape are also crucial factors for drug-induced carcinogenicity. All of these points may be considered when seeking to reduce the carcinogenic potential of a molecule.
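The two validation statistics quoted above have standard definitions in the QSAR literature: Q2 from leave-one-out cross-validation is based on the PRESS statistic over the training set, while R2pred scales the test-set residuals by the deviation of the test observations from the training-set mean. The sketch below shows how both are typically computed; it assumes a simple linear descriptor model and hypothetical numpy arrays, not the Galvez data set itself.

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out Q^2 for a linear QSAR model, via the PRESS statistic."""
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        # Refit ordinary least squares with compound i held out.
        design = np.column_stack([np.ones(n - 1), X[mask]])
        coef, *_ = np.linalg.lstsq(design, y[mask], rcond=None)
        pred = np.r_[1.0, X[i]] @ coef
        press += (y[i] - pred) ** 2
    return 1.0 - press / np.sum((y - y.mean()) ** 2)

def r2_pred(y_train, y_test, y_test_pred):
    """External R^2: test-set residuals scaled against the training-set mean."""
    ss_res = np.sum((y_test - y_test_pred) ** 2)
    ss_tot = np.sum((y_test - y_train.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```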

4.
At present, no mammalian test system that meets toxicological requirements is available for the routine testing of mutagenicity. Therefore, emphasis should be laid primarily on basic research in this area, and not on large-scale screening of possible mutagens with methods known to be inadequate in many respects, if indeed mutagenicity is a major hazard to man at all (a view certainly not shared by all toxicologists). Furthermore, if carcinogenicity is based on a mutagenic event occurring in somatic cells, the well-established tests for carcinogenicity would provide a better way of evaluating irreversible somatic mutations than the tests now suggested for mutagenicity testing. In the present situation, a drastic reduction of the noxious agents to which people are exposed would be the most reliable means of preventing a toxicological disaster. We are still continuously performing “mass human experiments” and detecting hazards only after considerable harm has been done. Consequently, the goal must be neither to expose a considerable proportion of our population to environmental hazards nor to give drugs to thousands or even millions of healthy people for any reason whatsoever, unless test systems are available that would allow the effective prevention of disaster.

5.
Cluster analysis can be a useful tool for exploratory data analysis, uncovering natural groupings in data and prompting new ideas and hypotheses about such groupings. When applied to short-term assay results, it provides improved estimates of the sensitivity and specificity of assays, indicates associations between assays (and, in turn, which assays can be substituted for one another in a battery), and allows a data base containing test results on chemicals of unknown carcinogenicity to be linked to a data base for which animal carcinogenicity data are available. Cluster analysis was applied to the Gene-Tox data base, which contains short-term test results on chemicals of both known and unknown carcinogenicity. The results for chemicals of known carcinogenicity differed from those obtained when the entire data base was analyzed, suggesting that associations (and possibly sensitivities and specificities) based only on chemicals of known carcinogenicity may not be representative of the true measures; cluster analysis applied to the total data base should be useful in improving these estimates. Many of the associations between assays found through cluster analysis could be 'validated' from previous knowledge of the mechanistic basis of the various tests, but some were unsuspected and may reflect a non-ideal data base. As additional data become available and new clustering techniques for handling non-ideal data bases are developed, results from such analyses could play an increasing role in strengthening prediction schemes that use short-term test results to screen chemicals for carcinogenicity, such as the carcinogenicity prediction and battery selection (CPBS) method (Chankong et al., 1985).
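As a minimal illustration of the approach, assays can be clustered by how often they agree across a panel of chemicals. The sketch below applies hierarchical clustering from scipy to a hypothetical binary results matrix; the assay labels and data are placeholders, not the Gene-Tox data base.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Hypothetical 0/1 matrix: rows = chemicals, columns = short-term assays.
rng = np.random.default_rng(0)
results = rng.integers(0, 2, size=(60, 6)).astype(bool)
assays = ["Ames", "SCE", "UDS", "CA", "MLA", "DLM"]  # illustrative labels

# Distance between two assays = Jaccard dissimilarity of their responses.
dist = pdist(results.T, metric="jaccard")
tree = linkage(dist, method="average")
clusters = fcluster(tree, t=0.4, criterion="distance")
for name, cluster_id in zip(assays, clusters):
    print(f"{name}: cluster {cluster_id}")
```

Assays falling in the same cluster respond similarly across chemicals and are candidates for substitution within a battery.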

6.
7.
The ability of plant genotoxicity assays to predict carcinogenicity
A number of assays have been developed that use higher plants to measure the mutagenic or cytogenetic effects of chemicals as an indication of carcinogenicity. Plant assays require less extensive equipment, materials and personnel than most other genotoxicity tests, a potential advantage, particularly in less developed parts of the world. We have analyzed data on 9 plant genotoxicity assays evaluated by the Gene-Tox program of the U.S. Environmental Protection Agency, using methodologies we recently developed to assess the capability of assays to predict carcinogenicity and carcinogenic potency. All 9 plant assays appear to have high sensitivity (few false negatives). Specificity (the rate of true negatives) was more difficult to evaluate because of limited testing of non-carcinogens; however, the available data indicate that only the Arabidopsis mutagenicity (ArM) test appears to have high specificity. Given their high sensitivity, plant genotoxicity tests are most appropriate for a risk-averse testing program: although many false positives will be generated, the relatively few negative results will be quite reliable.

8.
A method is presented for classifying chemicals with respect to carcinogenic potential on the basis of short-term test results. The method uses the logistic regression model to translate results from short-term toxicity assays into predictions of the likelihood that a chemical will prove carcinogenic if tested in a long-term bioassay. The proposed method differs from previous approaches in two ways. First, statistical confidence limits on the probabilities of cancer, rather than central estimates of those probabilities, are used for classification. Second, the method does not classify every chemical in a data base with respect to carcinogenic potential; instead, it identifies the chemicals with the highest and lowest likelihood of testing positive for carcinogenicity in the bioassay, while a subset of chemicals with intermediate likelihood remains unclassified and will require further testing, perhaps in a long-term bioassay. Two data bases of binary short-term and long-term test results from the literature are used to illustrate and evaluate the proposed procedure. A cross-validation analysis of one of the data sets suggests that, for a sufficiently rich data base of chemicals, the development of a robust predictive system to replace the bioassay for some unknown chemicals is a realistic goal.
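A minimal sketch of this kind of three-zone classifier is shown below, assuming hypothetical numpy arrays of binary assay results. It fits a logistic model with statsmodels, places Wald confidence limits on each predicted probability, and labels a chemical only when the whole interval falls above or below the chosen cut-offs; everything else stays unclassified. The thresholds (0.2 and 0.8) are illustrative, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def classify(short_term, carcinogenic, new_chemicals, lo=0.2, hi=0.8, alpha=0.05):
    """Label chemicals likely positive/negative/unclassified using
    confidence limits on logistic-regression probabilities."""
    X = sm.add_constant(short_term)
    fit = sm.Logit(carcinogenic, X).fit(disp=0)
    Xn = sm.add_constant(new_chemicals, has_constant="add")
    eta = Xn @ fit.params                                  # linear predictor
    # Standard error of the linear predictor (row-wise quadratic form).
    se = np.sqrt(np.einsum("ij,jk,ik->i", Xn, fit.cov_params(), Xn))
    z = norm.ppf(1 - alpha / 2)
    lower = 1 / (1 + np.exp(-(eta - z * se)))              # CI on probability
    upper = 1 / (1 + np.exp(-(eta + z * se)))
    return np.where(lower > hi, "likely carcinogen",
           np.where(upper < lo, "likely non-carcinogen", "unclassified"))
```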

9.
There is a great deal of current interest in the use of commercial, automated programs for the prediction of mutagenicity and carcinogenicity from chemical structure. However, the goal of accurate and reliable toxicity prediction for any chemical, based solely on structural information, remains elusive. The toxicity prediction challenge is global in its objective but limited in its solution to local domains of chemicals acting according to similar mechanisms in the biological system; to predict, we must be able to generalize based on chemical structure, but the biology fundamentally limits our ability to do so. The available commercial systems for mutagenicity and/or carcinogenicity prediction differ in their specifics, yet most fall into two major categories: (1) automated approaches that rely on statistics to extract correlations between structure and activity; and (2) knowledge-based expert systems that rely on a set of programmed rules distilled from available knowledge and human expert judgement. These two categories differ in the ways they represent, process and generalize chemical-biological activity information. An application of four commercial systems (TOPKAT, CASE/MULTI-CASE, DEREK and OncoLogic) to mutagenicity and carcinogenicity prediction for a particular class of chemicals, the haloacetic acids (HAs), is presented to highlight these differences. Some discussion is devoted to gauging the relative performance of commercial prediction systems, and to the role of prospective prediction exercises in this effort. Finally, an alternative approach that stops short of delivering a prediction to the user, involving structure searching and data base exploration, is briefly considered.

10.
J Ashby. Mutation Research, 1983, 115(2): 177-213
Some of the probable reasons why not all chemicals shown to be genotoxic in vitro are capable of eliciting tumours in rodents or humans are discussed using appropriate examples. It is suggested that a substantial proportion of the resources currently available for conducting rodent carcinogenicity bioassays should instead be employed in the short-term in vivo evaluation of some of the many hundreds of chemicals recently defined as genotoxic in vitro, rather than in the protracted evaluation for carcinogenicity of a few chemicals, often of unknown in vitro activity. A decision-tree approach to the evaluation of chemicals for human mutagenic/carcinogenic potential is presented, one at variance with the construction and philosophy of many of the current legislative guidelines. The immediate need to adopt one of the available short-term in vivo liver assays, and/or to develop a short-term in vivo rodent assay capable of concomitantly monitoring different genetic endpoints in a range of organs and tissues, is emphasized.

11.
Short-term testing--are we looking at wrong endpoints?
C Ramel. Mutation Research, 1988, 205(1-4): 13-24
Short-term testing has been performed and interpreted on the basis of the correlation between these tests and animal carcinogenicity. This empirical approach has been the only feasible one, owing to a lack of knowledge of the genetic endpoints actually relevant to carcinogenicity. However, the rapidly growing information on the genetic alterations actually involved in carcinogenicity, in particular the activation of oncogenes, provides facts of basic importance for the strategy of short-term testing. The sets of short-term tests presently used focus on standard genetic endpoints, mainly point mutations and chromosomal aberrations. Little attention has been paid to other endpoints that have been shown, or are suspected, to play an important role in carcinogenicity, including gene amplification, transpositions, hypomethylation, polygene mutations and recombinogenic effects. Furthermore, indirect effects, for instance via radical generation or an imbalance of the nucleotide pool, may be of great significance for the carcinogenic and cocarcinogenic effects of many chemicals. Modern genetic and molecular technology has opened entirely new prospects for identifying genetic alterations in tumours, and these prospects should in turn be exploited to build more sophisticated batteries of assays, adapted to the genetic endpoints actually demonstrated to be involved in cancer induction. The development of new assay systems in step with the elucidation of genetic alterations in carcinogenicity will probably constitute one of the most important areas of genetic toxicology in the future. From a regulatory point of view, the prerequisite for development in this direction will be flexibility in the handling of questions concerning short-term testing, including at the bureaucratic level.

12.
In its White Paper, "Strategy for a Future Chemicals Policy," published in 2001, the European Commission (EC) proposed the REACH (Registration, Evaluation and Authorisation of CHemicals) system to deal with both existing and new chemical substances. This system is based on a top-down approach to toxicity testing, in which the degree of toxicity information required is dictated primarily by production volume (tonnage). If testing is to be based on traditional methods, very large numbers of laboratory animals could be needed in response to the REACH system, causing ethical, scientific and logistical problems that would be incompatible with the time-schedule envisaged for testing. The EC has emphasised the need to minimise animal use, but has failed to produce a comprehensive strategy for doing so. The present document provides an overall scheme for predictive toxicity testing, whereby the non-animal methods identified and discussed in a recent and comprehensive ECVAM document, could be used in a tiered approach to provide a rapid and scientifically justified basis for the risk assessment of chemicals for their toxic effects in humans. The scheme starts with a preliminary risk assessment process (involving available information on hazard and exposure), followed by testing, based on physicochemical properties and (Q)SAR approaches. (Q)SAR analyses are used in conjunction with expert system and biokinetic modelling, and information on metabolism and identification of the principal metabolites in humans. The resulting information is then combined with production levels and patterns of use to assess potential human exposure. The nature and extent of any further testing should be based strictly on the need to fill essential information gaps in order to generate adequate risk assessments, and should rely on non-animal methods, as far as possible. The scheme also includes a feedback loop, so that new information is used to improve the predictivity of computational expert systems. Several recommendations are made, the most important of which is that the European Union (EU) should actively promote the improvement and validation of (Q)SAR models and expert systems, and computer-based methods for biokinetic modelling, since these offer the most realistic and most economical solution to the need to test large numbers of chemicals.  相似文献   

13.
14.
Two procedures for predicting the carcinogenicity of chemicals are described. The first (CASE) is a self-learning artificial intelligence system that automatically recognizes activating and/or deactivating structural subunits of candidate chemicals and uses them to determine the probability that the test chemical is or is not a carcinogen. If the chemical is predicted to be a carcinogen, CASE also projects its probable potency.

The second procedure (CPBS) uses Bayesian decision theory to predict the potential carcinogenicity of chemicals from the results of batteries of short-term assays. CPBS is useful even when the test results are mixed (i.e. both positive and negative responses are obtained in different genotoxicity assays). It can also be used to identify highly predictive, as well as cost-effective, batteries of assays.

For illustrative purposes, the ability of CASE and CPBS to predict the carcinogenicity of a carcinogenic and a non-carcinogenic polycyclic aromatic hydrocarbon is shown, and the potential for using the two methods in tandem to increase reliability and decrease cost is presented.
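The Bayesian core of a CPBS-style calculation can be sketched in a few lines. Assuming each assay is characterized by a sensitivity and a specificity, and that results are conditionally independent given the true class (the naive form often used to illustrate battery-based prediction; the published method is more elaborate), the posterior probability of carcinogenicity for a mixed battery result is:

```python
def battery_posterior(results, sensitivity, specificity, prior=0.5):
    """P(carcinogen | battery results) via Bayes' theorem, assuming the
    assays are conditionally independent given the true class."""
    p_carc, p_non = prior, 1.0 - prior
    for positive, sens, spec in zip(results, sensitivity, specificity):
        p_carc *= sens if positive else (1.0 - sens)
        p_non *= (1.0 - spec) if positive else spec
    return p_carc / (p_carc + p_non)

# Mixed battery: two positives, one negative (all numbers illustrative).
print(battery_posterior([True, False, True],
                        sensitivity=[0.85, 0.70, 0.65],
                        specificity=[0.75, 0.60, 0.80],
                        prior=0.10))
```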


15.
The regulation of human exposure to potentially carcinogenic chemicals constitutes society's most important use of animal carcinogenicity data. The environmental contaminants of greatest concern within the USA are listed in the Environmental Protection Agency's (EPA's) Integrated Risk Information System (IRIS) chemicals database. However, of the 160 IRIS chemicals lacking even limited human exposure data but possessing animal data that had received a human carcinogenicity assessment by 1 January 2004, we found that in most cases (58.1%; 93/160) the EPA considered the animal carcinogenicity data inadequate to support a classification of probable human carcinogen or non-carcinogen. For the 128 chemicals with human or animal data also assessed by the World Health Organisation's International Agency for Research on Cancer (IARC), human carcinogenicity classifications were compatible with EPA classifications only for the 17 chemicals having at least limited human data (p = 0.5896). For the 111 chemicals primarily reliant on animal data, the EPA was much more likely than the IARC to assign carcinogenicity classifications indicative of greater human risk (p < 0.0001). The IARC is a leading international authority on carcinogenicity assessments, and its significantly different human carcinogenicity classifications of identical chemicals indicate that: 1) in the absence of significant human data, the EPA is over-reliant on animal carcinogenicity data; 2) as a result, the EPA tends to over-predict carcinogenic risk; and 3) the true predictivity of animal data for human carcinogenicity is even poorer than is indicated by EPA figures alone. The EPA policy of erroneously assuming that tumours in animals are indicative of human carcinogenicity is implicated as a primary cause of these errors.
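Agreement between two agencies' classifications of the same chemicals is typically tested on a 2x2 contingency table, for example with Fisher's exact test. The sketch below shows the mechanics only; the counts are placeholders, since the study's actual tables are not reproduced in this abstract.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = EPA assigns higher-risk class (yes/no),
# columns = IARC assigns higher-risk class (yes/no), for the same chemicals.
table = [[40, 45],   # placeholder counts, not the study's data
         [5, 21]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
```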

16.
D Clive. Mutation Research, 1988, 205(1-4): 313-330
The present analysis examines the assumptions in, the perceptions and predictivity of, and the need for short-term tests (STTs) for genotoxicity, in light of recent findings that most noncarcinogens from the National Toxicology Program are genotoxic (i.e., positive in one or more in vitro STTs). Reasonable assumptions about the prevalence of carcinogens (1-10% of all chemicals), the sensitivity of these STTs (ca. 90% of all carcinogens are genotoxic) and their estimated "false positive" incidence (60-75%) imply that the majority of chemicals elicit genotoxic responses and, consequently, that most in vitro genotoxins are likely to be noncarcinogenic. Thus, either the usual treatment conditions used in these in vitro STTs produce a large proportion of artifactual and meaningless positive results, or else in vitro mutagenicity is too common a property of chemicals to serve as a useful predictor of carcinogenicity or other human risk. In contrast, the limited data base on in vivo STTs suggests that the current versions of these assays may have low sensitivity, which appears unlikely to improve without dropping either their 'short-term' aspect or the rodent carcinogenicity benchmark. It is suggested that in vivo genotoxicity protocols be modified to take into consideration both the fundamentals of toxicology and the lessons learned from in vitro genetic toxicology. In the meantime, while in vivo assays undergo rigorous validation, genetic toxicology as currently practiced should not be a formal aspect of chemical or drug development, on the grounds that it is incapable of providing realistic and reliable information on human risk. It is urged that data generated in new, unvalidated in vivo genotoxicity assays be exempted from the normal regulatory reporting requirements, in order to encourage industry to participate in the laborious and expensive development of this next phase of genetic toxicology.
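Clive's prevalence argument is a direct application of Bayes' theorem, and the figures quoted in the abstract are enough to reproduce it. The sketch below computes the positive predictive value of an in vitro STT under those stated assumptions (the exact values plugged in are a judgement call within the quoted ranges):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(carcinogen | positive STT result) by Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Ranges quoted in the abstract: prevalence 1-10%, sensitivity ~90%,
# "false positive" incidence 60-75%.
for prevalence in (0.01, 0.10):
    for fpr in (0.60, 0.75):
        ppv = positive_predictive_value(prevalence, 0.90, fpr)
        print(f"prevalence {prevalence:.0%}, FP rate {fpr:.0%}: PPV {ppv:.1%}")
```

Even at the most favourable corner of these ranges the PPV stays below 15%, which is the abstract's point: most in vitro genotoxins are likely to be noncarcinogenic.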

17.
Doses of chemicals that induce hepatocellular necrosis usually induce hepatic tumours if the dosing is frequent and is maintained for long periods; such necrosis is usually evident within 48 h of the first administration. Similarly, chemicals that lead to marked proliferation of peroxisomes in the liver also usually induce hepatic tumours on protracted regular dosing. For both of these phenomena, failure to produce a certain level of effect, or to maintain it for a sufficiently long period, can result in the observation of a non-carcinogenic response. The exact dose/time requirements for carcinogenicity have not been defined and may be species-, strain- and sex-specific. Some chemicals induce liver enlargement and mitogenesis in the absence of overt hepatotoxic effects. The early phase of hepatomegaly is associated with mitogenic effects that can be measured as cells in S-phase within the first few days of administration; the later stages appear to be associated more with cellular hypertrophy. Both effects appear to be threshold-related. Furthermore, sustained hepatomegaly is associated with proliferation of the smooth endoplasmic reticulum (SER) and the induction of a range of liver enzymes. These changes (mitogenesis, hepatomegaly, enzyme induction) are, in isolation, less definitive indicators of carcinogenicity, but they occur for a sufficient number of liver-specific carcinogens that their role as early indicators is worthy of further study. The major requirement for all possible early markers of hepatocarcinogenicity is to establish the dose and time dependence of these changes in relation to the eventual appearance of tumours. Finally, the specificity of all these markers requires evaluation through the study of appropriate non-carcinogens.

18.
There has been a recent resurgence of interest in the use of cell transformation for predicting carcinogenicity, based mainly on rodent carcinogenicity data. In view of this renewed interest, this paper critically reviews the published literature concerning the ability of the available assays to detect IARC Group 1 agents (known human carcinogens) and Group 2A agents (probable human carcinogens). The predictivity of the available assays for human and rodent non-genotoxic carcinogens (NGCs), in comparison with standard and supplementary in vitro and in vivo genotoxicity tests, is also discussed. The principal finding is that a surprising number of human carcinogens have not been tested for cell transformation across the three main assays (SHE, Balb/c 3T3 and C3H10T1/2), confounding comparative assessment of these methods for detecting human carcinogens. This issue is not being addressed in the ongoing validation studies for the first two of these assays, despite the lack of any serious logistical obstacle to the use of most of these chemicals. In addition, there seem to be no plans to use exogenous biotransformation systems for the metabolic activation of pro-carcinogens, as recommended at an ECVAM workshop held in 1999. To address these important issues, it is strongly recommended that consideration be given to including more human carcinogens, and an exogenous source of xenobiotic metabolism such as an S9 fraction, in ongoing and future validation studies. While cell transformation systems detect a high proportion of NGCs, it is considered premature to rely on this endpoint alone for screening for such chemicals, as recently suggested, particularly since there is still doubt about the relevance of morphological transformation to tumorigenesis in vivo, and since NGCs are known to act by a wide diversity of potential mechanisms. Recent progress towards more objective scoring of the transformed phenotype, and the prospects for developing human cell-based transformation assays, are reviewed.

19.
In a series of papers, Ames and colleagues allege that the scientific and public health communities have perpetuated a series of 'misconceptions' that have resulted in the inaccurate identification of chemicals posing potential human cancer risks, and in misguided cancer prevention strategies and regulatory policies. They conclude that exposures to industrial and synthetic chemicals represent negligible cancer risks and that animal studies have little or no scientific value for assessing human risks. Their conclusions are based on flawed and untested assumptions. For instance, they claim that synthetic residues on food can be ignored because 99.99% of the pesticides humans eat are natural, that chemicals in plants are pesticides, and that their potential to cause cancer equals that of synthetic pesticides. Similarly, Ames offers no convincing scientific evidence to justify discrediting bioassays for identifying human carcinogens. Ironically, their arguments center on a ranking procedure that relies on the same experimental data and extrapolation methods they criticize as unreliable for evaluating cancer risks. We address their inconsistencies and flaws, and present scientific facts and our perspectives on Ames' nine alleged misconceptions. Our conclusions agree with those of the International Agency for Research on Cancer, the National Toxicology Program and other respected scientific organizations: in the absence of human data, animal studies are the most definitive means of assessing human cancer risks. Animal data should not be ignored, and precautions should be taken to lessen human exposures. Dismissing animal carcinogenicity findings would leave human cancer cases as the only means of demonstrating the carcinogenicity of environmental agents; this is unacceptable public health policy.

20.
Differences between the results of numerical validation studies comparing in vitro and in vivo genotoxicity tests with the rodent cancer bioassay are leading to the perception that short-term tests predict carcinogenicity only with uncertainty. Consideration of factors such as the pharmacokinetic distribution of chemicals, the systems available for metabolic activation and detoxification, the ability of the active metabolite to move from its site of production to the target DNA, and the potential for expression of the induced lesions strongly suggests that the disparate sensitivities of the different test systems are a major reason why numerical validation has not been more successful. Furthermore, genotoxicity tests should be expected to detect only a subset of carcinogens, namely genotoxic carcinogens, rather than those that appear to act by non-genetic mechanisms. Instead of relying primarily on short-term in vitro genotoxicity tests to predict carcinogenic activity, these tests should be used in a manner that emphasizes the accurate determination of mutagenicity or clastogenicity; whether the mutagenic activity is further expressed as carcinogenicity must then be determined in appropriate studies using test animals. The prospect of quantitative extrapolation of in vitro or in vivo genotoxicity test results to carcinogenicity requires a much more precise understanding of the critical molecular events in both processes.
