Similar documents: 20 results found (search time: 15 ms)
1.
Phylogenetic comparative methods use tree topology, branch lengths, and models of phenotypic change to take into account nonindependence in statistical analysis. However, these methods normally assume that trees and models are known without error. Approaches relying on evolutionary regimes also assume specific distributions of character states across a tree, which often result from ancestral state reconstructions that are subject to uncertainty. Several methods have been proposed to deal with some of these sources of uncertainty, but approaches accounting for all of them are less common. Here, we show how Bayesian statistics facilitates this task while relaxing the homogeneous rate assumption of the well-known phylogenetic generalized least squares (PGLS) framework. This Bayesian formulation allows uncertainty about phylogeny, evolutionary regimes, or other statistical parameters to be taken into account for studies as simple as testing for coevolution in two traits or as complex as testing whether bursts of phenotypic change are associated with evolutionary shifts in intertrait correlations. A mixture of validation approaches indicates that the approach has good inferential properties and predictive performance. We provide suggestions for implementation and show its usefulness by exploring the coevolution of ankle posture and forefoot proportions in Carnivora.
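The deterministic core of the PGLS framework discussed above is a generalized least squares fit with a phylogenetically structured covariance matrix; the Bayesian formulation additionally integrates over uncertainty in that matrix. A minimal non-Bayesian sketch, with a made-up three-taxon covariance matrix and trait values (illustrative numbers only, not from the paper):

```python
import numpy as np

# Toy phylogenetic covariance for 3 taxa. Under a Brownian-motion model the
# off-diagonal entries would be shared branch lengths; these are made-up values.
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.2],
              [0.2, 0.2, 1.0]])

x = np.array([0.5, 0.7, 2.0])              # hypothetical predictor trait
y = np.array([1.1, 1.4, 3.9])              # hypothetical response trait
X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

# PGLS point estimate: beta = (X' C^-1 X)^-1 X' C^-1 y
Cinv = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ Cinv @ X, X.T @ Cinv @ y)
print(beta)
```

The Bayesian version would place priors on the regression coefficients and on the quantities that determine C (tree, branch lengths, rate regimes) and sample them jointly instead of plugging in a single C.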

2.
We examined the effects of spatial frequency similarity and dissimilarity on human contour integration under various conditions of uncertainty. Participants performed a temporal 2AFC contour detection task. Spatial frequency jitter of up to 3.0 octaves was applied either to background elements, to contour and background elements, or to neither. Results converge on four major findings. (1) Contours defined by spatial frequency similarity alone are only scarcely visible, suggesting the absence of specialized cortical routines for shape detection based on spatial frequency similarity. (2) When orientation collinearity and spatial frequency similarity are combined along a contour, performance amplifies far beyond probability summation when compared to the fully heterogeneous condition, but only to a margin compatible with probability summation when compared to the fully homogeneous case. (3) Psychometric functions are steeper but not shifted for homogeneous contours in heterogeneous backgrounds, indicating an advantageous signal-to-noise ratio. The additional similarity cue therefore does not so much improve contour detection performance as reduce observer uncertainty about whether a potential candidate is a contour or just a false positive. (4) Contour integration is a broadband mechanism which is only moderately impaired by spatial frequency dissimilarity.

3.
Bayesian inference is becoming a common statistical approach to phylogenetic estimation because, among other reasons, it allows for rapid analysis of large data sets with complex evolutionary models. Conveniently, Bayesian phylogenetic methods use currently available stochastic models of sequence evolution. However, as with other model-based approaches, the results of Bayesian inference are conditional on the assumed model of evolution: inadequate models (models that poorly fit the data) may result in erroneous inferences. In this article, I present a Bayesian phylogenetic method that evaluates the adequacy of evolutionary models using posterior predictive distributions. By evaluating a model's posterior predictive performance, an adequate model can be selected for a Bayesian phylogenetic study. Although I present a single test statistic that assesses the overall (global) performance of a phylogenetic model, a variety of test statistics can be tailored to evaluate specific features (local performance) of evolutionary models to identify sources of failure. The method presented here, unlike the likelihood-ratio test and parametric bootstrap, accounts for uncertainty in the phylogeny and model parameters.
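The posterior predictive logic described above can be illustrated outside phylogenetics with a toy model: draw parameters from the posterior, simulate replicate data sets, and compare a test statistic between replicates and the observed data. A sketch assuming a Normal(mu, 1) model with a flat prior (a stand-in for the paper's sequence-evolution models):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)     # observed data (toy)

# Conjugate posterior for the mean of a Normal(mu, 1) model with a flat prior:
# mu | data ~ Normal(mean(data), 1/n)
n = data.size
post_mean, post_sd = data.mean(), 1.0 / np.sqrt(n)

# Posterior predictive check: simulate replicate data sets from the fitted
# model and compare a test statistic T (here, the sample variance) against
# its value in the observed data.
T_obs = data.var()
T_rep = []
for _ in range(2000):
    mu = rng.normal(post_mean, post_sd)   # draw from the posterior
    rep = rng.normal(mu, 1.0, size=n)     # draw a replicate data set
    T_rep.append(rep.var())

# Posterior predictive p-value: values near 0 or 1 flag model misfit.
ppp = np.mean(np.array(T_rep) >= T_obs)
print(round(float(ppp), 3))
```

In the phylogenetic setting, the replicates are simulated alignments and T is a statistic sensitive to the model feature being checked, but the accept/flag logic is the same.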

4.
COPI (coat protein I)-coated vesicles are implicated in various transport steps within the early secretory pathway. The major structural component of the COPI coat is the heptameric complex coatomer (CM). Recently, four isoforms of CM were discovered that may help explain the various transport steps in which the complex has been reported to be involved. Biochemical studies of COPI vesicles currently use CM purified from animal tissue or cultured cells, a mixture of the isoforms, impeding functional and structural studies of individual complexes. Here we report the cloning of all CM subunits, including their isoforms, into single baculoviruses, and their combination for the expression of heptameric CM isoforms in insect cells. We show that all four isoforms of recombinant CM are fully functional in an in vitro COPI vesicle biogenesis assay. These novel tools enable functional and structural studies of CM isoforms and their subcomplexes, and allow the study of CM mutants.

5.
Developments in whole genome biotechnology have stimulated statistical focus on prediction methods. We review here methodology for classifying patients into survival risk groups and for using cross-validation to evaluate such classifications. Measures of discrimination for survival risk models include separation of survival curves, time-dependent ROC curves and Harrell's concordance index. For high-dimensional data applications, however, computing these measures as re-substitution statistics on the same data used for model development results in highly biased estimates. Most developments in methodology for survival risk modeling with high-dimensional data have utilized separate test data sets for model evaluation. Cross-validation has sometimes been used for optimization of tuning parameters. In many applications, however, the data available are too limited for effective division into training and test sets, and consequently authors have often either reported re-substitution statistics or analyzed their data using binary classification methods in order to utilize familiar cross-validation. In this article we indicate how to utilize cross-validation for the evaluation of survival risk models: specifically, how to compute cross-validated estimates of survival distributions for predicted risk groups and how to compute cross-validated time-dependent ROC curves. We also discuss evaluation of the statistical significance of a survival risk model and evaluation of whether high-dimensional genomic data add predictive accuracy to a model based on standard covariates alone.
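The contrast between re-substitution and cross-validated evaluation can be sketched with Harrell's concordance index on toy, uncensored survival data (real applications must handle censoring, and the least-squares model on log-time here is illustrative only):

```python
import numpy as np

def concordance(time, risk):
    """Harrell's C for uncensored data: fraction of usable pairs where the
    subject with shorter survival has the higher predicted risk."""
    conc = usable = 0
    n = len(time)
    for i in range(n):
        for j in range(i + 1, n):
            if time[i] == time[j]:
                continue
            usable += 1
            short, long_ = (i, j) if time[i] < time[j] else (j, i)
            conc += risk[short] > risk[long_]
    return conc / usable

rng = np.random.default_rng(1)
n, p = 60, 10
X = rng.normal(size=(n, p))
# survival time depends on the first covariate only (toy, uncensored)
time = np.exp(-X[:, 0] + 0.5 * rng.normal(size=n))

# Cross-validated risk scores: fit least squares on log-time in the training
# folds, predict risk (= negative predicted log-time) on the held-out fold.
folds = np.array_split(rng.permutation(n), 5)
risk_cv = np.empty(n)
for test_idx in folds:
    train = np.setdiff1d(np.arange(n), test_idx)
    beta, *_ = np.linalg.lstsq(X[train], np.log(time[train]), rcond=None)
    risk_cv[test_idx] = -X[test_idx] @ beta

# Re-substitution risk scores (fit and evaluate on the same data)
beta_all, *_ = np.linalg.lstsq(X, np.log(time), rcond=None)
risk_resub = -X @ beta_all

print(round(concordance(time, risk_resub), 3), round(concordance(time, risk_cv), 3))
```

With many noise covariates, the re-substitution C-index is typically optimistic relative to the cross-validated one, which is the bias the article warns about.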

6.
Nonlinear mixed effects models allow investigating individual differences in drug concentration profiles (pharmacokinetics) and responses. Pharmacogenetics focuses on the genetic component of this variability. Two tests often used to detect a gene effect on a pharmacokinetic parameter are (1) the Wald test, assessing whether estimates of the gene effect are significantly different from 0, and (2) the likelihood ratio test, comparing models with and without the genetic effect. Because these asymptotic tests show inflated type I error with small sample sizes and/or unevenly distributed genotypes, we develop two alternatives and evaluate them by means of a simulation study. First, we assess the performance of the permutation test using the Wald and the likelihood ratio statistics. Second, for the Wald test we propose the use of the F-distribution with four different values for the denominator degrees of freedom. We also explore the influence of the estimation algorithm, using both the first-order conditional estimation with interaction linearization-based algorithm and the stochastic approximation expectation maximization algorithm. We apply these methods to the analysis of the pharmacogenetics of indinavir in HIV patients recruited in the COPHAR2-ANRS 111 trial. Results of the simulation study show that the permutation test seems appropriate, but at the cost of an additional computational burden. One of the four F-distribution-based approaches provides a correct type I error estimate for the Wald test and should be further investigated.
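The permutation alternative to the asymptotic Wald test can be sketched in a simplified linear-model setting (a stand-in for the nonlinear mixed effects models actually used; the genotype frequencies and sample size below are made up to mimic the "unevenly distributed genotypes" case):

```python
import numpy as np

def wald_stat(g, y):
    """Wald statistic for a genotype effect in a simple linear model y ~ g."""
    X = np.column_stack([np.ones_like(g, dtype=float), g])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(42)
n = 40
genotype = rng.choice([0, 1, 2], size=n, p=[0.7, 0.25, 0.05])  # uneven genotypes
y = rng.normal(size=n)                                         # no true gene effect

w_obs = wald_stat(genotype, y)
# Permutation null: shuffle genotype labels, recompute the statistic.
w_perm = np.array([wald_stat(rng.permutation(genotype), y) for _ in range(999)])
p_perm = (1 + np.sum(np.abs(w_perm) >= abs(w_obs))) / (999 + 1)
print(round(float(p_perm), 3))
```

Shuffling the genotype labels preserves the genotype distribution while breaking any genotype-phenotype association, so the empirical p-value does not rely on the asymptotic normality that fails in small or unbalanced samples.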

7.
Chikungunya, a mosquito-borne disease, is a growing threat in Brazil, where over 640,000 cases have been reported since 2017. However, there are often long delays between diagnoses of chikungunya cases and their entry in the national monitoring system, leaving policymakers without the up-to-date case count statistics they need. In contrast, weekly data on Google searches for chikungunya is available with no delay. Here, we analyse whether Google search data can help improve rapid estimates of chikungunya case counts in Rio de Janeiro, Brazil. We build on a Bayesian approach suitable for data that is subject to long and varied delays, and find that including Google search data reduces both model error and uncertainty. These improvements are largest during epidemics, which are particularly important periods for policymakers. Including Google search data in chikungunya surveillance systems may therefore help policymakers respond to future epidemics more quickly.

8.
Circular data originate in a wide range of scientific fields and can be analyzed on the basis of directional statistics and special distributions wrapped around the circumference. However, both the propensity to transform non-linear to linear data and the complexity of directional statistics have limited the generalization of the circular paradigm in the animal breeding framework, among others. Here, we generalized a circular mixed (CM) model within the context of Bayesian inference. Three different parametrizations with different hierarchical structures were developed on the basis of the von Mises distribution; moreover, both goodness of fit and predictive ability of each parametrization were compared through the analysis of 110,116 lambing distribution records collected from Ripollesa sheep herds between 1976 and 2017. The naive circular (NC) model only accounted for the population mean and a homogeneous circular variance, and reached the lowest goodness of fit and predictive ability. The CM model assumed a hierarchical structure for the population mean by accounting for systematic (ewe age and lambing interval) and permanent environmental sources of variation (flock-year-season and ewe). This improved goodness of fit by reducing both the deviance information criterion (DIC; −2520 units) and the mean square error (MSE; −12.4%) between simulated and predicted lambing data when compared against the NC model. Finally, the last parametrization expanded the CM model by also assuming a hierarchical structure with systematic and permanent environmental factors for the variance parameter of the von Mises distribution (i.e. the circular canalization (CC) model). This last model reached the best goodness of fit to the lambing distribution data, with a DIC estimate 5425 units lower than the one for the NC model (MSE reduced by 13.2%). The same pattern was revealed when models were compared in terms of predictive ability.
The superiority of the CC model emphasized the relevance of heteroskedasticity for the analysis of lambing distribution in the Ripollesa breed, and suggested potential applications for the sheep industry, including genetic selection for canalization. The development of CM models on the basis of the von Mises distribution has allowed the integration of flexible hierarchical structures accounting for different sources of variation affecting both mean and dispersion terms. This must be viewed as a useful statistical tool with multiple applications in a wide range of research fields, as well as in the livestock industry. The next mandatory step should be the inclusion of genetic terms in the hierarchical structure of the models in order to evaluate their potential contribution to current selection programs.
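The building block of these models, the von Mises distribution, and the standard moment-based estimates of its mean direction and concentration can be sketched as follows (the parameter values are illustrative, not the Ripollesa estimates):

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated lambing dates expressed as angles on the yearly circle (radians);
# the true mean direction (1.0) and concentration (2.0) are made-up values.
theta = rng.vonmises(mu=1.0, kappa=2.0, size=5000)

# Mean direction: angle of the resultant vector of the unit vectors.
C, S = np.cos(theta).sum(), np.sin(theta).sum()
mean_dir = np.arctan2(S, C)

# Mean resultant length, and a standard piecewise approximation for kappa
# from R_bar (as given in directional-statistics textbooks).
R_bar = np.hypot(C, S) / theta.size
if R_bar < 0.53:
    kappa_hat = 2 * R_bar + R_bar**3 + 5 * R_bar**5 / 6
elif R_bar < 0.85:
    kappa_hat = -0.4 + 1.39 * R_bar + 0.43 / (1 - R_bar)
else:
    kappa_hat = 1 / (R_bar**3 - 4 * R_bar**2 + 3 * R_bar)
print(round(float(mean_dir), 2), round(float(kappa_hat), 2))
```

The hierarchical models in the paper replace the single mean direction and single kappa with linear predictors built from systematic and permanent environmental effects.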

9.
Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which are often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and assessing the significance of model covariates.

10.
MOTIVATION: DNA microarrays have recently been used for the purpose of monitoring expression levels of thousands of genes simultaneously and identifying those genes that are differentially expressed. The probability that a false identification (type I error) is committed can increase sharply when the number of tested genes gets large. Correlation between the test statistics attributed to gene co-regulation and dependency in the measurement errors of the gene expression levels further complicates the problem. In this paper we address this very large multiplicity problem by adopting the false discovery rate (FDR) controlling approach. In order to address the dependency problem, we present three resampling-based FDR controlling procedures that account for the distribution of the test statistics, and compare their performance to that of the naive application of the linear step-up procedure of Benjamini and Hochberg (1995). The procedures are studied using simulated microarray data, and their performance is examined relative to their ease of implementation. RESULTS: Comparative simulation analysis shows that all four FDR controlling procedures control the FDR at the desired level, and retain substantially more power than the family-wise error rate controlling procedures. In terms of power, resampling the marginal distribution of each test statistic substantially improves performance over the naive procedure. The highest power is achieved, at the expense of a more sophisticated algorithm, by the resampling-based procedures that resample the joint distribution of the test statistics and estimate the level of FDR control. AVAILABILITY: An R program that adjusts p-values using FDR controlling procedures is freely available over the Internet at www.math.tau.ac.il/~ybenja.
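The baseline against which the resampling procedures are compared, the Benjamini and Hochberg (1995) linear step-up procedure, computes adjusted p-values as follows (a generic sketch, not the authors' R program):

```python
import numpy as np

def bh_adjust(pvals):
    """Adjusted p-values for the Benjamini-Hochberg linear step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)   # p_(i) * m / i
    # enforce monotonicity of adjusted p-values, from the largest downwards
    adj_sorted = np.minimum.accumulate(scaled[::-1])[::-1]
    adj = np.empty(m)
    adj[order] = np.clip(adj_sorted, 0.0, 1.0)
    return adj

# Ten hypothetical p-values from gene-level tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
print(bh_adjust(pvals).round(3))
```

Rejecting all hypotheses with adjusted p-value below q controls the FDR at level q under independence; the resampling procedures in the paper extend this to correlated test statistics.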

11.
The European Centre for the Validation of Alternative Methods (ECVAM) Skin Irritation Task Force was established in 1996, to review the status of the development and validation of alternative tests for skin irritation and corrosion, and to identify appropriate non-animal tests for predicting human skin irritation that were sufficiently well-developed to be prevalidated and validated by ECVAM. The EpiDerm method, based on a reconstituted human skin model, was proposed as being sufficiently well advanced to enter a prevalidation (PV) study. Based on a review of test protocols, prediction models (PMs), and data submitted by test developers on ten specified chemicals, with 20% sodium lauryl sulphate as a reference standard, the task force recommended the inclusion of four other tests: EPISKIN and PREDISKIN, based on reconstituted human epidermis or on human skin; the non-perfused pig-ear test, based on pig skin; and the skin integrity function test (SIFT), with ex vivo mouse skin. The prevalidation study on these methods was funded by ECVAM, and took place during 1999-2000. The outcome of the PV study was that none of the methods was ready to enter a formal validation study, and that the protocols and PMs of the methods had to be improved in order to increase their predictive abilities. Improved protocols and PMs for the EpiDerm and EPISKIN methods, the pig ear test, and the SIFT were presented at an extended Task Force meeting held in May 2001. It was agreed that, in the short term, the performance of the revised and harmonised EpiDerm and EPISKIN methods, as well as the modified SIFT, should be evaluated in a further study with a new set of 20 test chemicals. In addition, it was decided that the SIFT and the pig ear test would be compared to see if common endpoints (transepidermal water loss, methyl green-pyronine stain) could be identified.

12.
Ninomiya Y, Fujisawa H. Biometrics 2007;63(4):1135-1142
In genetics, we often encounter a large number of highly correlated test statistics. The most famous conservative bound for multiple comparison is Bonferroni's bound, which is suitable when the test statistics are independent but not when the test statistics are highly correlated. This article proposes a new conservative bound that is easily calculated without multiple integration and is a good approximation when the test statistics are highly correlated. The performance of the proposed method is evaluated by simulation and real data analysis.
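The conservatism of Bonferroni's bound under strong correlation, the motivation for the sharper bound proposed above, is easy to demonstrate by simulation: with equicorrelated test statistics, the family-wise error rate of the Bonferroni cutoff falls well below the nominal level. A sketch (the correlation, dimensions and seed are arbitrary choices):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
m, rho, alpha, n_sim = 50, 0.9, 0.05, 20000

# Equicorrelated z-statistics under the global null:
# z_i = sqrt(rho) * shared factor + sqrt(1 - rho) * independent noise.
shared = rng.normal(size=(n_sim, 1))
z = np.sqrt(rho) * shared + np.sqrt(1 - rho) * rng.normal(size=(n_sim, m))

# Bonferroni two-sided cutoff at level alpha for m tests.
cutoff = NormalDist().inv_cdf(1 - alpha / (2 * m))

# Empirical family-wise error rate: fraction of simulations with any rejection.
fwer = np.mean((np.abs(z) > cutoff).any(axis=1))
print(round(float(fwer), 4))
```

The empirical FWER lands far below the nominal 0.05, which is exactly the slack a correlation-aware bound can recover as extra power.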

13.
Mouse fibroblast senescence in vitro is an important model for the study of aging at the cellular level. However, common laboratory mouse strains may have lost some important allele variations related to aging processes. In this study, growth in vitro of tail skin fibroblasts (TSFs) derived from a wild-derived stock, Pohnpei (Pohn) mice, differed from growth of control C57BL/6J (B6) TSFs. Pohn TSFs exhibited higher proliferative ability, fewer apoptotic cells, decreased expression of Cip1, smaller surface areas, fewer cells positive for senescence-associated beta-galactosidase (SA-beta-gal) and greater resistance to H2O2-induced SA-beta-gal staining and Cip1 expression. These data suggest that TSFs from Pohn mice resist cellular senescence-like changes. Using large clone ratio (LCR) as the phenotype, a quantitative trait locus (QTL) analysis in a Pohn/B6 backcross population found four QTLs for LCR: Fcs1 on Chr 3 at 55 cM; Fcs2 on Chr X at 50 cM; Fcs3 on Chr 4 at 51 cM; and Fcs4 on Chr 10 at 25 cM. Together, these four QTLs explain 26.1% of the variation in LCRs in the N2 population. These are the first QTLs reported that regulate fibroblast growth. Glutathione S-transferase mu (GST-mu) genes are overrepresented in the 95% confidence interval of Fcs1, and Pohn TSFs have higher H2O2-induced GST-mu 4, 5 and 7 mRNA levels than B6 TSFs. These enzymes may protect Pohn TSFs from oxidation.

14.
The standard in vivo micronucleus (MN) test detects clastogenicity in hematopoietic cells and is not suitable for detecting chemicals that target the skin. Previously, we developed an in vivo rodent skin MN test that is simple to perform and can be applied to several laboratory animals, including the hairless mouse, a species whose use simplifies the procedure of skin testing. In this paper, we report new data that confirm the predictive ability of the test. Following the application of 10 polycyclic aromatic hydrocarbons (7,12-dimethylbenz[a]anthracene; 3-methylcholanthrene; benzo[a]pyrene; dibenz[a,h]anthracene; benz[a]anthracene; dibenz[a,c]anthracene; chrysene; benzo[e]pyrene; pyrene; anthracene) with various degrees of genotoxicity to the dorsal skin of hairless mice, we found that these compounds caused MN production that in general correlated with their reported carcinogenicity. We believe that this test will be useful in detecting skin clastogens that test negative when analyzed using the standard micronucleus test.

15.
The task of modeling the distribution of a large number of tree species under future climate scenarios presents unique challenges. First, the model must be robust enough to handle climate data outside the current range without producing unacceptable instability in the output. In addition, the technique should have automatic search mechanisms built in to select the most appropriate values for input model parameters for each species, so that minimal effort is required when these parameters are fine-tuned for individual tree species. We evaluated four statistical models for predictive vegetation mapping under current and future climate scenarios according to the Canadian Climate Centre global circulation model: Regression Tree Analysis (RTA), Bagging Trees (BT), Random Forests (RF), and Multivariate Adaptive Regression Splines (MARS). To test them, we applied these techniques to four tree species common in the eastern United States: loblolly pine (Pinus taeda), sugar maple (Acer saccharum), American beech (Fagus grandifolia), and white oak (Quercus alba). When the four techniques were assessed with Kappa and fuzzy Kappa statistics, RF and BT were superior in reproducing current importance value (a measure of basal area in addition to abundance) distributions for the four tree species, as derived from approximately 100,000 USDA Forest Service Forest Inventory and Analysis plots. Future estimates of suitable habitat after climate change were visually more reasonable with BT and RF, with slightly better performance by RF as assessed by Kappa statistics, correlation estimates, and spatial distribution of importance values. Although RTA did not perform as well as BT and RF, it provided interpretable models for species whose distributions were captured well by our current set of predictors. MARS was adequate for predicting current distributions but unacceptable for future climate.
We consider RTA, BT, and RF modeling approaches, especially when used together to take advantage of their individual strengths, to be robust for predictive mapping and recommend their inclusion in the ecological toolbox.
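The idea behind Bagging Trees, fitting many trees on bootstrap resamples of the data and averaging their predictions, can be sketched with single-split regression trees ("stumps") on synthetic data (a toy stand-in for the actual forest-inventory models; RF additionally randomizes the features considered at each split):

```python
import numpy as np

def fit_stump(X, y):
    """Single-split regression tree: the (feature, threshold) pair
    minimizing the squared error of a two-leaf fit."""
    best = (np.inf, 0, 0.0, y.mean(), y.mean())
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            ml, mr = y[left].mean(), y[~left].mean()
            sse = ((y[left] - ml) ** 2).sum() + ((y[~left] - mr) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, t, ml, mr)
    return best[1:]

def predict_stump(stump, X):
    j, t, ml, mr = stump
    return np.where(X[:, j] <= t, ml, mr)

rng = np.random.default_rng(11)
n = 200
X = rng.uniform(-2, 2, size=(n, 3))             # toy "climate" predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)  # toy response (e.g. importance value)

# Bagging: fit each stump on a bootstrap resample, average the predictions.
stumps = []
for _ in range(100):
    idx = rng.integers(0, n, size=n)
    stumps.append(fit_stump(X[idx], y[idx]))

pred_bag = np.mean([predict_stump(s, X) for s in stumps], axis=0)
pred_one = predict_stump(fit_stump(X, y), X)
mse = lambda p: ((p - y) ** 2).mean()
print(round(float(mse(pred_one)), 3), round(float(mse(pred_bag)), 3))
```

Averaging over bootstrap resamples smooths the hard single-split predictions, which is the variance-reduction property that made BT and RF the most stable performers in the study above.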

16.
Biological invasions are one of the major threats to biodiversity, especially on oceanic islands. In the Canary Islands, the relationships between plant Alien Species Richness (ASR) and its environmental and anthropogenic determinants have been thoroughly investigated using ecological models. However, previous predictive models rarely accounted for spatial autocorrelation (SAC) and the uncertainty of predictions, thus missing crucial information related to model accuracy and prediction reliability. In this study, we propose a Generalized Linear Spatial Model (GLSM) for ASR under a Bayesian framework on Tenerife Island. Our aim is to test whether the inclusion of SAC in the modelling framework could improve model performance, resulting in more reliable predictions. Results demonstrated that accounting for SAC dramatically reduced the model's AIC (ΔAIC = 4423) and error magnitudes, and also yielded better performance in terms of goodness of fit. Calculation of the uncertainty associated with predicted values highlighted those areas where either the number of observations (e.g. under-sampled areas) or the reliability of the environmental predictors was lower (e.g. low spatial resolution in highly heterogeneous environments). Although our results confirmed what has already been observed in other ecological studies, such as the important role of roads in ASR spread, methodological considerations on the applied modelling approach point out the importance of considering spatial autocorrelation and the researcher's prior knowledge to increase the predictive power of statistical models as well as the correctness of coefficient estimates. The proposed approach may serve as an essential management tool, highlighting those portions of territory that will be more prone to biological invasions and where monitoring efforts should be directed.

17.
Court sports often require more frequent changes of direction (COD) than field sports. Most court sports require 180° turns over a small distance, so COD in such sports might be best evaluated with an agility test involving short sprints and sharp turns. The purposes of this study were to (a) quantify vertical and horizontal force during a COD task, (b) identify possible predictors of court-sport-specific agility performance, and (c) examine performance differences between National Collegiate Athletic Association Division I, II, and III athletes. Twenty-nine collegiate female volleyball players completed a novel agility test, countermovement (CM) and drop jump tests, and an isometric leg extensor test. The number of athletes by division was as follows: I (n = 9), II (n = 11), and III (n = 9). The agility test consisted of four 5-meter sprints with three 180° turns, one of which was performed on a multiaxial force platform so that the kinetic properties of the COD could be identified. One-way analysis of variance revealed that Division I athletes had significantly greater countermovement jump heights than Division III athletes, and effect size comparisons (Cohen's d) showed large-magnitude differences between Division I and both Divisions II and III for jump height. No other differences in performance variables were noted between divisions, although effect sizes reached moderate values for some comparisons. Regression analysis revealed that CM displacement was a significant predictor of agility performance, explaining approximately 34% of the variance. Vertical force was found to account for much of the total force exerted during the contact phase of the COD task, suggesting that performance in the vertical domain may limit the COD task used herein. This study indicates that individuals with greater CM performance also have quicker agility times and suggests that training predominantly in the vertical domain may also yield improvements in certain types of agility performance.
This may hold true even if such agility performance requires a horizontal component.

18.
Classical paper-and-pencil risk assessment questionnaires are often accompanied by online versions of the questionnaire in order to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by directly transforming paper questionnaires into online risk estimation calculators, ignoring the more complex and accurate calculations that online calculators can perform. We empirically compare the risk estimation performance of four major diabetes risk calculators and two more advanced predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999–2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil tests, with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) of persons selected for screening. Our results demonstrate a significant difference in performance, with the additional benefit of fewer persons selected for screening, when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression, with an AUC of 0.775 (0.734) and an average of 34% (48%) of persons selected for screening. However, generalized boosted regression models might be a better option from an economic point of view, as the number of persons selected for screening of 30% (47%) is significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators.
Therefore, one should take great care and consider optimizing the online versions of questionnaires that were primarily developed as classical paper questionnaires.
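The AUC values reported above can be computed directly via the Mann-Whitney interpretation: the AUC is the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative one. A sketch with made-up screening scores (not the NHANES data):

```python
import numpy as np

def auc(y_true, score):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    probability a random positive outranks a random negative (ties count 1/2)."""
    pos = score[y_true == 1]
    neg = score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical risk-calculator scores vs. diabetes status (1 = diabetic)
y = np.array([0, 0, 0, 0, 1, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.6, 0.9, 0.7])
print(round(float(auc(y, s)), 3))  # prints 0.833
```

An AUC of 0.5 corresponds to random scoring, which is why the 0.699 of the best paper-based test leaves the substantial room for improvement that the statistical models exploit.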

19.
Correlative species distribution models are frequently used to predict species' range shifts under climate change. However, climate variables often show high collinearity, and most statistical approaches require the selection of one among strongly correlated variables. When causal relationships between species presence and climate parameters are unknown, variable selection is often arbitrary, or based on predictive performance under current conditions. While this should only marginally affect current range predictions, future distributions may vary considerably when climate parameters do not change in concert. We investigated this source of uncertainty using four highly correlated climate variables together with a constant set of landscape variables in order to predict current (2010) and future (2050) distributions of four mountain bird species in central Europe. Simulating different parameterization decisions, we generated (a) four models including each of the climate variables singly, (b) a model taking advantage of all variables simultaneously, and (c) an unweighted average of the predictions of (a). We compared model accuracy under current conditions, predicted distributions under four scenarios of climate change, and, for one species, evaluated back-projections using historical occurrence data. Although current and future variable correlations remained constant, and the models' accuracy under contemporary conditions did not differ, future range predictions varied considerably in all climate change scenarios. Averaged models and models containing all climate variables simultaneously produced intermediate predictions; the latter, however, performed best in back-projections. This pattern, consistent across different modelling methods, indicates a benefit from including multiple climate predictors in ambiguous situations.
Variable selection proved to be an important source of uncertainty for future range predictions that is difficult to control using contemporary information. Small but diverging changes in climate variables, masked by constant overall correlation patterns, can cause substantial differences between future range predictions, which need to be accounted for, particularly when outcomes are intended for conservation decisions.

20.
ECVAM sponsored a formal validation study on three in vitro tests for skin irritation, of which two employ reconstituted human epidermis models (EPISKIN, EpiDerm), and one, the skin integrity function test (SIFT), employs ex vivo mouse skin. The goal of the study was to assess whether the in vitro tests would correctly predict in vivo classifications according to the EU classification scheme, "R38" and "no label" (i.e. non-irritant). 58 chemicals (25 irritants and 33 non-irritants) were tested, having been selected to give broad coverage of physico-chemical properties, and an adequate distribution of irritancy scores derived from in vivo rabbit skin irritation tests. In Phase 1, 20 of these chemicals (9 irritants and 11 non-irritants) were tested with coded identities by a single lead laboratory for each of the methods, to confirm the suitability of the protocol improvements introduced after a prevalidation phase. When cell viability (evaluated by the MTT reduction test) was used as the endpoint, the predictive ability of both EpiDerm and EPISKIN was considered sufficient to justify their progression to Phase 2, while the predictive ability of the SIFT was judged to be inadequate. Since both the reconstituted skin models provided false predictions around the in vivo classification border (a rabbit Draize test score of 2), the release of a cytokine, interleukin-1alpha (IL-1alpha), was also determined. In Phase 2, each human skin model was tested in three laboratories, with 58 chemicals. The main endpoint measured for both EpiDerm and EPISKIN was cell viability. In samples from chemicals which gave MTT assay results above the threshold of 50% viability, IL-1alpha release was also measured, to determine whether the additional endpoint would improve the predictive ability of the tests. 
For EPISKIN, the sensitivity was 75% and the specificity was 81% (MTT assay only); with the combination of the MTT and IL-1alpha assays, the sensitivity increased to 91%, with a specificity of 79%. For EpiDerm, the sensitivity was 57% and the specificity was 85% (MTT assay only), while the predictive capacity of EpiDerm was not improved by the measurement of IL-1alpha release. Following independent peer review, in April 2007 the ECVAM Scientific Advisory Committee endorsed the scientific validity of the EPISKIN test as a replacement for the rabbit skin irritation method, and of the EpiDerm method for identifying skin irritants as part of a tiered testing strategy. This new alternative approach will probably be the first use of in vitro toxicity testing to replace the Draize rabbit skin irritation test in Europe and internationally, since, in the very near future, new EU and OECD Test Guidelines will be proposed for regulatory acceptance.

Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号