Similar Articles (20 results)
1.
Goal, Scope and Background. The main focus in OMNIITOX is on characterisation models for toxicological impacts in a life cycle assessment (LCA) context. The OMNIITOX information system (OMNIITOX IS) is being developed primarily to facilitate characterisation modelling and the calculation of characterisation factors, providing users with the information necessary for environmental management and control of industrial systems. Modelling and implementing operational characterisation models for ecotoxicological and human-toxicological impacts requires data and modelling approaches that often originate from disciplines related to regulatory chemical risk assessment (RA). Hence, there is a need for a concept model of the data and modelling approaches that can be interchanged between these different natural-system modelling contexts. Methods. The concept modelling methodology applied in the OMNIITOX project builds on database design principles and ontological principles, applied in a consensus-based, iterative process by participants from the LCA, RA and environmental informatics disciplines. Results. The developed OMNIITOX concept model focuses on the core concepts of substance, nature framework, load, indicator and mechanism, with supplementary concepts supporting them. They refer to the modelled cause, the modelled effect, and the relation between them, aspects inherent in all models used in the disciplines within the scope of OMNIITOX. This structure makes it possible to compare models on a fundamental level, provides a language for communicating information between the disciplines, and supports assessing whether data and modelling approaches of various levels of detail and complexity can be transparently reused. Conclusions. Current experience from applying the concept model shows that it improves the structuring of all the information needed to describe characterisation models transparently. From a user perspective, the OMNIITOX concept model aids in understanding the applicability and use of a characterisation model and in interpreting model outputs. Recommendations and Outlook. The concept model provides a tool for structured characterisation modelling, model comparison, model implementation, model quality management and model usage. Moreover, it could be used for structuring any natural-environment cause-effect model concerning impact categories other than toxicity.
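To make the cause-effect structure concrete, the five core concepts can be sketched as plain data structures, as below. This is a minimal, hypothetical illustration in Python; the class and field names are assumptions for the example, not the project's actual schema.

    from dataclasses import dataclass

    @dataclass
    class Substance:
        cas_number: str       # the modelled cause: the chemical being assessed
        name: str

    @dataclass
    class NatureFramework:
        compartments: tuple   # e.g. ("air", "freshwater", "soil")

    @dataclass
    class Load:
        substance: Substance
        compartment: str
        amount_kg: float      # an emission quantified as a load on the framework

    @dataclass
    class Indicator:
        name: str             # the modelled effect, e.g. "freshwater ecotoxicity"
        unit: str

    @dataclass
    class Mechanism:
        load: Load
        indicator: Indicator
        characterisation_factor: float  # the cause-effect relation, condensed to a CF

        def impact(self) -> float:
            return self.load.amount_kg * self.characterisation_factor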

2.
Goal, Scope and Background. The EU 5th Framework project OMNIITOX will develop models that calculate characterisation factors for assessing the potential toxic impacts of chemicals within the framework of LCA. These models will become accessible through a web-based information system. The key objective of the OMNIITOX project is to increase the coverage of substances by such models. To reach this objective, simpler models that need less data, of the kind actually available, will have to be developed while maintaining scientific quality. Methods. Experience within the OMNIITOX project has taught that data availability and quality are crucial issues for calculating characterisation factors. Data availability determines whether calculating characterisation factors is possible at all, whereas data quality determines to what extent the resulting characterisation factors are reliable. Today, there is insufficient knowledge and/or resources to have high data availability, high data quality and high model quality at the same time. Results. The OMNIITOX project is developing two inter-related models in order to provide LCA impact assessment characterisation factors for toxic releases for as broad a range of chemicals as possible: 1) a base model, representing a state-of-the-art multimedia model, and 2) a simple model derived from the base model using statistical tools. Discussion. A preliminary decision tree for using the OMNIITOX information system (IS) is presented. The decision tree illustrates how the OMNIITOX IS can assist an LCA practitioner in finding or deriving characterisation factors for use in life cycle impact assessment of toxic releases. Conclusions and Outlook. Data availability and quality are crucial issues when calculating characterisation factors for the toxicity impact categories, and the OMNIITOX project is developing a tiered model approach to address them. A first version of the base model is foreseen for late summer 2004, with a first version of the simple model expected a few months later.
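The tiered lookup that such a decision tree encodes might look like the sketch below. The three tiers mirror the description above, but the function names and stub models are hypothetical.

    def find_characterisation_factor(substance, published_cfs, base_model, simple_model):
        """Return a CF for `substance`, preferring the most rigorous source."""
        if substance in published_cfs:      # tier 1: a published CF already exists
            return published_cfs[substance]
        cf = base_model(substance)          # tier 2: the data-hungry multimedia base model
        if cf is not None:
            return cf
        return simple_model(substance)      # tier 3: the statistically derived simple model

    # Usage with stubs standing in for the real models:
    cfs = {"toluene": 2.3e-4}
    base = lambda s: 1.1e-5 if s == "phenol" else None  # returns None when data are missing
    simple = lambda s: 5.0e-6                           # regression-based estimate
    print(find_characterisation_factor("benzene", cfs, base, simple))  # falls through to tier 3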

3.

Background  

The rapid proliferation of biomedical text makes it increasingly difficult for researchers to identify, synthesize, and utilize developed knowledge in their fields of interest. Automated information extraction procedures can assist in the acquisition and management of this knowledge. Previous efforts in biomedical text mining have focused primarily on named entity recognition of well-defined molecular objects such as genes, but less work has been performed to identify disease-related objects and concepts. Furthermore, this promise has been tempered by an inability to efficiently scale approaches in ways that minimize manual effort while still performing with high accuracy. Here, we have applied a machine-learning approach previously successful in identifying molecular entities to a disease concept, to determine whether the underlying probabilistic model generalizes effectively to unrelated concepts with minimal manual intervention for model retraining.

4.
Background and Objective. In the OMNIITOX project, 11 partners share the objective of improving environmental management tools for the assessment of (eco)toxicological impacts. The detergent case study aims at: i) comparing three Procter & Gamble laundry detergent forms (Regular Powder-RP, Compact Powder-CP and Compact Liquid-CL) regarding their potential impacts on aquatic ecotoxicity, ii) providing insights into the differences between various Life Cycle Impact Assessment (LCIA) methods with respect to data needs and results, and iii) comparing the results from Life Cycle Assessment (LCA) with results from an Environmental Risk Assessment (ERA). Material and Methods. The LCIA has been conducted with EDIP97 (chronic aquatic ecotoxicity) [1], USES-LCA (freshwater and marine aquatic ecotoxicity, sometimes referred to as CML2001) [2, 3] and IMPACT 2002 (covering freshwater aquatic ecotoxicity) [4]. The comparative product ERA is based on the EU Ecolabel approach for detergents [5] and EUSES [6], which follows the Technical Guidance Document (TGD) of the EU on Environmental Risk Assessment of chemicals [7]. Apart from the Ecolabel approach, all calculations are based on the same set of physico-chemical and toxicological effect data to enable a better comparison of the methodological differences. For the same reason, the system boundaries were kept identical in all cases, focusing on emissions into water at the disposal stage. Results and Discussion. Significant differences between the LCIA methods with respect to data needs and results were identified. Most LCIA methods for freshwater ecotoxicity, as well as the ERA, rate the compact and regular powders as similar, followed by the compact liquid. IMPACT 2002 (for freshwater) suggests the liquid is equally as good as the compact powder, while the regular powder comes out worse by a factor of 2. USES-LCA for marine water shows a very different picture, seeing the compact liquid as the clear winner over the powders, with the regular powder the least favourable option. Even the LCIA methods that result in the same product ranking, e.g. EDIP97 chronic aquatic ecotoxicity and USES-LCA freshwater ecotoxicity, differ significantly in terms of the most contributing substances. According to IMPACT 2002 and USES-LCA marine water, results are entirely dominated by inorganic substances, whereas the other LCIA methods and the ERA assign a key role to surfactants. Deviating results are mainly due to differences in the fate and exposure modelling and, to a lesser extent, to differences in the toxicological effect calculations. Only IMPACT 2002 calculates the effects based on a mean-value approach, whereas all other LCIA methods and the ERA prefer a PNEC-based approach. For a comparative context like LCA, the OMNIITOX project has decided on a combined mean and PNEC-based approach, as it better represents the 'average' toxicity while still taking more sensitive species into account. However, the main source of deviating results remains the calculation of the residence time of emissions in the water compartments. Conclusion and Outlook. The situation that different LCIA methods give different answers to the question of which detergent type is preferable regarding the impact category aquatic ecotoxicity is not satisfactory, unless explicit reasons for the differences are identifiable. This can hamper practical decision support, as LCA practitioners will usually not be in a position to choose the 'right' LCIA method for their specific case. This challenges the entire OMNIITOX project to develop a method that finds common ground regarding fate, exposure and effect modelling, in order to overcome the current situation of diverging results and to reflect the most realistic conditions.
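The difference between the two effect-factor philosophies can be shown in a few lines of Python. The EC50 values and the assessment factor of 1000 below are invented for the example and are not taken from the study.

    import statistics

    ec50_mg_per_l = [0.8, 3.5, 12.0, 40.0]           # toy single-species EC50 values

    pnec = min(ec50_mg_per_l) / 1000                 # PNEC: most sensitive species / assessment factor
    hc50 = statistics.geometric_mean(ec50_mg_per_l)  # mean approach: geometric mean of EC50s

    ef_pnec = 1 / pnec   # effect factor dominated by the most sensitive species
    ef_mean = 1 / hc50   # effect factor representing 'average' toxicity
    print(f"PNEC-based EF: {ef_pnec:.0f}, mean-based EF: {ef_mean:.2f}")

A combined approach, as chosen by OMNIITOX, sits between these two extremes, representing the average while still weighting the more sensitive species.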

5.
This article is the preamble to a set of articles describing initial results from an ongoing European Commission funded 5th Framework project called OMNIITOX: Operational Models aNd Information tools for Industrial applications of eco/TOXicological impact assessments. The different parts of this case-study-driven project are briefly presented and related to the aim of contributing to an operational life cycle impact assessment (LCIA) model for impacts of toxicants. The present situation is characterised by methodological difficulties regarding both the choice of characterisation model(s) and the limited input data on chemical properties, which have often resulted in the omission of toxicants from the LCIA or, at best, a focus on well-characterised chemicals. The project addresses both problems and integrates models, as well as data, in an information system, the OMNIITOX IS. There is also a need to clarify the relations between (environmental) risk assessment of toxicants and LCIA, and to investigate the feasibility of introducing LCA into European chemicals legislation; both tasks were also addressed in the project. Keywords: Case studies; characterisation factor; chemicals; environmental risk assessment; hazard assessment; information system; life cycle impact assessment (LCIA); potentially toxic substances; regulation; risk assessment; risk ranking

6.
Goal, Scope and Background. The main aim of this paper is to present some methodological considerations concerning existing methods used to assess the quality of an LCA study, relating mainly to the quality of data and the uncertainty of the LCA results. This first paper is devoted strictly to methodological aspects, whereas the second, presented in a separate article (Part II), is devoted mainly to a case study. Methods. The presented analysis is based on two well-known concepts: the Data Quality Indicators (DQIs) and the Pedigree Matrix. In the first phase, Sensitivity Indicators are created on the basis of a sensitivity analysis and then linked with the DQIs and the Quality Classes. These parameters indicate the relative importance of input data and their theoretical quality levels. Next, Weidema's Pedigree Matrix (slightly modified) is used to establish the values of a new parameter called the Data Quality Distance (DQD) and to link them with the DQIs and Quality Classes. This provides information about the 'real' quality levels. Further analysis is performed using probabilistic distributions and Monte Carlo simulations. Results and Discussion. This approach makes it possible to compare two types of quality factors. On the one hand, the sensitivity analysis allows one to check the importance of input data and to determine their required quality, according to the following relation: the higher the sensitivity indicator, the higher the importance of the input data and the higher the quality that should be demanded. On the other hand, the data have a certain real quality, not always in accord with the demanded one. To make a comparison between these two types of quality possible, a common denominator must be found; here, the DQIs and Quality Classes serve this purpose. Conclusions. In the further stage of the assessment, the DQIs are used to perform the uncertainty analysis of the LCA results. The results can additionally be analysed using other interpretation techniques: sensitivity, contribution, comparative, discernibility and uncertainty analyses. Recommendations and Outlook. The presented approach is put into practice in a comparative LCA study of industrial pumps using the Eco-indicator 99 method. This enables a complex analysis of the credibility of the results; as a consequence, uncertainty ranges for the LCA results of every product system can be determined [1].
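A minimal sketch of the final step, propagating data quality into an output uncertainty range by Monte Carlo simulation, is shown below. The mapping from DQI class to relative spread is an illustrative assumption, not the paper's calibration.

    import random

    dqi_to_spread = {1: 0.05, 2: 0.10, 3: 0.20, 4: 0.35, 5: 0.50}  # assumed spread per quality class

    inputs = {                        # (nominal value, DQI class) per input datum
        "electricity_kwh": (120.0, 2),
        "steel_kg": (45.0, 4),
    }

    def simulate(n=10_000):
        totals = []
        for _ in range(n):
            total = 0.0
            for value, dqi in inputs.values():
                total += random.gauss(value, value * dqi_to_spread[dqi])  # sample each input
            totals.append(total)
        return sorted(totals)

    samples = simulate()
    print(f"mean = {sum(samples) / len(samples):.1f}, "
          f"95% range = [{samples[250]:.1f}, {samples[9750]:.1f}]")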

7.
The ability to assess the potential genotoxicity, carcinogenicity, or other toxicity of pharmaceutical or industrial chemicals based on chemical structure information is a highly coveted and shared goal of varied academic, commercial, and government regulatory groups. These diverse interests often employ different approaches and have different criteria and uses for toxicity assessments, but they share a need for unrestricted access to existing public toxicity data linked with chemical structure information. Currently, there exists no central repository of toxicity information, commercial or public, that adequately meets the data requirements for flexible analogue searching, Structure-Activity Relationship (SAR) model development, or the building of chemical relational databases (CRD). The distributed structure-searchable toxicity (DSSTox) public database network is being proposed as a community-supported, web-based effort to address these shared needs of the SAR and toxicology communities. The DSSTox project has three major elements: (1) to adopt and encourage the use of a common standard file format, the structure data file (SDF), for public toxicity databases, which includes chemical structure, text and property information and can easily be imported into available CRD applications; (2) to implement a distributed-source approach, managed by a DSSTox Central Website, that will enable decentralized, free public access to structure-toxicity data files and will effectively link knowledgeable toxicity data sources with potential users of these data from other disciplines (such as chemistry, modeling, and computer science); and (3) to engage public, commercial, academic, and industry groups in contributing to and expanding this community-wide public data sharing and distribution effort. The DSSTox project's overall aims are to effect a closer association of chemical structure information with existing toxicity data, and to promote and facilitate structure-based exploration of these data within a common chemistry-based framework that spans toxicological disciplines.
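As an illustration of how an SDF couples structures with property fields, the sketch below reads such a file with the open-source RDKit toolkit; the file name and the LC50 property tag are hypothetical.

    from rdkit import Chem  # assumes RDKit is installed

    supplier = Chem.SDMolSupplier("toxicity_records.sdf")  # hypothetical DSSTox-style file
    for mol in supplier:
        if mol is None:                # skip records RDKit could not parse
            continue
        smiles = Chem.MolToSmiles(mol)                # the structure, canonicalised
        endpoint = mol.GetPropsAsDict().get("LC50")   # a toxicity value stored as an SD tag
        print(smiles, endpoint)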

8.
In 2012, Google proposed the concept of the knowledge graph to improve the quality of search results, and knowledge graphs have since become a hot topic in both academia and industry, as they can effectively improve search quality and the accuracy of question-answering (Q&A) systems. With the help of agricultural experts, this paper, based on the Agricultural Thesaurus, determines rules for judging whether a thesaurus term denotes a concept or an entity, and establishes mappings from the Agricultural Thesaurus to the schema layer and the data layer of an agricultural knowledge graph. Large-scale automatic construction of the agricultural knowledge graph from the Agricultural Thesaurus is thereby realized. In order to manage and utilize the resulting triples effectively, the paper proposes a mathematical model for triple management with an RDF-based triple storage pattern, which lays a solid foundation for semantics-based agricultural information retrieval and the construction of a Q&A system.
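A minimal sketch of the RDF-based triple storage pattern, using the rdflib library, is given below; the namespace and terms are invented stand-ins for the paper's actual schema-layer and data-layer vocabulary.

    from rdflib import Graph, Literal, Namespace  # assumes rdflib is installed

    AGRI = Namespace("http://example.org/agri/")  # hypothetical namespace
    g = Graph()

    # Schema layer: "winter wheat is a kind of wheat" as a (subject, predicate, object) triple
    g.add((AGRI.WinterWheat, AGRI.subClassOf, AGRI.Wheat))
    # Data layer: an entity-level fact attached to the same node
    g.add((AGRI.WinterWheat, AGRI.sowingSeason, Literal("autumn")))

    # Retrieval for a Q&A system: everything known about winter wheat
    for predicate, obj in g.predicate_objects(AGRI.WinterWheat):
        print(predicate, obj)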

9.
Quantifying the quality of coral bleaching predictions
Techniques that utilize sea surface temperature (SST) observations to predict coral reef bleaching are in common use and form the foundation for predictions of global coral reef ecosystem demise within this century. Yet quality assessments of these methods are typically qualitative or anecdotal. Quality is the correspondence of forecasts with observations, and it has standard quantitative measures. Here a forecast verification method commonly used in meteorology is presented and used to measure the quality of the degree heating weeks (DHW) technique, as an exploration of the insights that can be gleaned from this methodology. DHW values were calculated from NOAA Optimum Interpolation SST version 2 data and compared to a database of bleaching observations from 1990–2007. Quality is expressed with an objective measure, the Peirce Skill Score (PSS). The quality at varying DHW thresholds, above which bleaching is projected to occur, is calculated. By selecting the thresholds that maximize quality, the predictive technique is objectively optimized. This results in optimal threshold maps, showing which reefs are more prone and which more resistant to bleaching. Optimization increases the quality of DHW as a predictor of bleaching from PSS = 0.55 to PSS = 0.83 in the global average, but the optimal PSS and corresponding DHW values vary significantly from location to location. The coral reef research and management community is urged to adopt the simple but rigorous tools of forecast verification routinely used in other disciplines, so that bleaching forecasts can be quantitatively compared and their quality improved.
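The PSS itself is straightforward to compute from a 2x2 contingency table of forecasts against observations, as in this sketch; the counts are made-up example numbers.

    def peirce_skill_score(hits, false_alarms, misses, correct_negatives):
        hit_rate = hits / (hits + misses)
        false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
        return hit_rate - false_alarm_rate   # 1 = perfect forecast, 0 = no skill

    # Evaluate one candidate DHW threshold against bleaching observations
    print(peirce_skill_score(hits=42, false_alarms=7, misses=9, correct_negatives=61))

Optimizing the threshold then amounts to repeating this calculation over a range of DHW values and keeping the one with the highest score.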

10.
A comparison between digital breast tomosynthesis (DBT) and digital mammography (DM), in terms of the optimal energy that maximizes image quality, was performed on a MAMMOMAT Inspiration system (Siemens) based on an amorphous selenium flat-panel detector. In this paper we measured image quality by the signal difference-to-noise ratio (SDNR), and patient risk by the mean glandular dose (MGD). Using these quantities we compared the optimal voltage that maximizes image quality in both the breast tomosynthesis and the standard mammography acquisition modes. The comparison between the two acquisition modes was performed for a W/Rh anode/filter combination using a 4.5 cm tissue-equivalent mammography phantom. Moreover, to check whether the equipment was quantum-noise limited, the relation of the relative noise to the detector dose was evaluated. Results showed that in the tomosynthesis acquisition mode the optimal voltage is 28 kV, whereas in standard mammography it is 30 kV. The automatic exposure control (AEC) of the system selects 28 kV as the optimal voltage for both DBT and DM. Monte Carlo simulations showed qualitative agreement with the AEC selection, since an optimal monochromatic energy of 20 keV was found for both DBT and DM. Moreover, the noise check showed that the system is not completely quantum-noise limited, which could explain the slight experimental difference in optimal voltage between DBT and DM. According to these results, the use of higher voltage settings is not justified for improving image quality during a DBT examination.
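A sketch of the SDNR computation on two regions of interest (ROIs) follows; the synthetic pixel values stand in for ROIs measured on the phantom image at each voltage setting, and the definition used (signal-background contrast over background noise) is one common convention.

    import numpy as np

    rng = np.random.default_rng(0)
    signal_roi = rng.normal(loc=110.0, scale=8.0, size=(50, 50))      # ROI over the detail
    background_roi = rng.normal(loc=100.0, scale=8.0, size=(50, 50))  # ROI over the background

    sdnr = abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()
    print(f"SDNR = {sdnr:.2f}")  # compare across kV settings to locate the optimum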

11.
Cardiovascular disease is the most common cause of death, accounting for 31% of deaths worldwide. As purely synthetic grafts entail concomitant anticoagulation and autologous veins are scarce, tissue-engineered vascular grafts are urgently needed. For successful in vitro cultivation of a bioartificial vascular graft, a suitable bioreactor should provide conditions comparable to vasculogenesis in the body. Such a system has been developed and characterized under continuous and pulsatile flow, and a variety of sensors has been integrated into the bioreactor to control parameters such as temperature, pressure up to 500 mbar, glucose up to 4.5 g/L, lactate, oxygen up to 150 mbar, and flow rate. Wireless data transfer (using the ZigBee specification based on the IEEE 802.15.4 standard) and multiple corresponding sensor signal processing platforms have been implemented as well. Ultrasound is used for touchless monitoring of the growing vascular structure as a quality control before implantation (maximum achieved ultrasound resolution: 65 μm at 15 MHz). To withstand the harsh conditions of steam sterilization (120°C for 20 min), all electronics were encapsulated. With such a comprehensive physiological conditioning, sensing, and imaging bioreactor system, all the requirements for successful cultivation of vascular grafts are now in place.

12.

Background  

The objectives of this pilot study were to evaluate treatment quality for the risk factors of hypertension, diabetes and hyperlipidemia, as well as the overall treatment quality, for patients on an internal nephrology ward. This evaluation included the collection of data concerning the quality of therapeutic drug monitoring, drug use and potential drug-drug interactions. Establishing such baseline information highlights areas in need of further therapeutic intervention and creates a foundation for improving patient care, a subject that could be addressed in future clinical pharmacy research projects.

13.
Since the components of a sample for open metabolomic analysis are unknown a priori, a pragmatic approach to method development has been taken in order to develop and select a chromatographic method suitable for high-throughput open metabolomic screening of urine by Ultra Performance Liquid Chromatography-Mass Spectrometry (UPLC-MS). A total of 848 injections of diluted rat urine were made onto a UPLC-ESI-ToF-MS system using several different gradient profiles and run times to determine a suitable method for the analysis of urine from male and female rats. Peak integration and multivariate data analysis were performed to investigate the quality of separation and the information obtained from these multiple analyses. A suitable 8 min method was selected and is now used routinely for open-profiling metabolomic analyses of urine. The use of a sample-relevant QC mix is also discussed.

14.
The integration of spatial information concerning animal species into static, rule-based, spatially explicit, non-probabilistic models for decision-making in landscape and regional planning yields generalised, habitat-describing landscape-structural parameters. As a basis for an individually developed model, we first discuss the general data and parametric requirements for developing a species-referenced, spatially explicit model for analysis and evaluation. The parameters necessary for assessing the habitat characteristics of birds in Central Europe are discussed on the basis of landscape and structural information, using the Corn Bunting (Emberiza calandra) as an example. A spatial analysis and assessment procedure supported by a geographical information system (GIS) was developed for this species, defining rules and assessment categories, and subsequently applied to an open agricultural landscape in Saxony-Anhalt. Within the study area of approximately 42.4 km2, 56 songbird perches were located (a density of 1.34 territories/km2). A comparison with the 45 territories mapped in 2004 indicated a good correlation with the model assumptions. Indeed, 16 of these 45 territories were only briefly occupied, and the establishment of breeding pairs was ascertained in only 17 territories. The analysis and assessment model presented here yielded realistic results from relatively little landscape-structural input data and is well suited to supporting decision-making in spatial planning. The model framework presented in this paper can be modified and transferred to other species.
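The flavour of such a static, rule-based, non-probabilistic model can be conveyed in a toy sketch; the two rules and thresholds below are invented for illustration and are not the paper's Corn Bunting rule set.

    import numpy as np

    # Two raster layers over a toy 4x4 landscape
    open_farmland = np.array([[1, 1, 0, 0],
                              [1, 1, 1, 0],
                              [0, 1, 1, 1],
                              [0, 0, 1, 1]], dtype=bool)    # land-use rule layer
    perch_density = np.array([[0.2, 1.5, 0.0, 0.0],
                              [1.8, 2.0, 0.4, 0.0],
                              [0.0, 1.1, 1.6, 0.3],
                              [0.0, 0.0, 1.2, 1.9]])        # perch structures per ha

    # Non-probabilistic rule combination: a cell is suitable where all rules hold
    suitable = open_farmland & (perch_density >= 1.0)
    print(suitable.astype(int))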

15.
Triplophysa is a fish genus endemic to the Tibetan Plateau in China. Triplophysa tibetana, which lives at a recorded altitude of ~4,000 m and plays an important role in the highland aquatic ecosystem, serves as an excellent model for investigating adaptation to high-altitude environments. However, evolutionary and conservation studies of T. tibetana have been limited by the scarce genomic resources for the genus Triplophysa. In the present study, we applied PacBio sequencing and the Hi-C technique to assemble the T. tibetana genome. A 652-Mb genome with 1,325 contigs and a contig N50 length of 3.1 Mb was obtained. Of these, 1,137 contigs were further assembled into 25 chromosomes, representing 98.7% of the assembly at the base level and 80.47% at the sequence-number level. Approximately 260 Mb of sequence, accounting for ~39.8% of the genome, was identified as repetitive elements, with DNA transposons (16.3%), long interspersed nuclear elements (12.4%) and long terminal repeats (11.0%) the most abundant repeat types. In total, 24,372 protein-coding genes were predicted in the genome, and ~95% of them were functionally annotated via searches of public databases. Using whole-genome sequence information, we found that T. tibetana diverged from its common ancestor with Danio rerio ~121.4 million years ago. The high-quality genome assembled in this work not only provides a valuable genomic resource for future population and conservation studies of T. tibetana, but also lays a solid foundation for further investigation into the mechanisms of environmental adaptation of endemic fishes in the Tibetan Plateau.

16.

Background  

Genomic analysis, particularly for less well-characterized organisms, is greatly assisted by comparative analyses between different types of genome maps and across species boundaries. Various providers publish a plethora of on-line resources collating genome mapping data from a multitude of species. Data sources range in scale and scope from small bespoke resources for particular organisms, through larger web resources containing data from multiple species, to large-scale bioinformatics resources providing access to data derived from genome projects for model and non-model organisms. The heterogeneity of the information held in these resources reflects both the technologies used to generate the data and the target users of each resource. Currently there is no common information exchange standard or protocol to enable access to and integration of these disparate resources; consequently, data integration and comparison must be performed in an ad hoc manner.

17.
MOTIVATION: Numerous database management systems have been developed for processing various taxonomic databases on biological classification or phylogenetic information. In this paper, we present an integrated system for dealing with interacting classifications and phylogenies concerning particular taxonomic groups. RESULTS: An information-theoretic view (the taxon view) has been applied to capture taxonomic concepts as taxonomic data entities. A data model suitable for supporting semantically interacting, dynamic views of hierarchic classifications, together with a query method for interacting classifications, has been developed. The concept of the taxon view and the data model can also be extended to carry phylogenetic information in phylogenetic trees. We have designed a prototype taxonomic database system called HICLAS (HIerarchical CLAssification System) based on the concept of the taxon view, and the data models and query methods have been designed and implemented. This system can be used effectively in the taxonomic revisionary process, especially when databases are constructed by specialists in particular groups, and it can be used to compare classifications and phylogenetic trees. AVAILABILITY: Freely available at the WWW URL: http://aims.cps.msu.edu/hiclas/ CONTACT: pramanik@cps.msu.edu; lotus@wipm.whcnc.ac.cn

18.
For the European Parliament and Commission to implement the Water Framework Directive (WFD), the water-quality indices that are currently used in Europe need to be compared and calibrated. This will facilitate the comparative assessment of ecological status throughout the European Union. According to the WFD, biologic indices should respond consistently to human impacts, using multimetric approaches and water-quality classification boundaries adjusted to a common set of normative definitions. The European Commission has started an intercalibration exercise to review biologic indices and harmonize class boundaries. We used data from rivers in Spain to compare the IBMWP (Iberian Biological Monitoring Working Party) index, which is commonly used by water authorities in Spain and by several research centers, with the Intercalibration Common Multimetric Index (ICM-Star), which was used as a standard in the intercalibration exercise. We also used data from Spanish rivers to compare the multimetric indices ICM-7 (based on quantitative data) and ICM-9 (based on qualitative data) with the IBMWP. ICM-7 and ICM-9 were proposed by the Mediterranean Geographical Intercalibration Group (Med-GIG). Additionally, we evaluated two new multimetric indices, developed specifically for macroinvertebrate communities inhabiting Mediterranean river systems. One of these is based on quantitative data (ICM-10), while the other is based on qualitative data (ICM-11a). The results show that the IBMWP index responds well to the stressor gradient present in our data, and correlates well with ICM-Star. Moreover, the IBMWP quality class boundaries were consistent with the intercalibration requirements of the WFD. However, multimetric indices showed a more linear relation with the stressor gradient in our data, and less variation in reference values. In addition, they may provide more statistical power for detecting potential environmental impacts. Multimetric indices produced similar results for quantitative and qualitative data. Thus, ICM-10 (also named IMMi-T) and ICM-11a (also named IMMi-L) indices could be used to meet European Commission requirements for assessing the water quality in Spanish Mediterranean rivers.
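For readers unfamiliar with BMWP-type indices such as the IBMWP: each macroinvertebrate family recorded at a site contributes a fixed tolerance score, and the site score is their sum, which is why qualitative (presence-only) data suffice. The sketch below uses placeholder scores, not the official IBMWP table.

    family_scores = {"Heptageniidae": 10, "Gammaridae": 6,
                     "Baetidae": 4, "Chironomidae": 2}      # placeholder tolerance scores

    site_families = ["Heptageniidae", "Baetidae", "Chironomidae"]  # qualitative field data

    site_score = sum(family_scores[f] for f in set(site_families))
    print(site_score)   # higher totals indicate better water quality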

19.
A screening methodology is presented that utilizes the linear structure of the deterministic life cycle inventory (LCI) model. The methodology ranks each input data element by the amount it contributes to the final output. The identified data elements, along with their positions in the deterministic model, are then sorted into descending order of their individual contributions. This enables practitioners and model users to identify the input data elements that contribute the most in the inventory stage. Percentages of the top-ranked data elements are then selected, and their corresponding data quality index (DQI) values are upgraded in the stochastic LCI model. Monte Carlo simulations are run and used to compare the output variance of the original stochastic model with that of the modified stochastic model. The methodology is applied to four real-world beverage delivery system LCA inventory models for verification. This research assists LCA practitioners by streamlining the conversion of a deterministic LCI model to a stochastic model form. Model users and decision-makers can benefit from the reduction in output variance and the increased ability to discriminate between product system alternatives.
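Because the deterministic LCI model is linear, each element's contribution is simply its value times its coefficient, so the ranking step reduces to a sort. The sketch below uses invented numbers.

    inputs = {"electricity": 120.0, "diesel": 30.0, "cardboard": 80.0, "pet_resin": 200.0}
    coefficients = {"electricity": 0.45, "diesel": 2.7, "cardboard": 0.9, "pet_resin": 2.1}

    contributions = {k: inputs[k] * coefficients[k] for k in inputs}
    total = sum(contributions.values())

    # Descending order of contribution: the top-ranked elements are the ones whose
    # DQI is worth upgrading before running the stochastic (Monte Carlo) model
    for name, c in sorted(contributions.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:12s} {c:8.1f}  ({100 * c / total:.1f}%)")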

20.

Background  

In a high-throughput setting, effective flow cytometry data analysis depends heavily on proper data preprocessing. While the usual preprocessing steps of quality assessment, outlier removal, normalization, and gating have received considerable scrutiny from the community, the influence of data transformation on the output of high-throughput analysis has been largely overlooked. Flow cytometry measurements can vary over several orders of magnitude; cell populations can have variances that depend on their mean fluorescence intensities and may exhibit heavily skewed distributions. Consequently, the choice of data transformation can influence the output of automated gating. An appropriate data transformation aids in the visualization and gating of cell populations across the range of the data. Experience shows that the choice of transformation is data specific. Our goal here is to compare the performance of different transformations applied to flow cytometry data in the context of automated gating in a high-throughput, fully automated setting. We examine the most common transformations used in flow cytometry, including the generalized hyperbolic arcsine, biexponential, linlog, and generalized Box-Cox, all within the BioConductor flowCore framework that is widely used in high-throughput, automated flow cytometry data analysis. All of these transformations have adjustable parameters whose effects upon the data are non-intuitive for most users. By making some modelling assumptions about the transformed data, we develop maximum likelihood criteria to optimize parameter choice for these different transformations.
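As an example of the adjustable parameters involved, the generalized hyperbolic arcsine transformation takes a cofactor that sets where it crosses over from near-linear to logarithmic behaviour. The sketch below uses a cofactor of 150, a commonly cited default for conventional flow cytometers, purely for illustration.

    import numpy as np

    def arcsinh_transform(x, cofactor=150.0):
        return np.arcsinh(x / cofactor)  # ~linear near zero, ~logarithmic for large values

    raw = np.array([-50.0, 0.0, 100.0, 1_000.0, 100_000.0])  # intensities spanning decades
    print(arcsinh_transform(raw))  # compressed range suitable for visualization and gating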
