Similar Articles
1.

Purpose

This paper aims to assess, at the system level, the appropriateness of different Life Cycle Inventory (LCI) data sets (including default models) selected by the Life Cycle Assessment (LCA) practitioner. Uncertainty measurements are therefore applied to a few key parameters of the LCI data set rather than to measured input values. The approach provides a pragmatic method for approximating and reducing the uncertainties that result from a lack of information on a specific step.

Methods

The method proposed in this paper to assess percentage errors in appropriateness comprises three main steps. First, different systems are assessed, each including different versions of the same process with technological or geographical changes. Second, a hierarchical cluster analysis (HCA) or a principal component analysis (PCA) is performed to identify the main variables influencing the results. Third, a multivariate analysis of variance (MANOVA) assesses the significance of the main variables on the results. An appropriate default model can then be developed by fixing the variables that introduce high variation in the results.

Results and discussion

When comparing the same spinning process located in different countries, the HCA enabled us to identify the electricity mix as the main variable influencing the results. The “world average default model” proved inappropriate for representing country-specific locations. When comparing spinning technologies, the PCAs identified the electricity and cotton inputs required as the main variables influencing the results and explained the variations in results due to technological changes. The HCA performed on different yarn manufacturing procedures identified location and yarn thickness as the two main variables influencing the results. The MANOVA assessed the percentage marginal variance (PMV) explained by the variable location at between 85 % and 92 % for four impact categories. The MANOVA performed on different fabric manufacturing systems assessed the PMV explained by the variables harvest, spinning, and weaving locations at above 68 % for all impact categories. The HCA and MANOVA analyses helped design an appropriate “technological average default model.”

Conclusions

From the identification of the main influencing variables (HCA and PCA) to the quantitative appropriateness assessment (MANOVA) and the development of appropriate default models, the method has proven effective in assisting the LCA practitioner in modeling textile manufacturing systems, as well as other worldwide multi-step manufacturing systems.
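A minimal sketch of this three-step screening in Python, with simulated impact scores and illustrative variable and column names (none of the data or names come from the paper):

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 40
# Hypothetical LCA impact scores for spinning-system variants.
df = pd.DataFrame({
    "location": rng.choice(["CN", "IN", "TR", "US"], size=n),
    "technology": rng.choice(["ring", "open_end"], size=n),
    "gwp": rng.normal(size=n),
    "acidification": rng.normal(size=n),
    "eutrophication": rng.normal(size=n),
    "water_use": rng.normal(size=n),
})
scores = df[["gwp", "acidification", "eutrophication", "water_use"]].to_numpy()

# Steps 1-2: cluster the system variants and inspect the main axes of variation.
clusters = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
pcs = PCA(n_components=2).fit_transform(scores)

# Step 3: MANOVA -- which design variables significantly structure the results?
fit = MANOVA.from_formula(
    "gwp + acidification + eutrophication + water_use ~ location + technology",
    data=df)
print(fit.mv_test())
```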

2.
Interrelations between several forms of group variation (FGVs) (age, sex, geographic, inter-species, and among-breed differences) of 12 to 15 measurable skull traits are studied in 6 mammal species (pine marten, polar fox, Przewalski's horse, and 3 jird species) by means of dispersion analysis (model III, MANOVA). The above FGVs are treated as factors in the MANOVA, and the skull traits as dependent variables. To obtain commensurable estimates for the FGVs, each is assessed numerically as the portion of its dispersion within the entire morphological disparity, defined for each character (or set of characters) by MANOVA. The data obtained indicate a wide diversity of interrelations between FGVs. It is shown that statistical analysis of the significance of joint effects of FGVs is no substitute for analysis of the numerical interrelations of their dispersion portions. It is concluded that it is unproductive to study such interrelations as simple "statistical regularities" like the Kluge-Kerfoot phenomenon, so character sets should not be treated as statistical ensembles. A content-wise null model for FGVs of measurable traits is formulated, according to which there is a "background" age variation of which the other FGVs are derivatives. Accordingly, in the absence of other factors structuring the morphological disparity under investigation, a positive correlation between FGVs is to be anticipated (strong succession). Where significant deviations from the postulated correlation are observed, other factors regulating the respective FGVs, irreducible to age variation, must be supposed (weak succession). Possible interpretations of interrelations between age variation and some other FGVs in carnivores are considered. Craniological variation in Przewalski's horse is only slightly affected by maintenance conditions under domestication; a significant influence of other factors is to be supposed. The negative correlation between geographic and inter-species differences in the jirds (genus Meriones) could be interpreted as evidence for speciation following the punctuated equilibrium model.
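The central quantity here, a factor's portion of dispersion within the overall disparity, amounts to an eta-squared-style ratio per trait. A minimal single-trait sketch in Python (factor levels and effect sizes are invented for illustration):

```python
# Estimate each factor's share of total dispersion for one skull trait;
# the multivariate version repeats this across traits (or trait sets).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "age": rng.choice(["juvenile", "adult"], size=120),
    "sex": rng.choice(["m", "f"], size=120),
    "region": rng.choice(["north", "south"], size=120),
})
df["trait"] = rng.normal(size=120) + (df["age"] == "adult") * 0.8

model = smf.ols("trait ~ age + sex + region", data=df).fit()
tbl = anova_lm(model, typ=2)
# Portion of total dispersion per FGV (the Residual row is what is left over).
portions = tbl["sum_sq"] / tbl["sum_sq"].sum()
print(portions)
```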

3.
This review focuses on the analysis of temporal beta diversity, the variation in community composition over time in a study area. Temporal beta diversity is measured by the variance of the multivariate community composition time series, and that variance can be partitioned using appropriate statistical methods. Some of these methods are classical, such as simple or canonical ordination, whereas others are recent, including the methods of temporal eigenfunction analysis developed for multiscale exploration (i.e. addressing several scales of variation) of univariate or multivariate response data, which are, to our knowledge, reviewed here for the first time. These methods are illustrated with ecological data from 13 years of benthic surveys in Chesapeake Bay, USA. The following methods are applied to the Chesapeake data: distance-based Moran's eigenvector maps, asymmetric eigenvector maps, scalogram, variation partitioning, multivariate correlogram, multivariate regression tree, and two-way MANOVA to study temporal and space–time variability. Local (temporal) contributions to beta diversity (LCBD indices) are computed and analysed graphically and by regression against environmental variables, and the role of species in determining the LCBD values is analysed by correlation analysis. A tutorial detailing the analyses in the R language is provided in an appendix.
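LCBD indices have a compact matrix form (in the sense of Legendre & De Cáceres): after a suitable transformation, total beta diversity is the total variance of the community table, and each sampling time's LCBD is its share of that variance. The paper's own tutorial is in R; a minimal Python sketch with simulated counts:

```python
# Local contributions to beta diversity (LCBD) from a community matrix Y
# (rows = sampling times, columns = species), after Hellinger transformation.
import numpy as np

rng = np.random.default_rng(2)
Y = rng.poisson(3.0, size=(13, 20)).astype(float)  # 13 years x 20 taxa

H = np.sqrt(Y / Y.sum(axis=1, keepdims=True))   # Hellinger transformation
S = H - H.mean(axis=0)                          # center each species column
ss_total = (S ** 2).sum()                       # total beta diversity (SS form)
lcbd = (S ** 2).sum(axis=1) / ss_total          # one LCBD per year; sums to 1

print(lcbd.round(3), lcbd.sum())
```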

4.
Biplots for multifactorial analysis of distance
Krzanowski WJ. Biometrics 2004, 60(2): 517-524
Many data sets in practice fit a multivariate analysis of variance (MANOVA) structure but do not accord with the MANOVA assumptions required for their analysis. One way forward is to calculate the matrix of dissimilarities or distances between every pair of individuals, and then to conduct an analysis of distance on the resulting data. Various metric scaling plots can be used to interpret the results of the analysis. However, developments of this approach to date have focused mainly on the individuals in the sample, and little attention has been paid to assessing the influence of the original variables on the results. The present article attempts to rectify this omission. We discuss the inclusion of biplots on all forms of metric scaling representations in the analysis of distance. Exact biplots will often be nonlinear, so we propose a simple linear approximation and contrast it with other simple linear possibilities. An example from ecology illustrates the methodology.
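A rough sketch of the ingredients: classical metric scaling of a distance matrix, plus one simple linear biplot option (variable arrows from correlations with the ordination axes). This illustrates the general idea under simulated data, not necessarily the paper's exact construction:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))              # 30 individuals, 5 variables

D2 = squareform(pdist(X)) ** 2            # squared Euclidean distances
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ D2 @ J                     # Gower double-centering
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1][:2]        # two largest eigenvalues
coords = vecs[:, order] * np.sqrt(vals[order])   # 2-D metric scaling plot

# Simple linear biplot arrows: correlation of each variable with each axis.
arrows = np.array([[np.corrcoef(X[:, j], coords[:, k])[0, 1]
                    for k in range(2)] for j in range(X.shape[1])])
print(arrows.round(2))
```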

5.
Metabolomic technologies produce complex multivariate datasets, and researchers face the daunting task of extracting information from these data. Principal component analysis (PCA) has been widely applied in the field of metabolomics to reduce data dimensionality and to visualise trends within complex data. Although PCA is very useful, it cannot handle multi-factorial experimental designs, and clear trends of biological interest are often not observed when plotting various PC combinations. Even when patterns are observed, PCA provides no measure of their significance. Multivariate analysis of variance (MANOVA) applied to these PCs enables the statistical evaluation of main treatments and, more importantly, their interactions within the experimental design. The power and scope of MANOVA are demonstrated through two factorially designed metabolomic investigations using Arabidopsis ethylene signalling mutants and their wild-type. One investigation has multiple experimental factors, including challenge with the economically important pathogen Botrytis cinerea, as well as replicate experiments, while the second has different sample preparation methods and one level of replication 'nested' within the design. In both investigations there are specific factors of biological interest, and there are also factors incorporated within the experimental design that affect the data. The versatility of MANOVA is displayed using data from two different metabolomic techniques: profiling using direct injection mass spectrometry (DIMS) and fingerprinting using Fourier transform infrared (FT-IR) spectroscopy. MANOVA found significant main effects and interactions in both experiments, allowing a more complete and comprehensive interpretation of the variation within each investigation than with PCA alone. Canonical variate analysis (CVA) was applied to investigate these effects and their biological significance. In conclusion, the application of MANOVA followed by CVA provided more information than PCA alone and proved a valuable statistical addition in the overwhelming task of analysing metabolomic data.
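A hedged sketch of the PCA-then-MANOVA-then-CVA workflow in Python, using simulated spectra and invented factor names modeled loosely on the design described:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
spectra = rng.normal(size=(48, 200))           # 48 samples x 200 FT-IR bins
design = pd.DataFrame({
    "genotype": np.repeat(["wt", "mutant"], 24),
    "infection": np.tile(np.repeat(["mock", "botrytis"], 12), 2),
})

# Reduce the spectra to a handful of PCs.
pcs = PCA(n_components=5).fit_transform(spectra)
df = pd.concat([design,
                pd.DataFrame(pcs, columns=[f"pc{i}" for i in range(5)])], axis=1)

# Factorial MANOVA on the PC scores: main effects plus interaction.
fit = MANOVA.from_formula(
    "pc0 + pc1 + pc2 + pc3 + pc4 ~ genotype * infection", data=df)
print(fit.mv_test())

# CVA (equivalently LDA) on the combined treatment groups to display effects.
groups = design["genotype"] + "_" + design["infection"]
cva_scores = LinearDiscriminantAnalysis(n_components=2).fit_transform(pcs, groups)
```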

6.
There is considerable interest in comparing genetic variance–covariance matrices (G matrices). However, existing methods are difficult to implement and cannot readily be extended to incorporate the effects of other variables such as habitat, sex, or location. In this paper I present a method based on MANOVA that can be carried out using only standard statistical packages (code for the method in S-PLUS is available from the author). The crux of the approach is to use the jackknife to estimate pseudovalues of the matrix estimates; these pseudovalues can then be used as data points in a MANOVA. I illustrate the method using two published datasets: (1) variation in G matrices resulting from differences in rearing condition, species, and sex in the crickets Gryllus firmus and G. pennsylvanicus; and (2) variation in G matrices associated with habitat and history in the amphipod Gammarus minus.
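The jackknife-pseudovalue trick is easy to prototype: leave each individual out, recompute the distinct (co)variance elements, form pseudovalues, and feed them to a MANOVA as data points. A minimal sketch with simulated trait data (species names from the abstract, data invented):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

def cov_elements(X):
    """Distinct elements of the covariance matrix of X (upper triangle)."""
    C = np.cov(X, rowvar=False)
    iu = np.triu_indices(C.shape[0])
    return C[iu]

def pseudovalues(X):
    """Jackknife pseudovalues: n*theta_full - (n-1)*theta_leave-one-out."""
    n = len(X)
    full = cov_elements(X)
    loo = np.array([cov_elements(np.delete(X, i, axis=0)) for i in range(n)])
    return n * full - (n - 1) * loo        # one pseudovalue vector per row

rng = np.random.default_rng(5)
groups = {"firmus": rng.normal(size=(40, 3)),        # 3 traits per individual
          "pennsylvanicus": rng.normal(size=(40, 3))}

rows = []
for name, X in groups.items():
    for row in pseudovalues(X):
        rows.append({"group": name, **{f"e{j}": v for j, v in enumerate(row)}})
df = pd.DataFrame(rows)

resp = " + ".join(f"e{j}" for j in range(6))  # 6 distinct elements for 3 traits
print(MANOVA.from_formula(f"{resp} ~ group", data=df).mv_test())
```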

7.
The nested case-control (NCC) design is frequently used in epidemiological studies as a cost-effective subcohort sampling strategy for biomarker research. This sampling strategy, however, creates challenges for data analysis because of outcome-dependent missingness in the biomarker measurements. In this paper, we propose inverse probability weighted (IPW) methods for making inference about the prognostic accuracy of a novel biomarker for predicting future events with data from NCC studies. The consistency and asymptotic normality of these estimators are derived using empirical process theory and convergence theorems for sequences of weakly dependent random variables. Simulations and an analysis of Framingham Offspring Study data suggest that the proposed methods perform well in finite samples.
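The weighting idea, though not the paper's exact estimator for prognostic-accuracy measures, can be illustrated with a weighted regression in which sampled subjects are up-weighted by the inverse of their known inclusion probability:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 5000
biomarker = rng.normal(size=n)
event = rng.binomial(1, 1 / (1 + np.exp(-(-2 + biomarker))))

# NCC-style sampling: all cases are kept, plus roughly 10% of controls.
p_incl = np.where(event == 1, 1.0, 0.10)
sampled = rng.random(n) < p_incl

X = sm.add_constant(biomarker[sampled])
w = 1.0 / p_incl[sampled]                 # IPW weights
fit = sm.GLM(event[sampled], X,
             family=sm.families.Binomial(),
             freq_weights=w).fit()
# Point estimates approximately recover (-2, 1); proper standard errors
# would need a weight-adjusted (e.g. robust) variance estimator.
print(fit.params)
```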

8.
The purpose of this statistical analysis is to determine which factors are the major contributors to bacterial contamination of recovered human cadaveric tissue. In this study we analyzed factors that could contribute to an increased bacterial bioburden in recovered tissues using the following independent variables: (1) the physical recovery environment; (2) recovery before or after an autopsy; (3) the length of time from death to recovery; (4) the cause of death; (5) the length of time to complete recovery; (6) the number of staff involved with the tissue recovery; and (7) the impact of organ and skin recovery on musculoskeletal contamination rates.

In these analyses we used analysis of variance of main effects on data from seven tissue banks. The analysis covered 1036 donors, each having multiple cultures, to better control for the inherently large variation in this type of data. We examined several dependent variables; the most useful was percent positive cultures.

The results for the combined data differed from those obtained by analyzing the tissue banks individually. The differences in each tissue bank's procedures and techniques were responsible for most of the variability. Depending on how the data were organized, statistically significant increases in bioburden were seen with: (1) recoveries after autopsy; (2) location of the recovery; (3) length of time taken for a recovery; (4) size of the recovery team; and (5) organ and skin recovery prior to musculoskeletal recovery.

In conclusion, statistical analysis of recovery cultures can be a powerful tool that may be used to indicate problems within any bank's recovery procedures or techniques.
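A minimal sketch of the analysis described: a main-effects ANOVA with percent positive cultures as the dependent variable. Factor names follow the list above; the data are simulated:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
n = 1036
df = pd.DataFrame({
    "autopsy": rng.choice(["before", "after"], size=n),
    "location": rng.choice(["morgue", "OR", "funeral_home"], size=n),
    "team_size": rng.choice(["small", "large"], size=n),
})
# Simulated per-donor outcome with a small autopsy effect built in.
df["pct_positive"] = (rng.beta(2, 8, size=n) * 100
                      + (df["autopsy"] == "after") * 5)

model = smf.ols("pct_positive ~ autopsy + location + team_size", data=df).fit()
print(anova_lm(model, typ=2))   # main effects only, as in the study
```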

9.
Interaction structure analysis (ISA) is proposed as a nonparametric procedure for the evaluation of uni- and multivariate analysis of variance models. Main effects and interactions of (independent) treatment variables are replaced by interaction types, where types are defined as those treatment–response combinations that occur significantly more often than expected under a null hypothesis (H0) of no treatment effects. The application of ISA and the typological interpretation of ISA results are illustrated for an ANOVA design from toxicology and for a MANOVA design from psychopharmacology.
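A rough sketch of the "type" search in the configural-frequency-analysis sense (cells of the treatment x response cross-classification observed significantly more often than expected under independence); the published ISA procedure itself is more elaborate:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(8)
treatment = rng.integers(0, 3, size=300)                 # 3 treatment levels
response = (treatment == 2) & (rng.random(300) < 0.6)    # level 2 raises response

# Cross-classify treatment against the binary response.
table = np.zeros((3, 2), dtype=int)
for t, r in zip(treatment, response):
    table[t, int(r)] += 1

n = table.sum()
for i in range(3):
    for j in range(2):
        # Expected cell probability under H0 of independence.
        p0 = table[i].sum() / n * table[:, j].sum() / n
        test = binomtest(int(table[i, j]), n=int(n), p=p0, alternative="greater")
        if test.pvalue < 0.05 / table.size:              # Bonferroni over cells
            print(f"type: treatment={i}, response={j}, p={test.pvalue:.2g}")
```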

10.
Multiple imputation (MI) is used to handle missing at random (MAR) data. Despite warnings from statisticians, continuous variables are often recoded into binary variables. With MI it is important that the imputation and analysis models are compatible: variables should be imputed in the same form in which they appear in the analysis model. For an encoded binary variable, more accurate imputations may be obtained by imputing the underlying continuous variable. We conducted a simulation study to explore how best to impute a binary variable created from an underlying continuous variable. We generated a completely observed continuous outcome associated with an incomplete binary covariate that is a categorized version of an underlying continuous covariate, and an auxiliary variable associated with the underlying continuous covariate. We simulated data with several sample sizes, and set 25% and 50% of data in the covariate to MAR dependent on the outcome and the auxiliary variable. We compared the performance of five imputation methods: (a) imputation of the binary variable using logistic regression; (b) imputation of the continuous variable using linear regression, followed by categorizing into the binary variable; (c, d) imputation of both the continuous and binary variables using fully conditional specification (FCS) and multivariate normal imputation; and (e) substantive-model-compatible (SMC) FCS. Bias and standard errors were large when only the continuous variable was imputed. The other methods performed adequately. Imputation of both the binary and continuous variables using FCS often encountered mathematical difficulties. We recommend the SMC-FCS method, as it performed best in our simulation studies.
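Methods (a) and (b) are simple to contrast directly; a single stochastic imputation of each is sketched below (real MI repeats this M times and pools, and SMC-FCS needs dedicated software such as the R package smcfcs). Data generation and thresholds are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(9)
n = 2000
z = rng.normal(size=n)                     # underlying continuous covariate
aux = z + rng.normal(scale=0.5, size=n)    # auxiliary variable
y = 0.7 * (z > 0) + rng.normal(size=n)     # outcome depends on binary version
miss = rng.random(n) < 0.25                # 25% of the covariate set missing

obs = ~miss
F = np.column_stack([y, aux])              # predictors in the imputation model

# (a) impute the binary variable directly via logistic-regression draws.
pa = LogisticRegression().fit(F[obs], z[obs] > 0).predict_proba(F[miss])[:, 1]
bin_a = rng.random(miss.sum()) < pa

# (b) impute the continuous variable with residual noise, then categorise.
reg = LinearRegression().fit(F[obs], z[obs])
resid_sd = np.std(z[obs] - reg.predict(F[obs]))
z_b = reg.predict(F[miss]) + rng.normal(scale=resid_sd, size=miss.sum())
bin_b = z_b > 0

print(bin_a.mean(), bin_b.mean(), (z[miss] > 0).mean())  # compare to truth
```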

11.
Although the shape of fish scales is useful for determining stock membership, the roles of extrinsic (e.g. habitat, food type) and intrinsic (e.g. growth) factors in determining variation in fish scale shape have not been established. This study examined whether fish scale shape changes as a result of compensatory growth in juveniles of the cyprinid roach, Rutilus rutilus (L.), reared on a fish farm in the UK, using geometric morphometric methods. Sufficient evidence was generated to accept the assumption that differences in food availability and type between growing-out facilities resulted in compensatory growth, and that this was sufficient to cause scale shape differences. This was tested using multivariate analysis of variance (MANOVA) with the principal component scores (PCs) of specimens as dependent variables, to investigate whether fish scale shape (the configuration of landmark coordinates scaled, translated, and rotated), form (the configuration of landmark coordinates translated and rotated but not scaled), and allometry-free shape (scale shape allometrically adjusted according to length) are related to holding facility (as a fixed factor). Cross-validated discriminant analysis was used to assess and compare the efficacy of the shape, form, and allometry-free information. Identification rates are much better than chance with allometry-free data and with shape alone, and classification improves when size is taken into account.
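A minimal sketch of the cross-validated discriminant step, assuming landmark superimposition has already been done and `pcs` stands in for the resulting shape PC scores (all data simulated):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)
facility = rng.integers(0, 3, size=150)            # 3 growing-out facilities
pcs = rng.normal(size=(150, 8))                    # shape PC scores (stand-in)
pcs[:, 0] += facility * 0.8                        # facility signal on PC1

# Cross-validated classification: identification rate vs chance.
acc = cross_val_score(LinearDiscriminantAnalysis(), pcs, facility, cv=10)
print(f"cross-validated identification rate: {acc.mean():.2f}")
print(f"chance level: {1/3:.2f}")
```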

12.

Background  

Accurate methods for extracting meaningful patterns from high-dimensional data have become increasingly important with the recent generation of data types containing measurements across thousands of variables. Principal components analysis (PCA) is a linear dimensionality reduction (DR) method that is unsupervised in that it relies only on the data; projections are calculated in Euclidean or a similar linear space and do not use tuning parameters to optimize the fit to the data. However, relationships within sets of nonlinear data types, such as biological networks or images, are frequently mis-rendered into a low-dimensional space by linear methods. Nonlinear methods, in contrast, attempt to model important aspects of the underlying data structure, often requiring one or more parameters to be fitted to the data type of interest. In many cases, the optimal parameter values vary when different classification algorithms are applied to the same rendered subspace, making the results of such methods highly dependent upon the type of classifier implemented.

13.
If the variables in a MANOVA problem can be arranged according to the order of their importance, then J. Roy's (1958) step-down procedure may be more appropriate than the conventional invariant inference techniques. However, it may often be possible only to identify subsets such that variables within subsets are equally important while the subsets themselves are of unequal importance. In experimental situations, it is common to have one set of variables of primary interest and another of “add-on” variables. The step-down reasoning is extended to such cases, and a set of simultaneous confidence bounds based upon the procedure that uses the largest root criterion at each stage is derived. The confidence bounds are on all linear functions of the means that do not involve nuisance parameters, and are therefore suitable for studying the configuration of means. This method yields shorter intervals for contrasts among the means of the variables of primary interest compared with the conventional intervals based upon the largest root. The method is illustrated using Barnard's (1935) data on skull characters.
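The step-down logic, testing each block of variables with the earlier, more important block entered as covariates, can be reproduced by conditioning in a MANOVA formula; Roy's greatest root is then read off at each stage. A sketch with hypothetical variable names and simulated data:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(11)
n = 90
df = pd.DataFrame(rng.normal(size=(n, 4)), columns=["y1", "y2", "y3", "y4"])
df["group"] = np.repeat(["a", "b", "c"], n // 3)

# Stage 1: test the group effect on the primary block (y1, y2).
print(MANOVA.from_formula("y1 + y2 ~ group", data=df).mv_test())

# Stage 2: test the add-on block (y3, y4) with the primary block entered
# as covariates, which conditions the test as the step-down procedure requires.
print(MANOVA.from_formula("y3 + y4 ~ group + y1 + y2", data=df).mv_test())
```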

14.

Background  

The most popular methods for significance analysis of microarray data are well suited to finding genes differentially expressed across predefined categories. However, identifying features that correlate with continuous dependent variables is more difficult using these methods, and the long lists of significant genes returned are not easily probed for co-regulations and dependencies. Dimension reduction methods are much used in the microarray literature for classification or for obtaining low-dimensional representations of data sets. These methods have an additional interpretative strength that is often not fully exploited when expression data are analysed. In addition, significance analysis may be performed directly on the model parameters to find genes that are important for any number of categorical or continuous responses. We introduce a general scheme for the analysis of expression data that combines significance testing with the interpretative advantages of dimension reduction methods. This approach is applicable both to explorative analysis and to classification and regression problems.

15.
16.
Spatial interpolation methods have been applied in many disciplines. Many factors affect the performance of these methods, but there are no consistent findings about their effects. In this study, we use comparative studies in the environmental sciences to assess performance and to quantify the impacts of data properties on that performance. Two new measures are proposed to compare the performance of methods applied to variables with different units or scales. A total of 53 comparative studies were assessed, and the performance of the 72 methods and sub-methods they compare is analysed. The impacts of sample density, data variation, and sampling design on the estimations of 32 methods are quantified using data derived from their application to 80 variables. Inverse distance weighting (IDW), ordinary kriging (OK), and ordinary co-kriging (OCK) are the most frequently used methods. Data variation is a dominant impact factor and has significant effects on the performance of the methods: as variation increases, the accuracy of all methods decreases, and the magnitude of the decrease is method dependent. Irregularly spaced sampling designs may improve the accuracy of estimation. The effect of sampling density on the performance of the methods was found not to be significant. The implications of these findings are discussed.
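IDW, the most frequently used method in the reviewed studies, is a few lines of array code: each prediction is a distance-weighted average of the sampled values, with the power parameter controlling how quickly influence decays. A minimal sketch with simulated locations:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighted prediction at xy_new from (xy_obs, z_obs)."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(12)
xy = rng.uniform(0, 100, size=(50, 2))            # sample locations
z = np.sin(xy[:, 0] / 20) + 0.1 * rng.normal(size=50)
grid = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]])
print(idw(xy, z, grid, power=2.0))
```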

17.
18.

Background

In hemodialysis (HD) patients, deviations from the KDIGO-recommended values of individual parameters (phosphate, calcium, or parathyroid hormone, PTH) are associated with increased mortality. However, it is widely accepted that these parameters are not regulated independently of each other and that therapy aimed at correcting one parameter often modifies the others. The aim of the present study is to quantify the degree of association between the parameters of chronic kidney disease and mineral bone disease (CKD-MBD).

Methods

Data were extracted from a cohort of 1758 adult HD patients between January 2000 and June 2013, yielding a total of 46,141 records over a 10-year follow-up. We used an advanced data analysis method, Random Forest (RF), which is based on a self-learning procedure with axioms similar to those utilized in the development of artificial intelligence. This approach is particularly useful when the variables analyzed are closely dependent on each other.

Results

The analysis revealed a strong association between PTH and phosphate that was superior to that between PTH and calcium. Classical linear regression between PTH and phosphate gives a correlation coefficient of 0.27 (p < 0.001), so the possibility of predicting PTH changes from phosphate modification is marginal. Alternatively, RF assumes that changes in phosphate will cause modifications in other associated variables (calcium and others) that may also affect PTH values. Using RF, the correlation coefficient between changes in serum PTH and phosphate is 0.77 (p < 0.001); thus, the power of prediction is markedly increased. The effect of therapy on the biochemical variables was also analyzed using RF.

Conclusion

Our results suggest that the analysis of the complex interactions between mineral metabolism parameters in CKD-MBD may demand a more advanced data analysis system such as RF.
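A hedged sketch of the RF-based association analysis: predict PTH from the other parameters and correlate out-of-sample predictions with observations. The study used a much richer covariate set; values and effect sizes here are simulated:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(13)
n = 3000
phosphate = rng.normal(4.5, 1.2, size=n)
calcium = rng.normal(9.2, 0.7, size=n)
pth = 150 + 60 * phosphate - 20 * calcium + rng.normal(scale=80, size=n)

X = np.column_stack([phosphate, calcium])
rf = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(rf, X, pth, cv=5)    # out-of-sample predictions
print(np.corrcoef(pred, pth)[0, 1])           # RF-based association strength
print(rf.fit(X, pth).feature_importances_)    # phosphate vs calcium weight
```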

19.
Self-evidently, research in areas supporting "systems biology", such as genomics, proteomics, and metabonomics, is critically dependent on the generation of sound analytical data. Metabolic phenotyping using LC-MS-based methods is currently at a relatively early stage of development, and approaches to ensure data quality are still developing. As part of studies on the application of LC-MS in metabonomics, the within-day reproducibility of LC-MS, with both positive and negative electrospray ionization (ESI), has been investigated using a standard "quality control" (QC) sample. The results showed that the first few injections on the system were not representative and should be discarded, and that reproducibility was critically dependent on signal intensity. On the basis of these findings, an analytical protocol for the metabonomic analysis of human urine has been developed, with proposed acceptance criteria based on a step-by-step assessment of the data. Short-term sample stability for human urine was also assessed. Samples were stable for at least 20 h at 4 °C in the autosampler while queuing for analysis. Samples stored at either −20 or −80 °C for up to 1 month were indistinguishable on subsequent LC-MS analysis. Overall, by careful monitoring of the QC data, it is possible to demonstrate that the within-day reproducibility of LC-MS is sufficient to ensure data quality in global metabolic profiling applications.
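A minimal sketch of the QC-based acceptance logic described: discard the first conditioning injections, then compute per-feature relative standard deviations across the remaining QC injections. The 30% threshold is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(14)
qc = rng.normal(loc=1000, scale=50, size=(12, 500))  # 12 QC runs x 500 features
qc[:3] *= 0.9                   # early injections drift, as reported

stable = qc[3:]                 # discard the first, non-representative runs
rsd = stable.std(axis=0, ddof=1) / stable.mean(axis=0) * 100
print(f"features with RSD < 30%: {(rsd < 30).mean():.0%}")
```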

20.
We review some approaches to extreme value analysis in the context of biometrical applications. Classical extreme value analysis is based on i.i.d. random variables. Two different general methods are applied, and these are discussed together with biometrical examples. Various estimation, testing, and goodness-of-fit procedures for applications are discussed. Furthermore, some non-classical situations are considered in which the data are possibly dependent, a non-stationary behavior is observed in the data, or the observations are not univariate. A few open problems are also stated.
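The classical block-maxima route can be sketched in a few lines: fit a generalised extreme value (GEV) distribution to a series of maxima and read off a return level. Data here are simulated for illustration:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(15)
# Hypothetical biometrical series: annual maxima of some measurement.
annual_max = rng.gumbel(loc=10.0, scale=2.0, size=60)

shape, loc, scale = genextreme.fit(annual_max)   # maximum-likelihood fit
rl_100 = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"100-year return level: {rl_100:.2f}")
```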

