Similar Articles
20 similar articles found (search time: 15 ms)
1.
Goal and Background  LCIA procedures that have been used in the South African manufacturing industry include the CML, Ecopoints, EPS and Eco-indicator 95 and 99 procedures. The aim of this paper is to evaluate and compare the applicability of these European LCIA procedures within the South African context, using a case study. Methods  The five European methods have been evaluated based on the applicability of their respective classification, characterisation, normalisation and weighting approaches to the South African situation. Impact categories have been grouped into air, water, land and mined abiotic resources for evaluation purposes. The evaluation and comparison are further based on a cradle-to-gate Screening Life Cycle Assessment (SLCA) case study of the production of dyed two-fold wool yarn in South Africa. Results and Discussion  Where land is considered as a separate category (CML, Eco-indicator 99 and EPS), the case study highlights this inventory constituent as the most important. Similarly, water usage is shown to be the second most important in the one LCIA procedure (EPS) where it is taken into account. However, the impact assessment modelling for these categories may not capture the variance in South African ecosystems. If land and water are excluded from the interpretation, air emissions, coal usage, ash disposal, pesticides and chrome emissions to water are the important constituents in the South African wool industry. Conclusions  In most cases the impact categories and procedures defined in the LCIA methods for air pollution, human health and mined abiotic resources are applicable in South Africa. However, the relevance of the methods is reduced for categories that affect ecosystem quality, as ecosystems differ significantly between South Africa and the European continent. The methods are especially limited with respect to water and land resources.
Normalisation and weighting procedures may also be difficult to adapt to South African conditions, due to the lack of background information and to social, cultural and political differences. Recommendations and Outlook  Further research is underway to develop a framework for a South African LCIA procedure, which will be adapted from the available European procedures. The wool SLCA will be revisited to evaluate and compare the proposed framework against the existing LCIA procedures.

2.
Timely release and communication of critical test results may have a significant impact on medical decisions and subsequent patient outcomes. Laboratories therefore have an important responsibility for, and contribution to, patient safety. Certification, accreditation and regulatory bodies also require that laboratories follow procedures to ensure patient safety, but there is limited guidance on best practices. In Australasia, no specific requirements exist in this area, and critical result reporting practices have been shown to be heterogeneous worldwide. Recognising the need for agreed standards and critical limits, the AACB started a quality initiative to harmonise critical result management throughout Australasia. The first step toward harmonisation is to understand current laboratory practices. Fifty-eight Australasian laboratories responded to a survey and 36 laboratories shared their critical limits. Findings from this survey are compared with international practices reviewed in surveys conducted elsewhere. For the successful operation of a critical result management system, critical tests and critical limits must be defined in collaboration with clinicians. Reporting procedures must specify how critical results are identified; who can report and who can receive critical results; the acceptable timeframe within which results must be delivered and, if reporting fails, the escalation procedures that should follow; which communication channels or systems should be used; what should be recorded and how; and how critical result procedures should be maintained and evaluated to assess their impact on outcomes. In this paper we review the literature on current standards and recommendations for critical result management. Key elements of critical result reporting are discussed in view of the findings of various national surveys of existing laboratory practices, including data from our own survey in Australasia.
Best practice recommendations are made that laboratories are expected to follow in order to provide a high-quality, safe service to patients.

3.
To evaluate the glucose control system correctly, it is crucial to account for both insulin sensitivity and insulin secretion. The disposition index (DI) is the most widely accepted method of doing so. The original paradigm (the hyperbolic law) is the multiplicative product of indices of insulin sensitivity and secretion; more recently, an alternative formula with an exponent α (the power function law) has been proposed. Traditionally, curve-fitting approaches have been used to evaluate the DI in a population, but their algorithmic implementations often introduce critical issues, such as the assumption that one of the two indices is error free, or the effects of the log transformation on the measurement errors. In this work, we review the commonly used approaches and show that they provide biased estimates. We then propose a novel nonlinear total least squares (NLTLS) approach, which does not need the approximations built into the previously proposed alternatives, and show its superiority. All of the traditional fitting procedures, including NLTLS, account only for the uncertainty affecting the insulin sensitivity and secretion indices when they are estimated from noisy data. Thus, they fail when part of the observed variability is due to inherent differences in DI values between individuals. To handle this inevitable source of variability, we propose a nonlinear mixed-effects approach that describes the DI using population hyperparameters such as the population typical values and covariance matrix. On simulated data, this novel technique is much more reliable than the curve-fitting approaches, and it proves robust even when no or only small population variability is present in the DI values. Applying this new approach to the analysis of real IVGTT data suggests a value of α significantly smaller than 1, supporting the importance of testing the power function law as an alternative to the simpler hyperbolic law.
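The estimation issue the abstract describes can be reproduced in a few lines. The sketch below is a toy simulation (all parameter values are assumptions, not taken from the paper) that fits the power-function law on log-transformed data in two ways: ordinary least squares, which treats insulin sensitivity as error free, and orthogonal distance regression, a total-least-squares-style fit that allows noise in both indices. It illustrates the bias only; it is not the authors' NLTLS or mixed-effects implementation.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(0)

# Simulate a population that obeys the power-function law exactly:
# secretion = DI * SI**(-alpha)   (alpha = 1 recovers the hyperbolic law)
alpha_true, di_true, n = 0.8, 2.0, 200
si = rng.lognormal(mean=0.0, sigma=0.5, size=n)       # insulin sensitivity
sec = di_true * si ** (-alpha_true)                   # insulin secretion
# Multiplicative measurement noise on BOTH indices (the realistic case)
si_obs = si * rng.lognormal(0.0, 0.2, n)
sec_obs = sec * rng.lognormal(0.0, 0.2, n)

# Naive fit: ordinary least squares on the log-transformed data.
# Treating log(SI) as error-free biases the slope towards zero.
slope_ols, _ = np.polyfit(np.log(si_obs), np.log(sec_obs), 1)

# Total-least-squares flavour: orthogonal distance regression on the logs,
# which allows for errors in both variables.
line = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(np.log(si_obs), np.log(sec_obs))
fit = odr.ODR(data, line, beta0=[-1.0, 0.0]).run()
slope_odr = fit.beta[0]

print(f"true alpha = {alpha_true}: OLS gives {-slope_ols:.2f}, ODR gives {-slope_odr:.2f}")
```

With noise on both axes the OLS slope is attenuated towards zero, while the errors-in-variables fit stays close to the true exponent, which is the qualitative point of the paper's comparison.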

4.
Denitrification causes important losses of N-fertilizer in rice fields, where high temperature and high production of organic matter favour denitrification losses. Two techniques have been used to quantify these losses: the 15N technique, which quantifies the amount of N finally incorporated, and the acetylene inhibition technique, which directly measures the quantities lost. Both techniques were applied in enclosures (diameter = 44 cm) in the field, while bio-assays in 3 l glass beakers were also carried out. In all experiments where nitrate was added we found a rapid decrease of nitrate; usually about 30–50% of the nitrate that disappeared was recovered as N2O. In the one experiment in which we also measured the N2O disappearance rate, the N2O itself decreased at a rather constant rate of 20% per day, so a correction must be made for this N2O decrease when calculating the nitrate disappearance rate. Although we have only one series in which the decrease of N2O was measured, the mathematical analysis indicates that as much as 80% of the N-fertilizer is actually lost. This figure is in full agreement with the 15N experiments: if the 15N was applied early, only about 7% was recovered in soil and plants, while if it was applied later (after 7 weeks) about 20% was incorporated. The denitrification rate could be fitted to a negative exponential regression curve; the rate constant increased during the summer, and it is suggested that organic matter caused this increase. During denitrification considerable quantities of nitrite appear, which later disappear again by processes still unknown; the nature of the available organic matter may be important for this nitrite production. With N-serve we tried to inhibit NH3 oxidation, both to prevent the considerable N losses and to demonstrate that the nitrite produced in our experiments was not derived from NH3 oxidation. N-serve, however, had very little influence.
It is probably inactivated by adsorption onto the sediments. These results suggest that the efficiency of N-application may be considerably increased by using low doses of N-fertilizer applied late in the growing season, e.g. 7 weeks after sowing. This favours environmental protection as well.

5.
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM, however, does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, the calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates, the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output, in a manner which also yields the output's probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the partial derivatives associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value.
Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
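The spreadsheet MCS procedure described above translates directly into a few lines of code. The sketch below is a minimal illustration with a made-up functional relationship y = ax/b, where a plays the role of an empirically derived 'constant' carrying its own standard uncertainty; all numerical values are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of Monte Carlo trials (rows in a spreadsheet)

# Hypothetical functional relationship y = (a * x) / b, where a is an
# empirically derived 'constant' with its own standard uncertainty.
# Each input is drawn from a normal distribution whose standard deviation
# equals its standard uncertainty (e.g. estimated from IQC data).
x = rng.normal(5.0, 0.10, N)   # measured input quantity, u(x) = 0.10
a = rng.normal(2.0, 0.05, N)   # empirical constant,      u(a) = 0.05
b = rng.normal(4.0, 0.08, N)   # second input quantity,   u(b) = 0.08

y = (a * x) / b                # propagate through the functional relationship

print(f"y = {y.mean():.3f}  combined standard uncertainty u(y) = {y.std(ddof=1):.3f}")
```

For this relationship, first-order GUM propagation gives u(y)/y = sqrt((0.05/2)^2 + (0.10/5)^2 + (0.08/4)^2) ≈ 3.8%, i.e. u(y) ≈ 0.094; the simulation reproduces this without any partial derivatives, which is the point of the MCS approach.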

6.
Cardiopulmonary bypass (CPB) procedures require a blood-gas exchanger (oxygenator) to temporarily replace the respiratory function of the lungs. In the past the majority of CPB procedures were carried out with bubble oxygenators, which effect gas exchange by dispersing bubbles into the blood. Membrane oxygenators, on the other hand, place a hydrophobic gas-permeable membrane between the blood and gas phases. Bubble oxygenators are being superseded by membrane types for CPB owing to improvements in membrane technology and mass transfer efficiency. These advances are reviewed in this paper and are illustrated by reference to the gas exchange and operating characteristics of a number of clinical oxygenators designed for adult CPB. Membrane oxygenators are also being used for long-term support in the treatment of acute respiratory failure. Operated in a partial bypass circuit, the oxygenator may have to function for several days or weeks. In one particular treatment method, the rate of spontaneous breathing is controlled by the partial or total removal of the metabolic CO2 production by the membrane oxygenator. For this method, known as extracorporeal CO2 removal (ECCO2R), the oxygenator must be optimized for CO2 transfer at low blood flow rates. The suitability of clinical oxygenators for ECCO2R is discussed in terms of gas exchange and functionality over prolonged operation.

7.
The tumour control probability (TCP) is a formalism derived to compare treatment regimens of radiation therapy, defined as the probability that, given a prescribed dose of radiation, a tumour has been eradicated or controlled. In the traditional view of cancer, all cells share the ability to divide without limit and thus have the potential to generate a malignant tumour. However, an emerging notion is that only a sub-population of cells, the so-called cancer stem cells (CSCs), is responsible for the initiation and maintenance of the tumour. A key implication of the CSC hypothesis is that these cells must be eradicated to achieve cures; we therefore define TCP_S as the probability of eradicating the CSCs for a given dose of radiation. A cell-surface protein expression profile, such as CD44high/CD24low for breast cancer or CD133 for glioma, is often used as a biomarker to monitor CSC enrichment. However, it is increasingly recognized that not all cells bearing this expression profile are necessarily CSCs; in particular, early generations of progenitor cells may share the same phenotype. Thus, given the lack of a perfect biomarker for CSCs, we also define a novel, measurable TCP_CD+: the probability of eliminating or controlling the biomarker-positive cells. Based on these definitions, we use stochastic methods and numerical simulations, parameterized for the case of gliomas, to compare the theoretical TCP_S with the measurable TCP_CD+. We also use the measurable TCP to compare the effect of various radiation protocols.
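A minimal sketch of the distinction between TCP for the whole tumour and TCP_S for the stem-cell compartment can be built from the standard Poisson TCP formula with linear-quadratic cell survival. All parameter values below (alpha, beta, cell numbers) are illustrative assumptions, and the deterministic Poisson formula stands in for the paper's full stochastic treatment.

```python
import numpy as np

def surviving_fraction(d, n_fx, alpha=0.3, beta=0.03):
    """Linear-quadratic survival after n_fx fractions of d Gy each."""
    return np.exp(-n_fx * (alpha * d + beta * d ** 2))

def tcp(n_cells, d, n_fx, **lq):
    """Poisson TCP: probability that no clonogenic cell survives."""
    return np.exp(-n_cells * surviving_fraction(d, n_fx, **lq))

n_total = 1e9   # all tumour cells (illustrative)
n_csc = 1e6     # cancer stem cell sub-population (illustrative)

for n_fx in (25, 30, 35):
    print(f"{n_fx} x 2 Gy:  TCP(all cells) = {tcp(n_total, 2.0, n_fx):.3f}  "
          f"TCP_S(CSCs only) = {tcp(n_csc, 2.0, n_fx):.3f}")
```

Because the CSC compartment is smaller, TCP_S reaches a given control level at a lower total dose than the all-cell TCP; if only CSCs need to be eradicated, the conventional TCP underestimates the achievable control for a given schedule.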

8.
Scientific investigations depend on the reliability of the observations that can be made. This reliability is determined in part by the understanding of the techniques and technology used to make the observations: the limitations and the strengths of the methodology and the equipment used must be evaluated thoroughly. The extent to which this is, and has been, the case for the use of metal-based stains in neuroscience is the subject of this paper. I evaluate the metallic stains used in neuroscience from several perspectives. I briefly review the state of neurohistology prior to its "golden years," 1870-1910. I then trace the development of the silver-based stains used for neurohistology. I had wanted to discuss the reasoning used by the originators of the silver-based techniques in developing their specific procedures, but discovered that while procedures may be published, the methods and ideas used to arrive at the final procedures are usually not described in published work.

9.
In this article, we address a missing data problem that occurs in transplant survival studies. Recipients of organ transplants are followed up from transplantation and their survival times recorded, together with various explanatory variables. Owing to differences in data collection procedures between centers or over time, a particular explanatory variable (or set of variables) may only be recorded for certain recipients, which results in this variable being missing for a substantial number of records in the data. The variable may also turn out to be an important predictor of survival, so it is important to handle this missing-by-design problem appropriately. The consensus in the literature is to handle this problem with complete-case analysis, as the missing data are assumed to arise under an appropriate missing-at-random mechanism that yields consistent estimates; specifically, the missing values can reasonably be assumed not to be related to the survival time. In this article, we investigate the potential of multiple imputation to handle this problem in a relevant study on survival after kidney transplantation, and show that it comprehensively outperforms complete-case analysis on a range of measures. This is a particularly important finding in the medical context, as imputing large amounts of missing data is often viewed with scepticism.

10.
Since apoptosis is impaired in malignant cells overexpressing prosurvival Bcl-2 proteins, drugs mimicking their natural antagonists, the BH3-only proteins, might overcome chemoresistance. Small-molecule inhibitors of Bcl-XL function have been discovered from diverse structural classes using rational drug design as well as high-throughput screening (HTS) approaches. However, most of the BH3 mimetics identified via screening based on fluorescence polarization display an affinity for their presumed protein targets that is far lower than that of BH3-only proteins. It is therefore important to establish a simple and inexpensive secondary platform for hit validation, which is pertinent to current efforts to develop compounds that mimic the action of BH3-only proteins as novel anticancer agents. These considerations prompted us to explore the differential scanning fluorimetry (DSF) method, which is based on the energetic coupling between ligand binding and protein unfolding. We have systematically tested known Bcl-XL/Bcl-2 inhibitors using DSF and have revealed distinct subsets of inhibitors. More importantly, we report that some of these inhibitors interacted selectively with glutathione S-transferase-tagged Bcl-XL, whereas certain inhibitors exhibited marked selectivity towards native untagged Bcl-XL. We therefore propose that the affinity tag may cause a significant conformational switch in Bcl-XL which results in the selectivity for certain subsets of small-molecule inhibitors. This finding also implies that previous screens involving tagged proteins need to be carefully reexamined, and that further investigations must ensure that the right conformation of the protein is used in future screens.

11.
A comparative study of four function-fitting methods for time-series EVI, based on a sample area in the Qinling Mountains   Cited by: 3 (self-citations: 0; citations by others: 3)
Liu Yanan, Xiao Fei, Du Yun. Acta Ecologica Sinica (《生态学报》) 2016, 36(15): 4672-4679
Function curve fitting is an important method for reconstructing vegetation-index time series and has been widely applied to monitoring forest-area dynamics, estimating crop yields, extracting phenological information from remote sensing, and studying the ecosystem carbon cycle. Using multi-year MODIS EVI data and the accompanying quality-control data for a sample area in the Qinling Mountains, we examined and improved methods for evaluating how well time-series EVI reconstruction suppresses noisy points while preserving the original high-quality observations. On this basis we compared three commonly used methods: asymmetric Gaussian fitting (AG), double-logistic fitting (DL) and single-logistic fitting (SL). Building on the SL method, we adjusted the model form, redefined the meaning of the parameter d, and propose an extremum-optimized single-logistic fitting method (MSL), which we compared with the other three methods. The results show that AG and DL differ little overall in suppressing noisy points and retaining the original high-quality data, although AG fits some pixels better; MSL and SL clearly outperform AG and DL; and in mountainous areas with complex terrain and climate and noisier vegetation-index data, MSL shows the best applicability.

12.
J. Cladera, J. Torres, E. Padrós. Biophysical Journal 1996, 70(6): 2882-2887
The conformation of bacterioopsin in the apomembrane has been studied by Fourier transform infrared spectroscopy. Resolution-enhancement techniques and curve-fitting procedures were used to determine the secondary structural components from the amide I region. Bacterioopsin contains about 54% helical structure (alpha-I and alpha-II helices plus 3(10) turns), 21% sheets, 16% reverse turns, and 9% unordered structure. Thus, after retinal removal, all of the secondary structural types of bacteriorhodopsin remain present, and only slight quantitative differences appear. On the other hand, H/D exchange studies show a higher degree of exchange for reverse turns and protonated carboxylic side chains in bacterioopsin than in bacteriorhodopsin. This lends further support to the idea of a more open tertiary structure in bacterioopsin, and to the view of the retinal molecule as an important element complementing the interhelical interactions in bacteriorhodopsin folding.
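The curve-fitting step mentioned in the abstract, decomposing the amide I band into overlapping components whose relative areas estimate secondary-structure fractions, can be sketched as below. The band positions, widths and amplitudes are illustrative assumptions applied to synthetic data, not the paper's spectra.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *p):
    """Sum of Gaussian components; p = (A1, c1, w1, A2, c2, w2, ...)."""
    y = np.zeros_like(x)
    for A, c, w in zip(p[0::3], p[1::3], p[2::3]):
        y = y + A * np.exp(-((x - c) / w) ** 2)
    return y

# Synthetic amide I band (wavenumber in cm^-1) built from components near
# canonical positions: helix ~1655, sheet ~1630, turns ~1672 (illustrative).
x = np.linspace(1600, 1700, 500)
true_params = (1.0, 1655, 9, 0.4, 1630, 8, 0.3, 1672, 7)
y = gaussians(x, *true_params)
y = y + np.random.default_rng(1).normal(0, 0.005, x.size)

p0 = (0.8, 1652, 10, 0.5, 1632, 10, 0.2, 1670, 10)   # initial guesses
popt, _ = curve_fit(gaussians, x, y, p0=p0)

# Relative band areas (the area of each Gaussian is proportional to A * w)
areas = np.abs(popt[0::3] * popt[2::3])
for name, frac in zip(("helix", "sheet", "turns"), areas / areas.sum()):
    print(f"{name}: {100 * frac:.0f}% of the amide I area")
```

In practice the component positions come from resolution-enhanced (e.g. second-derivative or deconvolved) spectra, and the fitted relative areas are read as the secondary-structure percentages reported in the abstract.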

13.
Dominant genetic markers such as AFLPs and RAPDs are usually analyzed on the basis of the presence or absence of a band on an electrophoretic gel. This type of analysis does not allow a distinction between dominant homozygotes and heterozygotes. Such a distinction is possible based on quantitative measurement of band intensities. In the present paper, we consider the problem of analyzing dominant markers based on band-intensity data. The basic step in mapping a marker is to assess its recombination frequency with other markers; ordering markers on a map can then be done using a number of standard procedures. For this reason, estimation of the recombination frequency is the main focus of the present paper. The method is demonstrated for the case of an F2 population. By simulation we investigate its accuracy and compare it with the standard estimation based on dominant scoring for band presence/absence. There are a number of potential applications. For example, the map may be used to locate quantitative trait loci (QTLs), applying standard procedures modified to account for uncertainty of the marker genotype. Moreover, map information can be used to determine the most likely genotype at a marker, given its band intensity and the band intensities at flanking markers. Received: 2 May 2000 / Accepted: 6 December 2000
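For orientation, the "standard estimation based on dominant scoring" that the paper benchmarks against can be sketched as a maximum-likelihood fit of the recombination frequency r from the four F2 phenotype classes of two dominant markers in coupling phase. The simulated counts and the true r below are assumptions for illustration; the paper's band-intensity refinement is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def class_probs(r):
    """F2 phenotype-class probabilities for two dominant markers in coupling:
    classes (A_B_, A_bb, aaB_, aabb) as functions of recombination freq r."""
    q = (1 - r) ** 2 / 4          # P(aabb) = ((1 - r) / 2) ** 2
    return np.array([0.5 + q, 0.25 - q, 0.25 - q, q])

rng = np.random.default_rng(3)
r_true, n = 0.2, 500
counts = rng.multinomial(n, class_probs(r_true))   # simulated F2 counts

def neg_loglik(r):
    return -np.sum(counts * np.log(class_probs(r)))

res = minimize_scalar(neg_loglik, bounds=(1e-4, 0.5 - 1e-4), method="bounded")
r_hat = res.x
print(f"ML estimate of recombination frequency: {r_hat:.3f} (simulated with r = {r_true})")
```

Band-intensity scoring improves on this by resolving heterozygotes within the dominant classes, which sharpens the likelihood and hence the estimate of r.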

14.
The standard approach to the definition of physical quantities has not produced satisfactory results for the concepts of information and meaning. In the case of information we have at least two unrelated definitions, while in the case of meaning we have no definition at all. Here it is shown that both information and meaning can be defined by operative procedures, but it is also pointed out that we need to recognize them as a new type of natural entity. They are not quantities (neither fundamental nor derived) because they cannot be measured, and they are not qualities because they are not subjective features. It is proposed here to call them nominable entities, i.e., entities which can be specified only by naming their components in their natural order. If the genetic code is not a linguistic metaphor but a reality, we must conclude that information and meaning are real natural entities, and we must also conclude that they are not equivalent to the quantities and qualities of our present theoretical framework. This gives us two options. One is to extend the definition of physics and say that the list of its fundamental entities must include information and meaning. The other is to say that physics is the science of quantities only, in which case information and meaning become the exclusive province of biology. The boundary between physics and biology, in short, is a matter of convention, but the existence of information and meaning is not. We can decide to study them in the framework of an extended physics or in a purely biological framework, but we cannot avoid studying them for what they are, i.e., as fundamental components of the fabric of Nature.

15.
Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. Such measures are challenging for two reasons: first, they must take into account noise correlations, which can have a large influence on reliability; second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it only provides a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability which consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator which measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, typically denoted by I_shuffle and I_diag, respectively.
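The contrast the abstract draws can be illustrated with the simplest "direct" quantity: the plug-in estimate of the population sensitivity d'^2 = dmu' C^-1 dmu, computable when trials outnumber neurons. The sketch below uses simulated correlated population responses with assumed parameters; it shows the plug-in version only, without the bias corrections a practical direct estimator would include.

```python
import numpy as np

rng = np.random.default_rng(7)
N, T = 20, 200   # neurons, trials per stimulus condition (T > N required)

# Two stimulus conditions: mean responses mu0/mu1, shared noise covariance C
mu0 = np.zeros(N)
mu1 = np.full(N, 0.3)
C = 0.2 * np.eye(N) + 0.05          # uniformly correlated noise

L = np.linalg.cholesky(C)
r0 = mu0 + rng.standard_normal((T, N)) @ L.T
r1 = mu1 + rng.standard_normal((T, N)) @ L.T

# Direct (plug-in) estimate of encoding reliability: d'^2 = dmu' S^-1 dmu
dmu = r1.mean(axis=0) - r0.mean(axis=0)
resid = np.vstack([r0 - r0.mean(axis=0), r1 - r1.mean(axis=0)])
S = np.cov(resid.T)                  # pooled noise covariance estimate
dprime2_hat = dmu @ np.linalg.solve(S, dmu)

# Ground truth from the generative parameters, for comparison
dprime2_true = (mu1 - mu0) @ np.linalg.solve(C, mu1 - mu0)
print(f"true d'^2 = {dprime2_true:.2f}  plug-in estimate = {dprime2_hat:.2f}")
```

The plug-in estimate is available in closed form and requires no cross-validation splits, which is why direct estimators of this family are both faster and less wasteful of trials than decoding-based measures; correcting their finite-sample bias is the refinement the paper addresses.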

16.
The acid-base behavior of amino acids is an important subject of study, owing to their prominent role in enzyme catalysis, substrate binding and protein structure. Because of interactions with the protein environment, their pKas can be shifted from their solution values and, if a protein has two stable conformations, a residue may have different "microscopic", conformation-dependent pKa values. In those cases, interpretation of experimental measurements of the pKa is complicated by the coupling between pH, protonation state and protein conformation. We explored these issues using Nitrophorin 4 (NP4), a protein that releases NO in a pH-sensitive manner. At pH 5.5, NP4 is in a closed conformation in which NO is tightly bound, while at pH 7.5 Asp30 becomes deprotonated, causing the conformation to change to an open state from which NO can easily escape. Using constant-pH molecular dynamics we found two distinct microscopic Asp30 pKas: 8.5 in the closed structure and 4.3 in the open structure. Using a four-state model, we then related the obtained microscopic values to the experimentally observed "apparent" pKa, obtaining a value of 6.5, in excellent agreement with experimental data. This value must be interpreted as the pH at which the closed-to-open population transition takes place. More generally, our results show that it is possible to relate microscopic, structure-dependent pKa values to the experimentally observed, ensemble-dependent apparent pKas, and that the insight gained in the relatively simple case of NP4 can be useful in more complex cases involving a pH-dependent transition, of great biochemical interest.
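The four-state reasoning can be made concrete with statistical weights for the closed/open and protonated/deprotonated states, linked by a thermodynamic cycle. The microscopic pKas below are the ones quoted in the abstract, but the closed-open equilibrium constant of the protonated form (Kc_H) is a free, illustrative assumption chosen so the transition midpoint falls between the microscopic values; this sketch is not the authors' constant-pH MD calculation.

```python
from scipy.optimize import brentq

pKa_closed, pKa_open = 8.5, 4.3   # microscopic pKas quoted in the abstract
Kc_H = 6e-3                        # closed<->open equilibrium of the
                                   # protonated form -- illustrative assumption

def frac_open(pH):
    """Open-conformation population in the four-state model.
    Statistical weights are relative to the deprotonated closed state."""
    w_CH = 10 ** (pKa_closed - pH)                   # closed, protonated
    w_C = 1.0                                        # closed, deprotonated
    w_OH = Kc_H * w_CH                               # open, protonated
    w_O = Kc_H * 10 ** (pKa_closed - pKa_open)       # open, deprotonated
    return (w_OH + w_O) / (w_CH + w_C + w_OH + w_O)  # (cycle fixes w_O)

# Apparent pKa = pH at which the closed-to-open transition is half complete
pKa_app = brentq(lambda pH: frac_open(pH) - 0.5, 3.0, 11.0)
print(f"apparent pKa = {pKa_app:.1f}, between the microscopic values 4.3 and 8.5")
```

Whatever value is chosen for the conformational equilibrium, the cycle forces the observed midpoint to lie between the two microscopic pKas, which is exactly how the abstract says the apparent value of 6.5 should be interpreted.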

17.
Neuron models, in particular conductance-based compartmental models, often have numerous parameters that cannot be directly determined experimentally and must be constrained by an optimization procedure. A common practice in evaluating the utility of such procedures is using a previously developed model to generate surrogate data (e.g., traces of spikes following step current pulses) and then challenging the algorithm to recover the original parameters (e.g., the value of maximal ion channel conductances) that were used to generate the data. In this fashion, the success or failure of the model fitting procedure to find the original parameters can be easily determined. Here we show that some model fitting procedures that provide an excellent fit in the case of such model-to-model comparisons provide ill-balanced results when applied to experimental data. The main reason is that surrogate and experimental data test different aspects of the algorithm’s function. When considering model-generated surrogate data, the algorithm is required to locate a perfect solution that is known to exist. In contrast, when considering experimental target data, there is no guarantee that a perfect solution is part of the search space. In this case, the optimization procedure must rank all imperfect approximations and ultimately select the best approximation. This aspect is not tested at all when considering surrogate data since at least one perfect solution is known to exist (the original parameters) making all approximations unnecessary. Furthermore, we demonstrate that distance functions based on extracting a set of features from the target data (such as time-to-first-spike, spike width, spike frequency, etc.)—rather than using the original data (e.g., the whole spike trace) as the target for fitting—are capable of finding imperfect solutions that are good approximations of the experimental data.
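The feature-based distance functions discussed above can be sketched in a few lines: extract a small set of features from the target and candidate traces and sum their normalized squared differences, instead of comparing the traces point by point. The "traces" below are toy sinusoids standing in for voltage recordings, and the feature set and scale factors are illustrative assumptions.

```python
import numpy as np

def spike_features(t, v, thresh=0.0):
    """Extract a few simple features from a 'voltage' trace (illustrative)."""
    idx = np.flatnonzero((v[:-1] < thresh) & (v[1:] >= thresh))
    spikes = t[idx]                       # upward threshold crossings
    return {
        "spike_count": float(len(spikes)),
        "time_to_first_spike": spikes[0] if len(spikes) else np.inf,
        "mean_isi": np.diff(spikes).mean() if len(spikes) > 1 else np.inf,
    }

def feature_distance(fa, fb, scales):
    """Sum of squared feature differences, each normalised by a typical scale."""
    return sum(((fa[k] - fb[k]) / scales[k]) ** 2 for k in scales)

# Toy traces: same firing pattern, the candidate shifted by 2 ms
t = np.linspace(0.0, 1.0, 10_000)
target = np.sin(2 * np.pi * 10 * t - np.pi / 2)          # "experimental" trace
candidate = np.sin(2 * np.pi * 10 * (t - 0.002) - np.pi / 2)

d_trace = np.mean((target - candidate) ** 2)             # point-by-point distance
scales = {"spike_count": 1.0, "time_to_first_spike": 0.01, "mean_isi": 0.01}
d_feat = feature_distance(spike_features(t, target),
                          spike_features(t, candidate), scales)
print(f"trace MSE = {d_trace:.4f}  feature distance = {d_feat:.3f}")
```

Here both traces have identical spike counts and inter-spike intervals, so the feature distance penalizes only the small timing offset; a point-by-point trace distance instead penalizes every sample, which is why feature-based distances tolerate the imperfect-but-good approximations that experimental fitting requires.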

18.

Cyanobacteria are photosynthetic prokaryotes that can fix atmospheric CO2 and can be engineered to produce industrially important compounds such as alcohols, free fatty acids and alkanes used in next-generation biofuels, and commodity chemicals such as ethylene or farnesene. They can be easily genetically manipulated, have minimal nutrient requirements, and are quite tolerant of abiotic stress, making them an appealing alternative to other biofuel-producing microbes, which require additional carbon sources, and to plants, which compete with food crops for arable land. Many of the compounds produced in cyanobacteria become toxic as titers increase, which can slow growth, reduce production, and decrease overall biomass. Additionally, many factors associated with outdoor culturing of cyanobacteria, such as UV exposure and fluctuations in temperature, can limit their production potential. For cyanobacteria to be utilized successfully as biofactories, tolerance to these stressors must be increased and the ameliorating stress responses must be enhanced. Genetic manipulation, directed evolution, and supplementation of culture media with antioxidants are all viable strategies for designing more robust cyanobacterial strains with the potential to meet industrial production goals.


19.
The steady-state velocity equation for a bireactant enzyme in the presence of a partial inhibitor or nonessential activator, M, contains squared substrate-concentration terms and higher-order terms in the concentration of M. The equation is too complex to be useful in kinetic analyses. Simplification by the method of Cha (J. Biol. Chem. 243, 820–825 (1968)) eliminates the squared substrate-concentration terms but retains the higher-order terms in [M]. It is shown that if strict equilibrium is assumed between free E, M, and EM, and for all but one other M-binding reaction, a velocity equation is obtained for an ordered bireactant enzyme that is first degree in all ligands in the absence of products. The equation is an approximation (because it was derived assuming only one M-binding reaction in the steady state), but it contains five inhibition (or activation) constants associated with M, all of which can be obtained by diagnostic replots and/or curve-fitting procedures. The equation also provides a framework for obtaining the limiting constants (V1max, K1ia, K1mA, K1mB) that characterize the enzyme at saturating M. The same approach is applicable to an enzyme that catalyzes a steady-state ping-pong reaction.

20.
Cryobiology 2016, 72(3): 384-390
Cryopreservation is a technique that has been extensively used for the storage of multipotent mesenchymal stromal cells (MSCs) in regenerative medicine, so improving current cryopreservation procedures in terms of cell viability and functionality is important. In this study, we optimized the cryopreservation protocol for MSCs derived from the common marmoset Callithrix jacchus (cj), which can be used as a non-human primate model in various pathological and transplantation studies and has great potential for regenerative medicine. We investigated the effect of actively controlling the nucleation temperature, using induced nucleation over a broad range of temperatures and two dimethyl sulfoxide concentrations (Me2SO; 5% (v/v) and 10% (v/v)), on the viability, metabolic activity and recovery of cells after thawing. Survival rate and metabolic activity displayed an optimum when ice formation was induced at −10 °C. Cryomicroscopy studies indicated differences in ice-crystal morphology as well as in intracellular ice formation at different nucleation temperatures. High subzero nucleation temperatures resulted in larger extracellular ice crystals and cellular dehydration, whereas low subzero nucleation temperatures resulted in smaller ice crystals and intracellular ice formation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号